| Column | Type | Values / length range |
|:--------------|:--------------|:-----------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | (always null) |
| tags | listlengths | 1–1.84k |
| sha | null | (always null) |
| created_at | stringlengths | 25–25 |
null
null
{}
SemValX/acoustic_model0_cv_17_fr_XLSR-53
null
[ "region:us" ]
null
2024-04-24T11:21:47+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LumousInTheWild/image_captioning_1
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:23:16+00:00
null
peft
# Llama3_8B_Odia_Unsloth

Llama3_8B_Odia_Unsloth is a fine-tuned Odia large language model with 8 billion parameters, based on Llama3. The model is fine-tuned on a comprehensive [171k Odia instruction set](https://huggingface.co/datasets/OdiaGenAI/all_combined_odia_171k), encompassing domain-specific and cultural nuances. The fine-tuning process leverages Unsloth to speed up training.

For more details about the model, data, training procedure, and evaluations, see the blog [post](https://www.odiagenai.org/blog/odiagenai-releases-llama3-fine-tuned-model-for-the-odia-language).

## Model Description

* Model type: An 8B fine-tuned model
* Primary Language(s): Odia and English
* License: Llama3

## Inference

Sample inference script.

### Installation

```
# Install Unsloth (Colab/Jupyter notebook cell: %%capture and !pip are notebook magics)
%%capture
import torch
major_version, minor_version = torch.cuda.get_device_capability()
# Must install separately since Colab has torch 2.2.1, which breaks packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass
```

### Model loading

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "OdiaGenAI-LLM/Llama3_8B_Odia_Unsloth",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```

### Inference

```python
FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "କୋଭିଡ୍ 19 ର ଲକ୍ଷଣଗୁଡ଼ିକ କ’ଣ?",  # instruction
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 512, use_cache = True)
tokenizer.batch_decode(outputs)
```

### Citation Information

If you find this model useful, please consider giving 👏 and citing:

```
@misc{Llama3_8B_Odia_Unsloth,
  author = {Shantipriya Parida and Sambit Sekhar and Debasish Dhal and Shakshi Panwar},
  title = {OdiaGenAI Releases Llama3 Fine-tuned Model for the Odia Language},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```

### Contributions

- Dr. Shantipriya Parida
- Sambit Sekhar
- Debasish Dhal
- Shakshi Panwar
{"license": "llama3", "library_name": "peft", "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "Llama3_8B_Odia_Unsloth", "results": []}]}
OdiaGenAI-LLM/Llama3_8B_Odia_Unsloth
null
[ "peft", "safetensors", "llama", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:llama3", "region:us" ]
null
2024-04-24T11:23:40+00:00
text-generation
transformers
{}
TraderVerse/Llama-2-7b-chat-finetune-sec-fundamentals-qlora
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:24:32+00:00
text-classification
transformers
{}
rahmaabusalma/sentiments_anlysis_first
null
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:27:08+00:00
text-generation
transformers
{}
simonycl/Mistral-7B-Instruct-v0.2-alpaca-baseline
null
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:27:35+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ValiantLabs/Llama3-70B-ShiningValiant2

<!-- provided-files -->
Static quants are available at https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF/resolve/main/Llama3-70B-ShiningValiant2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
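As a concrete starting point, here is a minimal sketch of loading one of these quants locally. It assumes the llama-cpp-python bindings (any GGUF-capable runtime such as llama.cpp or koboldcpp works just as well), and the file name is simply one of the quants from the table above:

```python
# Minimal sketch: run an i1 quant with llama-cpp-python (pip install llama-cpp-python).
# Multi-part files (e.g. the Q6_K .part1of2/.part2of2 pair) must be concatenated
# into a single .gguf file before loading.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3-70B-ShiningValiant2.i1-Q4_K_M.gguf",
    n_ctx=8192,        # context window; lower it if memory is tight
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

result = llm("Briefly explain what an imatrix quant is.", max_tokens=64)
print(result["choices"][0]["text"])
```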
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3", "llama-3-instruct", "llama-3-instruct-70b", "70b", "conversational", "chat", "instruct"], "base_model": "ValiantLabs/Llama3-70B-ShiningValiant2", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct/blob/main/LICENSE", "license_name": "llama3", "model_type": "llama", "quantized_by": "mradermacher"}
mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF
null
[ "transformers", "gguf", "shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3", "llama-3-instruct", "llama-3-instruct-70b", "70b", "conversational", "chat", "instruct", "en", "base_model:ValiantLabs/Llama3-70B-ShiningValiant2", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:28:53+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-toxic2nontoxic-100-100
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:29:00+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: winglian/llama-3-32k-merged
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
hub_model_id: KolaGang/Red_Llama_32_base
hub_strategy: end

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Drewskidang/chatlaw
    type: sharegpt
    conversation: chatml
  - path: Drewskidang/tool
    type: sharegpt
    conversation: chatml
  - path: rxavier/economicus
    type: sharegpt
    conversation: chatml
  - path: KolaGang/mergers
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
eval_sample_packing: False
output_dir: ./out

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

wandb_project: swag_llama
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true

warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero1.json # multi-gpu only
weight_decay: 0.1
fsdp:
fsdp_config:
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```

</details><br>

# Red_Llama_32_base

This model is a fine-tuned version of [winglian/llama-3-32k-merged](https://huggingface.co/winglian/llama-3-32k-merged) on the datasets listed in the axolotl config above. It achieves the following results on the evaluation set:
- Loss: 0.6810

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 5
- gradient_accumulation_steps: 4
- total_train_batch_size: 40
- total_eval_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8855 | 0.02 | 1 | 0.9452 |
| 0.7195 | 0.26 | 16 | 0.7678 |
| 0.6507 | 0.52 | 32 | 0.6943 |
| 0.6398 | 0.79 | 48 | 0.6700 |
| 0.5713 | 1.03 | 64 | 0.6622 |
| 0.5277 | 1.29 | 80 | 0.6616 |
| 0.5166 | 1.55 | 96 | 0.6582 |
| 0.5437 | 1.82 | 112 | 0.6500 |
| 0.3328 | 2.06 | 128 | 0.6977 |
| 0.2989 | 2.32 | 144 | 0.6900 |
| 0.2852 | 2.58 | 160 | 0.6821 |
| 0.2714 | 2.84 | 176 | 0.6810 |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
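For a quick start, here is a minimal loading sketch. The config above trains on ChatML-formatted conversations and registers `<|im_start|>`/`<|im_end|>` tokens, so it assumes the uploaded tokenizer ships a matching chat template (an assumption, not stated in the card):

```python
# Minimal sketch: load the fine-tune and build a ChatML prompt via the tokenizer's
# chat template (assumed to match the <|im_start|>/<|im_end|> tokens in the config).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KolaGang/Red_Llama_32_base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize the key terms of a merger agreement."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```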
{"tags": ["axolotl", "generated_from_trainer"], "base_model": "winglian/llama-3-32k-merged", "model-index": [{"name": "Red_Llama_32_base", "results": []}]}
KolaGang/Red_Llama_32_base
null
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "base_model:winglian/llama-3-32k-merged", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:29:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
CognitoLibera2/model_s9_7b_16
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:31:07+00:00
text-generation
transformers
# Skyrim Fine-Tune for Meta Llama 3 8B Instruct

[<img src="https://staticdelivery.nexusmods.com/mods/1704/images/headers/98631_1692410778.jpg" width="1000"/>](https://art-from-the-machine.github.io/Mantella/)

GGUF models: [https://huggingface.co/art-from-the-machine/Mantella-Skyrim-Llama-3-8B-GGUF](https://huggingface.co/art-from-the-machine/Mantella-Skyrim-Llama-3-8B-GGUF)

## Introduction

This fine-tune of Meta's Llama 3 8B Instruct model is intended to be used as a custom-tailored model for the [Mantella Skyrim mod](https://art-from-the-machine.github.io/Mantella/).

## Dataset

An Alpaca-style dataset of 8,800+ player <-> NPC interactions has been used in this fine-tune. Details on this dataset can be found in the [Mantella LLM Fine-Tuning](https://github.com/art-from-the-machine/Mantella-LLM-Fine-Tuning) repo.

Example:
```json
{
    "instruction": "Generate dialogue in the style of Skyrim.",
    "input": "Where can I find fire salts?",
    "output": "A flame atronach's body might provide fire salt. They're dangerous creatures that can be summoned by wizards. Of course, it would be much easier to check with an alchemist. They occasionally have them for sale."
}
```

## Specifications

- **Developed by:** art-from-the-machine
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Context length:** 8,192

## Example Output

The fine-tuned [Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/art-from-the-machine/Mantella-Skyrim-Llama-3-8B-GGUF/blob/main/Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf) model has been tested in Mantella v0.11.2's debug mode, with the model running locally in koboldcpp. The character tested is Hulda. A list of questions is tested against both the fine-tuned model and an equivalent [Q4_K_M Llama 3 Instruct](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf) base model for comparison. For both models, Mantella's `max_response_sentences` setting has been set to 4.

<details>
<summary><b>Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf</b></summary>

```json
[
    { "role": "assistant", "content": "Ah, welcome to the Bannered Mare! Take a seat by the fire and I'll have one of the girls bring you a drink. What can I get for you?" },
    { "role": "user", "content": "What's life like in Whiterun?" },
    { "role": "assistant", "content": "It's a good life. The people are friendly, the food is good, and the Jarl is fair. Of course, there are some who grumble about the Empire, but I think they're just jealous of the Jarl's fine wine cellar. What about you? What brings you to Whiterun?" },
    { "role": "user", "content": "What do you think about the civil war in Skyrim?" },
    { "role": "assistant", "content": "I think it's a shame. I've lived in Whiterun all my life, and I've seen the Empire bring peace and prosperity to our land. The Stormcloaks just want to tear everything down and start over. I don't think that's the way to build a better future. But what do I know? I'm just a simple innkeeper." },
    { "role": "user", "content": "I'm looking for work. Got any leads?" },
    { "role": "assistant", "content": "Well, the Companions are always looking for new recruits. They're a bit rough, but they're good people. And of course, there's the Jarl's court. If you're looking for work, you might want to try talking to his steward. He's always looking for someone to do odd jobs for him." },
    { "role": "user", "content": "I'm looking for an Elder Scroll. Do you know anything about them?" },
    { "role": "assistant", "content": "Elder Scrolls? Ha! You must be joking. Those are ancient relics, hidden away in the ruins of old. You won't find one in Whiterun, that's for sure. If you're looking for something like that, you should try talking to the Greybeards. They're the ones who live in the mountains and meditate all day." }
]
```
</details>
<br>
<details>
<summary><b>Meta-Llama-3-8B-Instruct-Q4_K_M.gguf</b></summary>

```json
[
    { "role": "assistant", "content": "Ah, welcome to The Bannered Mare! Have a seat by the fire, and I'll have one of the barmaids bring you a pint of our finest ale. What brings you to Whiterun today?" },
    { "role": "user", "content": "What's life like in Whiterun?" },
    { "role": "assistant", "content": "Life in Whiterun is good, thank you for asking. The Jarl's been doing a fine job of keeping the city prosperous. Of course, there's always something going on, whether it's the Companions looking for new members or the Thieves Guild trying to... well, steal things. But overall, it's a good place to live." },
    { "role": "user", "content": "What do you think about the civil war in Skyrim?" },
    { "role": "assistant", "content": "The civil war? Well, I think it's a shame, to be honest. I've got friends on both sides of the conflict, and it's hard to see them fighting each other. But I suppose it's not my place to say who's right or wrong. I just hope it ends soon and we can all get back to living our lives in peace." },
    { "role": "user", "content": "I'm looking for work. Got any leads?" },
    { "role": "assistant", "content": "Work, you say? Well, the city's always looking for people to help with the harvest, and the Jarl's guards could use some extra hands. If you're looking for something more... adventurous, you might want to talk to the Companions. They're always looking for new members to join their ranks. Just be warned, though - it's not for the faint of heart." },
    { "role": "user", "content": "I'm looking for an Elder Scroll. Do you know anything about them?" },
    { "role": "assistant", "content": "Elder Scrolls? Ah, yes... I've heard rumors about those ancient artifacts. Some say they hold the secrets of the world, while others claim they're cursed. I don't know much about them, but I've heard that the Greybeards in the mountains might know more. They're a reclusive bunch, but if you're willing to brave the wilderness, you might be able to learn something from them." }
]
```
</details>

## How to use with Mantella

See [here](https://art-from-the-machine.github.io/Mantella/pages/installation.html#local-models) for full instructions on how to run Mantella with a local LLM.

## Notes

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
art-from-the-machine/Mantella-Skyrim-Llama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "gguf", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:32:52+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
CognitoLibera2/model_s9_7b_18
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:32:59+00:00
null
transformers
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6626144c3892aa32a898c997/Vo3yQozK1__c4VEZpe23z.jpeg)

# Disclaimer

This model is an experimental fine-tune of Llama-3.

## Datasets used:

- unalignment/toxic-dpo-v0.2
- NobodyExistsOnTheInternet/ToxicQAFinal
- Open-Orca/SlimOrca (subset of data)
- PygmalionAI/PIPPA
- MinervaAI/Aesir-Preview

### Model Description

<!-- Provide a longer summary of what this model is. -->
The model is highly uncensored and suitable for roleplay.

## About Us

Building - AI Waifu Supremacy

[X](https://twitter.com/hikikomorihaven) [Discord](https://discord.gg/QS27Ka3cnq)

## Credits:

(For open sourcing tools + methodology to assist with fine-tuning)
- Unsloth
- NurtureAI

(For open sourcing data to be used for fine-tuning)
- NobodyExistsOnTheInternet
- unalignment
- Open-Orca
- PygmalionAI
- MinervaAI
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["unsloth"], "datasets": ["unalignment/toxic-dpo-v0.2", "NobodyExistsOnTheInternet/ToxicQAFinal", "Open-Orca/SlimOrca", "PygmalionAI/PIPPA", "MinervaAI/Aesir-Preview"]}
hikikomoriHaven/llama3-8b-hikikomori-v0.4
null
[ "transformers", "safetensors", "unsloth", "en", "dataset:unalignment/toxic-dpo-v0.2", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "dataset:Open-Orca/SlimOrca", "dataset:PygmalionAI/PIPPA", "dataset:MinervaAI/Aesir-Preview", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:33:32+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Ahmedhany216/Text-to-Code-Generation-Gemma2B-V2
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:33:46+00:00
null
null
WARNING: I DIDN'T MAKE THOSE MODELS! I JUST MADE A COLLECTION OF THE MODELS I WANTED.
{}
scq2x/collection
null
[ "region:us" ]
null
2024-04-24T11:36:00+00:00
text-generation
transformers
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/Z-THA-NDPl3YPUnACsQ0c.png)

Checkpoints of [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B). Head to the main model for more information :)

https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
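Since this repo stores intermediate checkpoints rather than a single final model, a specific checkpoint is typically selected via the `revision` argument. The sketch below assumes checkpoints live on separate branches, and the branch name is hypothetical; substitute one actually present in the repo:

```python
# Minimal sketch: load one intermediate checkpoint from this repo.
# "checkpoint-100" is a hypothetical revision name; list the repo's branches
# on the Hub and substitute a real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Weyaxi/Einstein-v6.1-Llama3-8B-checkpoints"
tokenizer = AutoTokenizer.from_pretrained(repo, revision="checkpoint-100")
model = AutoModelForCausalLM.from_pretrained(
    repo, revision="checkpoint-100", torch_dtype="auto", device_map="auto"
)
```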
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "Einstein-v6.1-Llama3-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.46, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.41, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.19, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 55.1}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 79.32, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": 
"gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B", "name": "Open LLM Leaderboard"}}]}]}
Weyaxi/Einstein-v6.1-Llama3-8B-checkpoints
null
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama3", "conversational", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:36:56+00:00
text-classification
transformers
{"license": "mit"}
Juner/AI-generated-text-detection
null
[ "transformers", "safetensors", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:37:03+00:00
null
transformers
# Uploaded model

- **Developed by:** art-from-the-machine
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
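As the repo id and metadata suggest LoRA adapter weights on top of the 4-bit Llama 3 base named above, a minimal loading sketch with peft might look like this (an assumption about the repo contents, not documented in the card):

```python
# Minimal sketch (assumption: this repo holds LoRA adapter weights for the
# unsloth/llama-3-8b-Instruct-bnb-4bit base model named in the card).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
adapter_id = "art-from-the-machine/Mantella-Skyrim-Llama-3-8B-LoRa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter
```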
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
art-from-the-machine/Mantella-Skyrim-Llama-3-8B-LoRa
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:37:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
CognitoLibera2/model_s9_7b_17
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:37:48+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF/resolve/main/Llama-3-8B-Instruct-Portuguese-v0.4.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality 
quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
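To complement the Usage section above, here is a minimal, untested sketch of downloading and running one of the quants from the table with `huggingface_hub` and `llama-cpp-python`; the package choice, the context size, and the prompt are illustrative assumptions, not part of the original card:

```python
# Minimal sketch: fetch the Q4_K_M file from this repo and run a prompt locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; the prompt is illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF",
    filename="Llama-3-8B-Instruct-Portuguese-v0.4.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Olá! Quem é você?", max_tokens=64)["choices"][0]["text"])
```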
{"language": ["en"], "library_name": "transformers", "datasets": ["adalbertojunior/openHermes_portuguese"], "base_model": "adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Instruct-Portuguese-v0.4-GGUF
null
[ "transformers", "gguf", "en", "dataset:adalbertojunior/openHermes_portuguese", "base_model:adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:38:06+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lsg-lsh-mbart-cc25-16384-128-4-4ep This model is a fine-tuned version of [Auruncus/lsg-lsh-mbart-cc25-16384-128-4-2ep](https://huggingface.co/Auruncus/lsg-lsh-mbart-cc25-16384-128-4-2ep) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5736 - Rouge1: 2.6972 - Rouge2: 1.6893 - Rougel: 2.6528 - Rougelsum: 2.6347 - Gen Len: 135.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
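The card above omits usage code. Since the repository carries custom LSG attention code (note the `custom_code` tag), loading presumably requires `trust_remote_code=True`; a minimal, unverified sketch:

```python
# Minimal summarization sketch; assumes the checkpoint loads through the standard
# seq2seq auto classes and that trust_remote_code=True pulls in the repo's custom
# LSG modeling code. The input text is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Auruncus/lsg-lsh-mbart-cc25-16384-128-4-4ep"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("<long document to summarize>", return_tensors="pt",
                   truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```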
{"language": ["ru"], "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "Auruncus/lsg-lsh-mbart-cc25-16384-128-4-2ep", "model-index": [{"name": "lsg-lsh-mbart-cc25-16384-128-4-4ep", "results": []}]}
Auruncus/lsg-lsh-mbart-cc25-16384-128-4-4ep
null
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "custom_code", "ru", "base_model:Auruncus/lsg-lsh-mbart-cc25-16384-128-4-2ep", "autotrain_compatible", "region:us" ]
null
2024-04-24T11:39:06+00:00
null
null
{"license": "apache-2.0"}
fggsds/fines
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T11:39:39+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description This model is an AI-generated-text detection model trained on real human text and AI-generated text (mainly from Ernie-Bot 4.0, Qwen-Turbo 4.0 and ChatGPT 3.0). It can effectively identify whether a text was generated by artificial intelligence. <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model You can use the sample below to classify a text pair as AI-generated or human-written.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Juner/AI-generated-text-detection-pair")
model = AutoModelForSequenceClassification.from_pretrained("Juner/AI-generated-text-detection-pair")

# Encode the question-answer pair and run the model
question = "你喜欢我吗?"
answer = "是的!我喜欢你!"
inputs = tokenizer(question + answer, padding=True, truncation=True, return_tensors="pt", max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
# Class probabilities; check the repo's config.json for the exact label order
probs = outputs.logits.softmax(dim=-1)
```

[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["zh"], "license": "mit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "\u674e\u767d\uff08701\u5e742\u670828\u65e5\u2014762\u5e7412\u6708\uff09 [28]\uff0c\u5b57\u592a\u767d\uff0c\u53f7\u9752\u83b2\u5c45\u58eb [20]\uff0c\u7956\u7c4d\u9647\u897f\u6210\u7eaa\uff08\u4eca\u7518\u8083\u7701\u79e6\u5b89\u53bf\uff09\uff0c\u51fa\u751f\u4e8e\u8700\u90e1\u7ef5\u5dde\u660c\u9686\u53bf\uff08\u4eca\u56db\u5ddd\u7701\u6c5f\u6cb9\u5e02\u9752\u83b2\u4e61\uff09\uff0c\u4e00\u8bf4\u51fa\u751f\u4e8e\u897f\u57df\u788e\u53f6 [29]\u3002\u5510\u671d\u4f1f\u5927\u7684\u6d6a\u6f2b\u4e3b\u4e49\u8bd7\u4eba\uff0c\u51c9\u6b66\u662d\u738b\u674e\u66a0\u4e5d\u4e16\u5b59 [16] [23]\u3002\u4e3a\u4eba\u723d\u6717\u5927\u65b9\uff0c\u4e50\u4e8e\u4ea4\u53cb\uff0c\u7231\u597d\u996e\u9152\u4f5c\u8bd7\uff0c\u540d\u5217\u201c\u9152\u4e2d\u516b\u4ed9\u201d [2]\u3002\u66fe\u7ecf\u5f97\u5230\u5510\u7384\u5b97\u674e\u9686\u57fa\u8d4f\u8bc6\uff0c\u62c5\u4efb\u7ff0\u6797\u4f9b\u5949 [126]\uff0c\u8d50\u91d1\u653e\u8fd8\uff0c\u6e38\u5386\u5168\u56fd\uff0c\u5148\u540e\u8fce\u5a36\u5bb0\u76f8\u8bb8\u5709\u5e08\u3001\u5b97\u695a\u5ba2\u7684\u5b59\u5973\u3002\u5510\u8083\u5b97\u674e\u4ea8\u5373\u4f4d\u540e\uff0c\u5377\u5165\u6c38\u738b\u4e4b\u4e71\uff0c\u6d41\u653e\u591c\u90ce\uff0c\u8f97\u8f6c\u5230\u8fbe\u5f53\u6d82\u53bf\u4ee4\u674e\u9633\u51b0\u5bb6\u3002\u4e0a\u5143\u4e8c\u5e74\uff0c\u53bb\u4e16\uff0c\u65f6\u5e74\u516d\u5341\u4e8c [16]\u3002\u8457\u6709\u300a\u674e\u592a\u767d\u96c6\u300b [26]\uff0c\u4ee3\u8868\u4f5c\u6709\u300a\u671b\u5e90\u5c71\u7011\u5e03\u300b\u300a\u884c\u8def\u96be\u300b\u300a\u8700\u9053\u96be\u300b\u300a\u5c06\u8fdb\u9152\u300b\u300a\u65e9\u53d1\u767d\u5e1d\u57ce\u300b\u300a\u9ec4\u9e64\u697c\u9001\u5b5f\u6d69\u7136\u4e4b\u5e7f\u9675\u300b\u7b49 [2]\u3002\u674e\u767d\u6240\u4f5c\u8bcd\u8d4b\uff0c\u5c31\u5176\u5f00\u521b\u610f\u4e49\u53ca\u827a\u672f\u6210\u5c31\u800c\u8a00\uff0c\u4eab\u6709\u6781\u4e3a\u5d07\u9ad8\u7684\u5730\u4f4d\uff0c\u540e\u4e16\u8a89\u4e3a\u201c\u8bd7\u4ed9\u201d\uff0c\u4e0e\u8bd7\u5723\u675c\u752b\u5e76\u79f0\u201c\u674e\u675c\u201d\u3002"}, {"text": 
"\u674e\u767d\uff0c\u5b57\u592a\u767d\uff0c\u53f7\u9752\u83b2\u5c45\u58eb\uff0c\u53c8\u53f7\u201c\u8c2a\u4ed9\u4eba\u201d\uff0c\u7956\u7c4d\u9647\u897f\u6210\u7eaa\uff08\u4eca\u7518\u8083\u7701\u79e6\u5b89\u53bf\uff09\uff0c\u5510\u4ee3\u4f1f\u5927\u7684\u6d6a\u6f2b\u4e3b\u4e49\u8bd7\u4eba\uff0c\u88ab\u8a89\u4e3a\u201c\u8bd7\u4ed9\u201d\uff0c\u4e0e\u675c\u752b\u5e76\u79f0\u201c\u674e\u675c\u201d\u3002\u674e\u767d\u4e3a\u4eba\u723d\u6717\u5927\u65b9\uff0c\u7231\u996e\u9152\u4f5c\u8bd7\uff0c\u559c\u4ea4\u53cb\u3002\u4ed6\u6df1\u53d7\u9ec4\u8001\u5217\u5e84\u601d\u60f3\u5f71\u54cd\uff0c\u6709\u201c\u6d4e\u82cd\u751f\u3001\u5b89\u9ece\u5143\u201d\u7684\u653f\u6cbb\u62b1\u8d1f\uff0c\u4f46\u5374\u4ed5\u9014\u4e0d\u987a\uff0c\u53ea\u505a\u8fc7\u4e00\u4e9b\u4ece\u4ed5\u5c0f\u5b98\u3002\u5929\u5b9d\u5143\u5e74\uff08\u516c\u5143742\u5e74\uff09\uff0c\u56e0\u597d\u53cb\u4e3e\u8350\uff0c\u674e\u767d\u88ab\u5510\u7384\u5b97\u53ec\u89c1\uff0c\u4f9b\u5949\u7ff0\u6797\uff0c\u4f46\u4ed6\u5e76\u672a\u83b7\u5f97\u9ad8\u4f4d\u548c\u5b9e\u6743\uff0c\u53ea\u662f\u4f5c\u4e3a\u6587\u5b66\u4f8d\u4ece\u7684\u89d2\u8272\uff0c\u56e0\u6743\u8d35\u7684\u8c17\u6bc1\uff0c\u4e8e\u5929\u5b9d\u4e09\u8f7d\uff08744\u5e74\uff09\u88ab\u6392\u6324\u51fa\u4eac\uff0c\u6b64\u540e\u5728\u6c5f\u6dee\u4e00\u5e26\u76d8\u6853\uff0c\u5386\u7ecf\u78e8\u96be\u3002\u5b89\u53f2\u4e4b\u4e71\u7206\u53d1\u540e\uff0c\u674e\u767d\u56e0\u6c38\u738b\u674e\u7498\u8c0b\u53cd\u6848\u88ab\u7275\u8fde\u800c\u6d41\u653e\u591c\u90ce\uff0c\u9014\u4e2d\u5199\u4e0b\u300a\u65e9\u53d1\u767d\u5e1d\u57ce\u300b\u3002\u4e0d\u4e45\u540e\u53c8\u9047\u8d66\u8fd4\u56de\uff0c\u7ee7\u7eed\u8fc7\u7740\u98d8\u8361\u56db\u65b9\u7684\u6d41\u6d6a\u751f\u6d3b\u3002\u665a\u5e74\u674e\u767d\u6295\u5954\u4ed6\u7684\u65cf\u53d4\u3001\u5f53\u65f6\u5728\u5f53\u6d82\uff08\u4eca\u5c5e\u5b89\u5fbd\uff09\u5f53\u53bf\u4ee4\u7684\u674e\u9633\u51b0\uff0c\u4e0d\u4e45\u5373\u75c5\u901d\uff0c\u4eab\u5e74\u516d\u5341\u4e8c\u5c81\u3002\u674e\u767d\u7684\u8bd7\u6b4c\u521b\u4f5c\u5177\u6709\u6781\u9ad8\u7684\u827a\u672f\u6210\u5c31\u3002\u4ed6\u7684\u8bd7\u4ee5\u6292\u60c5\u4e3a\u4e3b\uff0c\u5584\u4e8e\u4ece\u6c11\u6b4c\u3001\u795e\u8bdd\u4e2d\u6c72\u53d6\u8425\u517b\u7d20\u6750\uff0c\u6784\u6210\u5176\u7279\u6709\u7684\u7470\u4e3d\u7eda\u70c2\u7684\u8272\u5f69\uff0c\u662f\u5c48\u539f\u4ee5\u6765\u79ef\u6781\u6d6a\u6f2b\u4e3b\u4e49\u8bd7\u6b4c\u7684\u65b0\u9ad8\u5cf0\u3002\u4ed6\u5c06\u53d9\u4e8b\u3001\u8bae\u8bba\u3001\u6292\u60c5\u4e09\u8005\u878d\u4e3a\u4e00\u4f53\uff0c\u4ee5\u6c14\u8d2f\u4e4b\uff0c\u65e2\u800c\u5f62\u6210\u4e86\u96c4\u5947\u98d8\u9038\u7684\u98ce\u683c\u3002\u4ed6\u7684\u8bd7\u6b4c\u65e2\u6709\u5927\u6c14\u78c5\u7934\u3001\u5954\u817e\u8df3\u8dc3\u7684\u6c14\u52bf\u548c\u529b\u91cf\uff0c\u53c8\u6709\u58ee\u4e3d\u5947\u4f1f\u7684\u666f\u8c61\uff0c\u5176\u4e2d\u4e5f\u4e0d\u4e4f\u6e05\u65b0\u660e\u5feb\u7684\u53e5\u5b50\u3002\u674e\u767d\u7684\u4e50\u5e9c\u3001\u6b4c\u884c\u53ca\u7edd\u53e5\u6210\u5c31\u4e3a\u6700\u9ad8\u3002\u5176\u6b4c\u884c\uff0c\u5b8c\u5168\u6253\u7834\u8bd7\u6b4c\u521b\u4f5c\u7684\u4e00\u5207\u56fa\u6709\u683c\u5f0f\uff0c\u7b14\u6cd5\u591a\u7aef\uff0c\u8fbe\u5230\u4e86\u6781\u5176\u900d\u9065\u81ea\u5728\u3001\u53d8\u5e7b\u83ab\u6d4b\u3001\u6447\u66f3\u591a\u59ff\u7684\u795e\u5947\u5883\u754c\uff0c\u5145\u5206\u4f53\u73b0\u4e86\u6d6a\u6f2b\u4e3b\u4e49\u7684\u98ce\u683c\u3002\u674e\u767d\u7684\u7edd\u53e5\u81ea\u7136\u660e\u5feb\uff0c\u98d8\u9038\u6f47\u6d12\uff0c\u80fd\u4ee5\u7b80\u6d01\u660e\u5feb\u7684\u8bed\u8a00\u8868\u8fbe\u51fa\u65e0\u5c3d\u7684\u60c5\u
601d\u3002\u5728\u76db\u5510\u8bd7\u4eba\u4e2d\uff0c\u738b\u7ef4\u3001\u5b5f\u6d69\u7136\u957f\u4e8e\u4e94\u7edd\uff0c\u738b\u660c\u9f84\u7b49\u4e03\u7edd\u5199\u5f97\u5f88\u597d\uff0c\u517c\u957f\u4e94\u7edd\u4e0e\u4e03\u7edd\u800c\u4e14\u540c\u81fb\u6781\u5883\u7684\uff0c\u53ea\u6709\u674e\u767d\u4e00\u4eba\u3002\u603b\u7684\u6765\u8bf4\uff0c\u674e\u767d\u662f\u4e00\u4f4d\u5177\u6709\u4e16\u754c\u5f71\u54cd\u7684\u4f1f\u5927\u8bd7\u4eba\uff0c\u4ed6\u7684\u8bd7\u6b4c\u5728\u4e2d\u56fd\u6587\u5b66\u53f2\u4e0a\u5360\u6709\u91cd\u8981\u5730\u4f4d\uff0c\u5bf9\u540e\u4e16\u4ea7\u751f\u4e86\u6df1\u8fdc\u7684\u5f71\u54cd\u3002\u4ed6\u7684\u8bd7\u624d\u6a2a\u6ea2\uff0c\u88ab\u8a89\u4e3a\u201c\u8bd7\u4ed9\u201d\uff0c\u4ed6\u7684\u4f5c\u54c1\u5145\u6ee1\u4e86\u6d6a\u6f2b\u4e3b\u4e49\u7684\u8272\u5f69\uff0c\u5177\u6709\u6781\u9ad8\u7684\u827a\u672f\u4ef7\u503c\u548c\u5386\u53f2\u610f\u4e49\u3002"}]}
Juner/AI-generated-text-detection-pair
null
[ "transformers", "safetensors", "bert", "text-classification", "zh", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:40:05+00:00
null
null
{}
Allais/test
null
[ "region:us" ]
null
2024-04-24T11:40:10+00:00
null
null
# EffectXmed Creme Germany: Reviews, Ingredients, Benefits, Official Price, Where to Buy EffectXMed is a brand with some experience in skin-care products, offering sophisticated serums and creams. The products are aimed exclusively at stimulating the skin's renewal process, supporting its maintenance functions and counteracting skin aging. The focus is on particularly effective actives for cancer prevention, nutrients and fine plant oils, as well as amino acids. ## **[Click here to buy now from the official EffectXmed Creme website](https://capsules24x7.com/effectxmed-de)** ## EffectXmed Ingredients The anti-aging benefit is promoted by the way the most important natural active ingredients are combined. The product should therefore also be well accepted by all customers. The EffectXmed ingredients are listed as: Kigelia Africana extract, platinum peptides, grape-seed oil, caviar and mussel extract, shea butter, apricot-kernel oil, sunflower oil and olive oil. ## Gold and Gem Powder   Kigelia Africana extract: this is an essential component of numerous anti-aging creams, including EffectXmed. It hydrates the skin, which softens the formation of wrinkles and allows the skin to be revitalised. Grape-seed oil: this unique oil promotes wound healing and thereby produces a smooth, refined complexion. Platinum peptides: peptides ensure excellent formation of collagen types 1 and 3, which leads to firmer, smoother skin. Shea butter, apricot-kernel oil and sunflower oil: the actives contained in EffectXmed cream are an effective aid for dry skin that is also showing signs of aging. Gold and gem powder: this powder works against wrinkles overnight and leaves a genuine glow effect. ## **[Click here to buy now from the official EffectXmed Creme website](https://capsules24x7.com/effectxmed-de)**
{}
VKapseln475/EffectXmedCreme
null
[ "region:us" ]
null
2024-04-24T11:41:02+00:00
null
null
{"license": "mit"}
bollwerk/testing
null
[ "license:mit", "region:us" ]
null
2024-04-24T11:41:23+00:00
null
diffusers
{}
tuanio/ddpm-cifar10
null
[ "diffusers", "tensorboard", "safetensors", "diffusers:DDPMPipeline", "region:us" ]
null
2024-04-24T11:41:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-VIGGO-qlora This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
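The card does not include inference code; below is a minimal sketch of loading the adapter on top of the base model with PEFT. It is untested and assumes the adapter weights in this repo follow the standard PEFT layout; the prompt is illustrative.

```python
# Minimal sketch: attach the LoRA/QLoRA adapter to the Mistral-7B base model.
# Assumes `pip install peft transformers accelerate` and enough GPU memory.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "asprenger/Mistral-7B-v0.1-VIGGO-qlora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Describe the video game Tetris in one sentence."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```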
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Mistral-7B-v0.1-VIGGO-qlora", "results": []}]}
asprenger/Mistral-7B-v0.1-VIGGO-qlora
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-24T11:41:37+00:00
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rwr20 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rwr20 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rwr20 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
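Besides the RL Zoo CLI above, the checkpoint can presumably also be loaded programmatically; a sketch with `huggingface_sb3` (the zip filename follows the usual RL Zoo naming convention and is an assumption):

```python
# Minimal sketch: pull the checkpoint from the Hub and load it with SB3.
# Assumes `pip install huggingface_sb3 stable-baselines3[extra]` and that the
# zip inside the repo is named after the algo/env, as RL Zoo uploads usually are.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="rwr20/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# Atari models need the same wrappers used in training (AtariWrapper plus a
# 4-frame stack, per the hyperparameters above) when you evaluate them.
model = DQN.load(checkpoint)
```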
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "239.00 +/- 157.98", "name": "mean_reward", "verified": false}]}]}]}
rwr20/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-24T11:42:38+00:00
reinforcement-learning
ml-agents
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: mrbesher/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
mrbesher/poca-SoccerTwos
null
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
null
2024-04-24T11:42:51+00:00
text-generation
transformers
*There currently is an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use with `skip_special_tokens=true`. We will update once we find the reason for this behaviour. If you find a solution, please let us know!* # Llama 3 DiscoLM German 8b v0.1 Experimental <p align="center"><img src="/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp" width="400"></p> # Introduction **Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1). This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future. Please find an online demo [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates). # Prompt Format DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.

```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```

This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. # Example Code for Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Wer bist du?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

# Limitations & Biases This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution. 
# License This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see [LICENSE](LICENSE) for more information. # Acknowledgements Built with Meta Llama 3. DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani). Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com). Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/). [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # About DiscoResearch DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us! # Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
{"library_name": "transformers", "tags": []}
mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T11:43:22+00:00
text-generation
transformers
{}
enchatted/llama-3-8b-oscar-2301-el
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:45:00+00:00
null
transformers
# Skyrim Fine-Tune for Meta Llama 3 8B Instruct [<img src="https://staticdelivery.nexusmods.com/mods/1704/images/headers/98631_1692410778.jpg" width="1000"/>](https://art-from-the-machine.github.io/Mantella/) ## Introduction This fine-tune of Meta's Llama 3 8B Instruct model is intended to be used as a custom-tailored model for the [Mantella Skyrim mod](https://art-from-the-machine.github.io/Mantella/). ## Dataset An Alpaca-style dataset of 8,800+ player <-> NPC interactions has been used in this fine-tune. Details on this dataset can be found in the [Mantella LLM Fine-Tuning](https://github.com/art-from-the-machine/Mantella-LLM-Fine-Tuning) repo. Example: ```json { "instruction": "Generate dialogue in the style of Skyrim.", "input": "Where can I find fire salts?", "output": "A flame atronach's body might provide fire salt. They're dangerous creatures that can be summoned by wizards. Of course, it would be much easier to check with an alchemist. They occasionally have them for sale." } ``` ## Specifications - **Developed by:** art-from-the-machine - **License:** apache-2.0 - **Fine-tuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit - **Context length:** 8,192 ## Example Output The fine-tuned [Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/art-from-the-machine/Mantella-Skyrim-Llama-3-8B-GGUF/blob/main/Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf) model has been tested in Mantella v0.11.2's debug mode, with the model running locally in koboldcpp. The character tested is Hulda. A list of questions is tested against both the fine-tuned model and an equivalent [Q4_K_M Llama 3 Instruct](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf) base model for comparison. For both models, Mantella's `max_response_sentences` setting has been set to 4. <details> <summary><b>Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf</b></summary> ```json [ { "role": "assistant", "content": "Ah, welcome to the Bannered Mare! Take a seat by the fire and I'll have one of the girls bring you a drink. What can I get for you?" }, { "role": "user", "content": "What's life like in Whiterun?" }, { "role": "assistant", "content": "It's a good life. The people are friendly, the food is good, and the Jarl is fair. Of course, there are some who grumble about the Empire, but I think they're just jealous of the Jarl's fine wine cellar. What about you? What brings you to Whiterun?" }, { "role": "user", "content": "What do you think about the civil war in Skyrim?" }, { "role": "assistant", "content": "I think it's a shame. I've lived in Whiterun all my life, and I've seen the Empire bring peace and prosperity to our land. The Stormcloaks just want to tear everything down and start over. I don't think that's the way to build a better future. But what do I know? I'm just a simple innkeeper." }, { "role": "user", "content": "I'm looking for work. Got any leads?" }, { "role": "assistant", "content": "Well, the Companions are always looking for new recruits. They're a bit rough, but they're good people. And of course, there's the Jarl's court. If you're looking for work, you might want to try talking to his steward. He's always looking for someone to do odd jobs for him." }, { "role": "user", "content": "I'm looking for an Elder Scroll. Do you know anything about them?" }, { "role": "assistant", "content": "Elder Scrolls? Ha! You must be joking. Those are ancient relics, hidden away in the ruins of old. You won't find one in Whiterun, that's for sure. 
If you're looking for something like that, you should try talking to the Greybeards. They're the ones who live in the mountains and meditate all day." } ] ``` </details> <br> <details> <summary><b>Meta-Llama-3-8B-Instruct-Q4_K_M.gguf</b></summary> ```json [ { "role": "assistant", "content": "Ah, welcome to The Bannered Mare! Have a seat by the fire, and I'll have one of the barmaids bring you a pint of our finest ale. What brings you to Whiterun today?" }, { "role": "user", "content": "What's life like in Whiterun?" }, { "role": "assistant", "content": "Life in Whiterun is good, thank you for asking. The Jarl's been doing a fine job of keeping the city prosperous. Of course, there's always something going on, whether it's the Companions looking for new members or the Thieves Guild trying to... well, steal things. But overall, it's a good place to live." }, { "role": "user", "content": "What do you think about the civil war in Skyrim?" }, { "role": "assistant", "content": "The civil war? Well, I think it's a shame, to be honest. I've got friends on both sides of the conflict, and it's hard to see them fighting each other. But I suppose it's not my place to say who's right or wrong. I just hope it ends soon and we can all get back to living our lives in peace." }, { "role": "user", "content": "I'm looking for work. Got any leads?" }, { "role": "assistant", "content": "Work, you say? Well, the city's always looking for people to help with the harvest, and the Jarl's guards could use some extra hands. If you're looking for something more... adventurous, you might want to talk to the Companions. They're always looking for new members to join their ranks. Just be warned, though - it's not for the faint of heart." }, { "role": "user", "content": "I'm looking for an Elder Scroll. Do you know anything about them?" }, { "role": "assistant", "content": "Elder Scrolls? Ah, yes... I've heard rumors about those ancient artifacts. Some say they hold the secrets of the world, while others claim they're cursed. I don't know much about them, but I've heard that the Greybeards in the mountains might know more. They're a reclusive bunch, but if you're willing to brave the wilderness, you might be able to learn something from them." } ] ``` </details> ## How to use with Mantella See [here](https://art-from-the-machine.github.io/Mantella/pages/installation.html#local-models) for full instructions on how to run Mantella with a local LLM. ## Notes This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
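Outside of Mantella itself, the GGUF file can be loaded with any llama.cpp-based runtime. A minimal `llama-cpp-python` sketch follows; the package choice, system prompt and sampling settings are illustrative assumptions, not part of the original card:

```python
# Minimal local-inference sketch for the Q4_K_M file referenced above.
# Assumes `pip install llama-cpp-python` and that the file has been downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hulda, innkeeper of the Bannered Mare in Whiterun."},
        {"role": "user", "content": "What's life like in Whiterun?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```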
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
art-from-the-machine/Mantella-Skyrim-Llama-3-8B-GGUF
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:45:45+00:00
null
null
{"license": "mit"}
Bluebomber182/Judy-Hopps-StyleTTS2-Improved-Model
null
[ "license:mit", "region:us" ]
null
2024-04-24T11:46:12+00:00
null
null
{}
Anshulmango/LLAMA3_8B_Chat_finetuned_DS_v4
null
[ "safetensors", "region:us" ]
null
2024-04-24T11:47:07+00:00
text-generation
transformers
# [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2) ## Description [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2). ## Prompt Template This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
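As an addendum to the Prompt Template section above: for clients that expect a raw prompt string rather than a chat API, the ChatML layout can be rendered by hand. A small illustrative helper (the placeholder contents are assumptions):

```python
# Illustrative helper that renders the ChatML layout shown in the card,
# ending with an open assistant turn for the model to complete.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```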
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "text-generation"], "model_name": "Llama-3-8B-Instruct-DPO-v0.2-GGUF", "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2", "inference": false, "model_creator": "MaziyarPanahi", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2", "text-generation-inference", "region:us" ]
null
2024-04-24T11:47:43+00:00
text-generation
null
## Llamacpp imatrix Quantizations of Phi-3-mini-4k-instruct # This model is being deprecated in favour of the incoming conversion/quant with BPE tokenizers fixed. Will be here: https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-GGUF Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization. Original model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <s><|system|> {system_prompt}<|end|><|user|> {prompt}<|end|><|assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Phi-3-mini-4k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. | | [Phi-3-mini-4k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. | | [Phi-3-mini-4k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. | | [Phi-3-mini-4k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. | | [Phi-3-mini-4k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Phi-3-mini-4k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. | | [Phi-3-mini-4k-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ4_NL.gguf) | IQ4_NL | 2.17GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Phi-3-mini-4k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Phi-3-mini-4k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. | | [Phi-3-mini-4k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. | | [Phi-3-mini-4k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Phi-3-mini-4k-instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ3_S.gguf) | IQ3_S | 1.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Phi-3-mini-4k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. | | [Phi-3-mini-4k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Phi-3-mini-4k-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Phi-3-mini-4k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. | | [Phi-3-mini-4k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Phi-3-mini-4k-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. | | [Phi-3-mini-4k-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. | | [Phi-3-mini-4k-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ2_XXS.gguf) | IQ2_XXS | 1.04GB | Lower quality, uses SOTA techniques to be usable. | | [Phi-3-mini-4k-instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ1_M.gguf) | IQ1_M | 0.91GB | Extremely low quality, *not* recommended. | | [Phi-3-mini-4k-instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF/blob/main/Phi-3-mini-4k-instruct-IQ1_S.gguf) | IQ1_S | 0.84GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. 
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also an option for AMD cards, so if you have an AMD card double-check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
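As a worked example of the sizing guidance above (pick the largest file that fits in VRAM with 1-2 GB of headroom), here is an illustrative helper; the sizes are copied from this card's table (a subset), and the 1.5 GB headroom is a middle value of the suggested range:

```python
# Illustrative quant picker implementing the rule of thumb above.
SIZES_GB = {
    "Q8_0": 4.06, "Q6_K": 3.13, "Q5_K_M": 2.81, "Q4_K_M": 2.39,
    "IQ4_XS": 2.05, "IQ3_M": 1.85, "Q2_K": 1.41, "IQ2_M": 1.31, "IQ1_S": 0.84,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    # Keep only the files that fit in VRAM with the requested headroom,
    # then take the largest of them; fall back to the smallest file otherwise.
    fitting = {q: s for q, s in SIZES_GB.items() if s <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else min(SIZES_GB, key=SIZES_GB.get)

print(pick_quant(4.0))  # e.g. a 4 GB card -> a ~2.4 GB quant such as Q4_K_M
```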
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/Phi-3-mini-4k-instruct-old-GGUF
null
[ "gguf", "nlp", "code", "text-generation", "en", "license:mit", "region:us" ]
null
2024-04-24T11:47:45+00:00
audio-classification
transformers
{}
Selma323/my_awesome_mind_model
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:49:25+00:00
null
null
{"license": "openrail"}
SimplCup/TheSpiffingBritV2
null
[ "license:openrail", "region:us" ]
null
2024-04-24T11:49:57+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_2024-04-24-11-25-37 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the alpaca_zh and the alpaca_gpt4_zh datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
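For standalone deployment, the LoRA adapter can be merged into the base model; a minimal, untested sketch (assumes access to the gated `meta-llama/Meta-Llama-3-8B` weights and a standard PEFT adapter layout in this repo):

```python
# Minimal sketch: merge the LoRA weights into the base model and save the result.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base, "Sylvia2025/llama3-8B-alpaca_zh")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("llama3-8b-alpaca-zh-merged")
```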
{"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "train_2024-04-24-11-25-37", "results": []}]}
Sylvia2025/llama3-8B-alpaca_zh
null
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-24T11:51:10+00:00
null
null
{}
LAKSHM11-G/pegasus-arxiv-pegasus_article_summarization
null
[ "region:us" ]
null
2024-04-24T11:51:18+00:00
text-classification
transformers
---
language: en
license: cc-by-4.0
tags:
- text-classification
repo: https://huggingface.co/awashh/RoBERTa-NLI-Group71
---

# Model Card for j34330vk-q26752aa-NLI <!-- Provide a quick summary of what the model is/does. --> This is a Natural Language Inference (NLI) classification model that was trained to predict whether a hypothesis is entailed by a given premise. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model is based upon a RoBERTa model that was fine-tuned on 26.9K premise-hypothesis text pairs. - **Developed by:** Awab Alshami and Vansh Kharbanda - **Language(s):** English - **Model type:** Supervised - **Model architecture:** Transformers - **Finetuned from model [optional]:** roberta-base ### Model Resources <!-- Provide links where applicable. --> - **Repository:** https://huggingface.co/FacebookAI/roberta-base - **Paper or documentation:** https://arxiv.org/pdf/1907.11692.pdf ## Training Details ### Training Data <!-- This is a short stub of information on the training data that was used, and documentation related to data pre-processing or additional filtering (if applicable). --> 26.9k premise-hypothesis text pairs. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Training Hyperparameters <!-- This is a summary of the values of hyperparameters used in training the model. --> - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - num_epochs: 8 #### Speeds, Sizes, Times <!-- This section provides information about roughly how long it takes to train the model and the size of the resulting model. --> - overall training time: 1.2 hours - duration per training epoch: 9 minutes - model size: 600 MB ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data <!-- This should describe any evaluation data used (e.g., the development/validation set provided). --> A subset of the development set provided, amounting to 6.7K pairs. #### Metrics <!-- These are the evaluation metrics being used. --> - Precision: 0.882 - Recall: 0.879 - F1-score: 0.880 - Accuracy: 0.880 ### Results The model obtained a precision score of 88.2%, a recall score of 87.9%, an F1-score of 88% and an accuracy of 88%. ## Technical Specifications ### Hardware - RAM: at least 22.5 GB - Storage: at least 2 GB - GPU: A100 ### Software - Transformers 4.18.0 - Pytorch 1.11.0+cu113 ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any inputs (concatenation of two sequences) longer than 512 subwords will be truncated by the model.
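The card gives no inference snippet; below is a minimal sketch. The premise/hypothesis pair is illustrative, and the label order is an assumption: check the repo's `config.json` for the actual `id2label` mapping.

```python
# Minimal NLI inference sketch for this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "awashh/RoBERTa-NLI-Group71"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print(probs)  # probability per class; see config.json for the label names
```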
{}
awashh/RoBERTa-NLI-Group71
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:52:25+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
khyat/vicuna_rlhf_v3
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:53:36+00:00
text-generation
transformers
Quantizations of https://huggingface.co/indischepartij/MiniCPM-3B-OpenHermes-2.5-v2

# From original readme

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__MiniCPM-3B-OpenHermes-2.5-v2)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |51.91|
|AI2 Reasoning Challenge (25-Shot)|47.44|
|HellaSwag (10-Shot)              |72.00|
|MMLU (5-Shot)                    |53.06|
|TruthfulQA (0-shot)              |42.28|
|Winogrande (5-shot)              |65.43|
|GSM8k (5-shot)                   |31.24|
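One common way to run these GGUF quantizations is llama-cpp-python. A minimal sketch, assuming you have downloaded one of the quantized files — the exact filename below is a guess, so use whichever quant is actually present in the repo:

```python
from llama_cpp import Llama

# Filename is an assumption; substitute any GGUF file from this repository.
llm = Llama(model_path="MiniCPM-3B-OpenHermes-2.5-v2.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```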
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "MiniCPM-3B-OpenHermes-2.5-v2"], "inference": false, "pipeline_tag": "text-generation"}
duyntnet/MiniCPM-3B-OpenHermes-2.5-v2-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "MiniCPM-3B-OpenHermes-2.5-v2", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-24T11:55:48+00:00
text-classification
transformers
{}
arunchess/context_db
null
[ "transformers", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:56:06+00:00
question-answering
transformers
{}
tanishq1420/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "safetensors", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:56:13+00:00
feature-extraction
transformers
# CNTDAI-6B

## Model Description

CNTDAI-6B is an experimental model fine-tuned by the Community AI Model Group as a proof of concept (POC) to meet the company's internal needs. It is a Chinese/English LLM fine-tuned from the GLM Transformer model, using more diverse training data, more training steps, and a more reasonable training strategy, and it is optimized on datasets covering semantics, mathematics, reasoning, code, knowledge, and other aspects.

## Usage

```python
import os
import platform
import torch
from transformers import AutoTokenizer, AutoModel

# current_dir = os.path.dirname(os.path.abspath(__file__))
# model_path = os.path.join(current_dir, 'cntd', 'CNTDAI-6B')
model_path = "cntd/CNTDAI-6B"

print("CUDA available:", torch.cuda.is_available())   # is a GPU usable?
print("GPU count:", torch.cuda.device_count())        # number of GPUs
print("CUDA version (via torch):", torch.version.cuda)
print("Current GPU index:", torch.cuda.current_device())

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
# Multi-GPU support: replace the line above with the two lines below,
# setting num_gpus to your actual number of GPUs.
# from utils import load_model_on_gpus
# model = load_model_on_gpus(model_path, num_gpus=2)
model = model.eval()

os_name = platform.system()
clear_command = 'cls' if os_name == 'Windows' else 'clear'
stop_stream = False

WELCOME = "Welcome to the CNTDAI-6B model. Type to chat, 'clear' to reset the history, 'stop' to quit."


def build_prompt(history):
    prompt = WELCOME
    for query, response in history:
        prompt += f"\n\nUser: {query}"
        prompt += f"\n\nCNTDAI-6B: {response}"
    return prompt


def main():
    past_key_values, history = None, []
    global stop_stream
    print(WELCOME)
    while True:
        query = input("\nUser: ")
        if query.strip() == "stop":
            break
        if query.strip() == "clear":
            past_key_values, history = None, []
            os.system(clear_command)
            print(WELCOME)
            continue
        print("\nCNTDAI:", end="")
        current_length = 0
        for response, history, past_key_values in model.stream_chat(
            tokenizer, query, history=history,
            past_key_values=past_key_values,
            return_past_key_values=True,
        ):
            if stop_stream:
                stop_stream = False
                break
            else:
                print(response[current_length:], end="", flush=True)
                current_length = len(response)
        print("")


if __name__ == "__main__":
    main()
```
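For a quick non-interactive check, ChatGLM-style remote code usually also exposes a blocking `chat` method with the same signature as the `stream_chat` call used above. A minimal sketch under that assumption — fall back to `stream_chat` as shown in the README if this checkpoint does not provide `chat`:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cntd/CNTDAI-6B", trust_remote_code=True)
model = AutoModel.from_pretrained("cntd/CNTDAI-6B", trust_remote_code=True).half().cuda().eval()

# `chat` is assumed here by analogy with the GLM API; it returns the full reply at once.
response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=[])
print(response)
```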
{"license": "apache-2.0"}
cntd/CNTDAI-6B
null
[ "transformers", "pytorch", "chatglm", "feature-extraction", "custom_code", "license:apache-2.0", "region:us" ]
null
2024-04-24T11:56:22+00:00
text-generation
transformers
{"language": ["en"], "datasets": ["sohamslc5/curr1"], "metrics": ["accuracy"], "pipeline_tag": "text-generation", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
sohamslc5/new_llama_new
null
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:sohamslc5/curr1", "base_model:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:56:52+00:00
text-to-audio
transformers
{}
fastinom/fastinoTTS
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "endpoints_compatible", "region:us" ]
null
2024-04-24T11:59:01+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
CognitoLibera2/model_s9_7b_19
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T11:59:13+00:00
fill-mask
transformers
{}
mputs1234/RobBERT-legal
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:01:04+00:00
feature-extraction
transformers
{}
harshita23sh/colbert
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:01:38+00:00
text-generation
transformers
{}
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T12:01:51+00:00
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
farhadali/autotrain-wii4h-pd8vo
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:02:14+00:00
null
null
{"license": "openrail"}
Zavid/Letov
null
[ "license:openrail", "region:us" ]
null
2024-04-24T12:02:48+00:00
text-generation
transformers
# GALAXY-16B-v1.0

![image/png](https://huggingface.co/TeeZee/GALAXY-16B-v1.0/resolve/main/GALAXY-16B-v1.0.jpg)

## Technical notes

- 72 layers, DUS procedure: mistral (32) -> SOLAR (48) -> GALAXY (72)
- 16B parameters
- model created as an extension of the depth upscaling (DUS) procedure used for SOLAR by Upstage

## Results

- model can and will produce NSFW content
- waiting for eval results
{"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]}
TeeZee/GALAXY-16B-v1.0-bpw6.5-h8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "dataset:Intel/orca_dpo_pairs", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "dataset:Open-Orca/SlimOrca", "dataset:MinervaAI/Aesir-Preview", "dataset:allenai/ultrafeedback_binarized_cleaned", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T12:02:51+00:00
null
null
{}
HenryConjecture/Pagol
null
[ "region:us" ]
null
2024-04-24T12:02:55+00:00
text-generation
transformers
GroundingGPT-7B
{"license": "apache-2.0"}
zwli/GroundingGPT
null
[ "transformers", "pytorch", "LEGO", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:03:42+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q2_K.gguf) | Q2_K | 2.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.IQ3_XS.gguf) | IQ3_XS | 3.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q3_K_S.gguf) | Q3_K_S | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.IQ3_M.gguf) | IQ3_M | 3.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q3_K_L.gguf) | Q3_K_L | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.IQ4_XS.gguf) | IQ4_XS | 4.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q5_K_S.gguf) | Q5_K_S | 5.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q5_K_M.gguf) | Q5_K_M | 5.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v2.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
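A minimal sketch of fetching and loading one of the quants listed above with huggingface_hub and llama-cpp-python; the Q4_K_M filename is taken from the table, and the loader settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the recommended Q4_K_M quant from this repository.
path = hf_hub_download(
    "mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF",
    "Mistral-7B-v0.2-ko-Y24_v2.0.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)  # context size is illustrative
out = llm("Hello,", max_tokens=32)
print(out["choices"][0]["text"])
```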
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["Mistral"], "base_model": "AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0", "quantized_by": "mradermacher"}
mradermacher/Mistral-7B-v0.2-ko-Y24_v2.0-GGUF
null
[ "transformers", "gguf", "Mistral", "en", "base_model:AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:06:15+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="shinseungback/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
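Continuing from the snippet above, a greedy-rollout sketch; it assumes the pickled dict follows the Deep RL course convention and stores the Q-table under a `"qtable"` key — verify this against the actual file:

```python
import numpy as np
import gymnasium as gym  # the course code may use `gym` instead

env = gym.make(model["env_id"], is_slippery=False)  # matches the 4x4 no-slippery variant

state, _ = env.reset()
done = False
episode_return = 0.0
while not done:
    # Greedy action from the learned Q-table ("qtable" key assumed).
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```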
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
shinseungback/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-24T12:06:50+00:00
text-to-image
diffusers
# Fonglets Diathorn Pony XL

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/Fongletto/Fonglets_Diathorn_Pony_XL/tree/main) them in the Files & versions tab.
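A minimal sketch for applying this LoRA on top of the listed base model with 🤗 Diffusers; the prompt and step count are placeholders (the widget suggests Pony-style `score_*` tags), and the safetensors weight file is assumed to be auto-discovered by `load_lora_weights`:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stablediffusionapi/pony-diffusion-v6-xl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Fongletto/Fonglets_Diathorn_Pony_XL")

image = pipe(
    "score_9, score_8_up, score_7_up, score_6_up, <your subject here>",  # placeholder prompt
    negative_prompt="adult, mature, wide hips",  # taken from the widget examples
    num_inference_steps=25,
).images[0]
image.save("sample.png")
```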
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "not-for-all-audiences"], "widget": [{"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00050-354634611.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "output": {"url": "images/00045-1669523689.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00054-2002188968.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00064-3633331710.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00062-753878261.png"}}, {"text": "-", "output": {"url": "images/maddiz.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00072-4227448286.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, ", "parameters": {"negative_prompt": "adult, mature, wide hips,"}, "output": {"url": "images/00067-153093350.png"}}], "base_model": "stablediffusionapi/pony-diffusion-v6-xl"}
Fongletto/Fonglets_Diathorn_Pony_XL
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "not-for-all-audiences", "base_model:stablediffusionapi/pony-diffusion-v6-xl", "region:us" ]
null
2024-04-24T12:07:56+00:00
null
null
{"license": "openrail"}
Zavid/Chabotina
null
[ "license:openrail", "region:us" ]
null
2024-04-24T12:08:03+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
JustAFool/wav2vec2-vi-300-vivos
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:09:53+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
InayaKripa/gemma-2b-it-v2-toxic-Conversation
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:11:39+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# HSE_PRAVO_complexity_classifier_roberta

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200

### Training results

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
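To use the adapter, attach it to the base model it was trained from. A minimal sketch — the card does not record the label count or mapping, so `num_labels` below is an assumption, and it further assumes the adapter checkpoint carries the trained classification head:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large", num_labels=2  # num_labels is an assumption
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = PeftModel.from_pretrained(base, "marcus2000/HSE_PRAVO_complexity_classifier_roberta")

inputs = tokenizer("Sample legal text to classify.", return_tensors="pt")
print(model(**inputs).logits)
```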
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/xlm-roberta-large", "model-index": [{"name": "HSE_PRAVO_complexity_classifier_roberta", "results": []}]}
marcus2000/HSE_PRAVO_complexity_classifier_roberta
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "license:mit", "region:us" ]
null
2024-04-24T12:12:44+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="shinseungback/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.40 +/- 2.69", "name": "mean_reward", "verified": false}]}]}]}
shinseungback/q-Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-24T12:13:54+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
deadcode99/mistral-7b-lime-only-question-aware-instruct-before-v2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:15:42+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-go_emotions

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1403
- Accuracy: 0.9375

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.1971          | 0.9215   |
| 0.3462        | 2.0   | 500  | 0.1403          | 0.9375   |

### Framework versions

- Transformers 4.36.0
- Pytorch 2.2.1+cpu
- Datasets 2.19.0
- Tokenizers 0.15.0
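A minimal inference sketch with the Transformers pipeline; the model id is this repo, and the example input is illustrative:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Laddoo/roberta-base-go_emotions")
print(clf("I can't believe how wonderful today turned out!"))
```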
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "roberta-base-go_emotions", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9375, "name": "Accuracy"}]}]}]}
Laddoo/roberta-base-go_emotions
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:15:47+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistralv1_spectral_r4_7e-5_e5_bf16

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_spectral_r4_7e-5_e5_bf16", "results": []}]}
fangzhaoz/mistralv1_spectral_r4_7e-5_e5_bf16
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-24T12:15:54+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral_7b_finetune_test2_sharded

This model is a fine-tuned version of [filipealmeida/Mistral-7B-Instruct-v0.1-sharded](https://huggingface.co/filipealmeida/Mistral-7B-Instruct-v0.1-sharded) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "filipealmeida/Mistral-7B-Instruct-v0.1-sharded", "model-index": [{"name": "mistral_7b_finetune_test2_sharded", "results": []}]}
MStefan/mistral_7b_finetune_test2_sharded
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:filipealmeida/Mistral-7B-Instruct-v0.1-sharded", "license:apache-2.0", "region:us" ]
null
2024-04-24T12:16:09+00:00
null
null
{"license": "mit", "datasets": ["mnist"]}
vtemplier/LeNet_MNIST
null
[ "onnx", "dataset:mnist", "license:mit", "region:us" ]
null
2024-04-24T12:17:36+00:00
text-generation
transformers
# Uploaded model

- **Developed by:** tgey
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
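A minimal inference sketch using Unsloth's loader, matching how the model was trained; the Alpaca-style prompt below is an assumption, since the card does not document a prompt template:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tgey/neuredia_neuropsys_mistral_7B_4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Alpaca-style prompt is an assumption, not confirmed by the card.
prompt = "### Instruction:\nSay hello.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```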
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "4bit", "2epochs", "BNP_25"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
tgey/neuredia_neuropsys_mistral_7B_4bit
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "4bit", "2epochs", "BNP_25", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-24T12:17:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fangzhaoz/mistralv1_spectral_r4_7e-5_e5_bf16_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T12:20:42+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
krishnakalyan3/emo_cosine_model
null
[ "transformers", "pytorch", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:21:44+00:00
null
null
{"license": "unknown"}
hdv250202/yolov3_ocr_corner_card
null
[ "license:unknown", "region:us" ]
null
2024-04-24T12:22:34+00:00
text-generation
transformers
{}
waelChafei/new-llama2-interview
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T12:23:41+00:00
null
mlx
# mlx-community/OpenELM-270M-instruct This model was converted to MLX format from [`apple/OpenELM-270M-instruct`](https://huggingface.co/apple/OpenELM-270M-instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-270M-instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-270M-instruct") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
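For the instruct variants, wrapping the user message with the tokenizer's chat template (when the checkpoint defines one) usually yields better responses than a raw prompt string. A minimal sketch, assuming the mlx-lm tokenizer wrapper exposes the standard Hugging Face `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/OpenELM-270M-instruct")

# Build a templated prompt; fall back to the raw string if this
# checkpoint ships no chat template.
messages = [{"role": "user", "content": "hello"}]
try:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
except Exception:
    prompt = "hello"

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```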
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-270M-Instruct
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:24:28+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-cola This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.9030 - Matthews Correlation: 0.6083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4984 | 1.0 | 535 | 0.6242 | 0.5066 | | 0.3277 | 2.0 | 1070 | 0.5216 | 0.5757 | | 0.2297 | 3.0 | 1605 | 0.5808 | 0.5753 | | 0.1843 | 4.0 | 2140 | 0.7486 | 0.6083 | | 0.1261 | 5.0 | 2675 | 0.9030 | 0.6083 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.2.1+cu121 - Datasets 1.16.1 - Tokenizers 0.19.1
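Since the checkpoint targets CoLA (grammatical acceptability), inference is a one-liner with the standard `transformers` pipeline. A minimal sketch using the repo id from this card; note the label names returned depend on the checkpoint's config and may appear as `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

# CoLA is a binary grammatical-acceptability task, so the pipeline returns
# an acceptability label and a confidence score per sentence.
classifier = pipeline(
    "text-classification",
    model="rensendata/roberta-base-finetuned-cola",
)

print(classifier("The book was written by John."))  # expected: acceptable
print(classifier("The book was wrote by John."))    # expected: unacceptable
```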
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "roberta-base-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6082844689817133, "name": "Matthews Correlation"}]}]}]}
rensendata/roberta-base-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:24:38+00:00
null
mlx
# mlx-community/OpenELM-270M This model was converted to MLX format from [`apple/OpenELM-270M`](https://huggingface.co/apple/OpenELM-270M) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-270M) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-270M") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-270M
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:24:51+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA8 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9614 | 0.09 | 10 | 0.5664 | | 0.2254 | 0.18 | 20 | 0.1179 | | 0.1152 | 0.27 | 30 | 0.0943 | | 0.0996 | 0.36 | 40 | 0.0842 | | 0.0804 | 0.45 | 50 | 0.0773 | | 0.0871 | 0.54 | 60 | 0.0821 | | 0.0796 | 0.63 | 70 | 0.0734 | | 0.0733 | 0.73 | 80 | 0.0815 | | 0.0813 | 0.82 | 90 | 0.0755 | | 0.0826 | 0.91 | 100 | 0.0758 | | 0.0782 | 1.0 | 110 | 0.0724 | | 0.0647 | 1.09 | 120 | 0.0653 | | 0.0677 | 1.18 | 130 | 0.0887 | | 0.0668 | 1.27 | 140 | 0.0752 | | 0.0657 | 1.36 | 150 | 0.0685 | | 0.0675 | 1.45 | 160 | 0.0597 | | 0.0681 | 1.54 | 170 | 0.0788 | | 0.1141 | 1.63 | 180 | 0.0750 | | 0.0596 | 1.72 | 190 | 0.0417 | | 0.0448 | 1.81 | 200 | 0.0302 | | 0.0449 | 1.9 | 210 | 0.0771 | | 0.0497 | 1.99 | 220 | 0.0237 | | 0.0225 | 2.08 | 230 | 0.0266 | | 0.014 | 2.18 | 240 | 0.0178 | | 0.0092 | 2.27 | 250 | 0.0180 | | 0.0153 | 2.36 | 260 | 0.0177 | | 0.0092 | 2.45 | 270 | 0.0157 | | 0.0115 | 2.54 | 280 | 0.0145 | | 0.0089 | 2.63 | 290 | 0.0140 | | 0.0087 | 2.72 | 300 | 0.0141 | | 0.0125 | 2.81 | 310 | 0.0142 | | 0.0077 | 2.9 | 320 | 0.0143 | | 0.0088 | 2.99 | 330 | 0.0143 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
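The hyperparameter list above maps one-to-one onto `transformers.TrainingArguments`. A sketch reconstructing that configuration; the training script and dataset are not documented, so this mirrors only what the card states:

```python
from transformers import TrainingArguments

# Reconstruction of the listed configuration; the Adam betas/epsilon match
# the transformers defaults, so they need no explicit arguments.
args = TrainingArguments(
    output_dir="V0424HMA8",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
)
```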
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA8", "results": []}]}
Litzy619/V0424HMA8
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-24T12:25:01+00:00
null
null
{"license": "unknown"}
hdv250202/yolov3_ocr_card
null
[ "license:unknown", "region:us" ]
null
2024-04-24T12:25:05+00:00
null
mlx
# mlx-community/OpenELM-450M-instruct This model was converted to MLX format from [`apple/OpenELM-450M-instruct`](https://huggingface.co/apple/OpenELM-450M-instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-450M-instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-450M-instruct") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-450M-Instruct
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:25:16+00:00
null
mlx
# mlx-community/OpenELM-450M This model was converted to MLX format from [`apple/OpenELM-450M`](https://huggingface.co/apple/OpenELM-450M) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-450M) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-450M") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-450M
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:25:37+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_ablation_4iters_bs256_sample2_iter_3 This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
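Given the `trl`/`dpo` tags, this run was presumably driven by TRL's `DPOTrainer` via the alignment-handbook. A sketch under stated assumptions: the preference data below is a toy stand-in (the actual `updated`/`original` datasets are not public), `beta=0.1` is only the common TRL default, and in practice the job would be launched with `accelerate` across the 8 GPUs listed above:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy preference pairs standing in for the undisclosed training data.
pairs = Dataset.from_dict({
    "prompt":   ["What does DPO optimize?"],
    "chosen":   ["A preference objective over chosen/rejected pairs."],
    "rejected": ["Nothing in particular."],
})

# Listed hyperparameters: 8 devices x batch 8 x 4 accumulation = 256 total.
args = TrainingArguments(
    output_dir="0.001_ablation_4iters_bs256_sample2_iter_3",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

# ref_model=None lets TRL snapshot the starting policy as the reference.
trainer = DPOTrainer(model=model, ref_model=None, args=args, beta=0.1,
                     train_dataset=pairs, tokenizer=tokenizer)
trainer.train()
```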
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2", "model-index": [{"name": "0.001_ablation_4iters_bs256_sample2_iter_3", "results": []}]}
ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T12:25:38+00:00
null
mlx
# mlx-community/OpenELM-1_1B-instruct-4bit This model was converted to MLX format from [`apple/OpenELM-1_1B-instruct`](https://huggingface.co/apple/OpenELM-1_1B-instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B-instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-1_1B-instruct-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
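For reference, quants like this are produced with mlx-lm's conversion utility. A sketch, assuming the flag names of recent mlx-lm releases (check `python -m mlx_lm.convert --help` for your version):

```bash
pip install mlx-lm

# Convert the upstream checkpoint and quantize the weights to 4 bits.
python -m mlx_lm.convert \
    --hf-path apple/OpenELM-1_1B-instruct \
    -q --q-bits 4 \
    --mlx-path OpenELM-1_1B-instruct-4bit
```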
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-1_1B-Instruct-4bit
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:26:16+00:00
null
mlx
# mlx-community/OpenELM-1_1B-instruct-8bit This model was converted to MLX format from [`apple/OpenELM-1_1B-instruct`](https://huggingface.co/apple/OpenELM-1_1B-instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B-instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-1_1B-instruct-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-1_1B-Instruct-8bit
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:26:56+00:00
null
mlx
# mlx-community/OpenELM-1_1B-4bit This model was converted to MLX format from [`apple/OpenELM-1_1B`](https://huggingface.co/apple/OpenELM-1_1B) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-1_1B-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-1_1B-4bit
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:27:14+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
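The repo tags identify this as a Wav2Vec2-BERT checkpoint for automatic speech recognition (Karelian code-switching), so a minimal getting-started sketch is possible even though the card itself is unfilled; the audio path below is a placeholder, and 16 kHz mono input is the usual expectation for this architecture:

```python
from transformers import pipeline

# Minimal sketch based on the repo tags (wav2vec2-bert, ASR); the card
# does not document usage, so treat the details here as assumptions.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mihaj/w2v-bert-karelian-CodeSwitching",
)

# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```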
{"library_name": "transformers", "tags": []}
Mihaj/w2v-bert-karelian-CodeSwitching
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:27:36+00:00
null
mlx
# mlx-community/OpenELM-1_1B-8bit This model was converted to MLX format from [`apple/OpenELM-1_1B`](https://huggingface.co/apple/OpenELM-1_1B) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/apple/OpenELM-1_1B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/OpenELM-1_1B-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "other", "tags": ["mlx"], "license_name": "apple-sample-code-license", "license_link": "LICENSE"}
mlx-community/OpenELM-1_1B-8bit
null
[ "mlx", "safetensors", "openelm", "custom_code", "license:other", "region:us" ]
null
2024-04-24T12:27:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unsloth_checkpoints This model is a fine-tuned version of [unsloth/codellama-7b-bnb-4bit](https://huggingface.co/unsloth/codellama-7b-bnb-4bit) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
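The `peft`/`sft` tags indicate this repo holds an adapter trained on top of the 4-bit base, so loading is a two-step affair. A minimal sketch, assuming the repo contains a standard PEFT (LoRA-style) adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/codellama-7b-bnb-4bit",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "MakTek/pine_script_code_llama_last")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b-bnb-4bit")
```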
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/codellama-7b-bnb-4bit", "model-index": [{"name": "unsloth_checkpoints", "results": []}]}
MakTek/pine_script_code_llama_last
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/codellama-7b-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-04-24T12:29:07+00:00
null
null
{}
KLAIMED/ARCH
null
[ "region:us" ]
null
2024-04-24T12:30:31+00:00
token-classification
transformers
<p align="center"> <br> <img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="width: 45%;"> <be> # mDeBERTa-base for Multilingual Correct Explanation Extraction in the Medical Domain This model is a fine-tuned version of [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for a **novel extractive task** which consists of **identifying the explanation of the correct answer** written by medical doctors. The model has been fine-tuned using the multilingual [https://huggingface.co/datasets/HiTZ/casimedicos-squad](https://huggingface.co/datasets/HiTZ/casimedicos-squad) dataset. ## Performance F1 partial match scores (as defined in [SQuAD extractive QA task](https://huggingface.co/datasets/rajpurkar/squad_v2) are reported in the following table: <img src="https://raw.githubusercontent.com/hitz-zentroa/multilingual-abstrct/main/resources/multilingual-abstrct-results.png" style="width: 75%;"> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2 **Contact**: [Anar Yeginbergen](https://ixa.ehu.eus/node/13807?language=en) and [Rodrigo Agerri](https://ragerri.github.io/) HiTZ Center - Ixa, University of the Basque Country UPV/EHU
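Because the task is SQuAD-style extractive QA over a commented clinical case, the standard question-answering pipeline applies. A sketch; the query phrasing below is an illustrative assumption, since the card does not document the exact input format used at fine-tuning time:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="HiTZ/mdeberta-expl-extraction-multi",
)

# The context would be a full commented clinical case; truncated here.
context = (
    "Paradoxical pulse is a drop in blood pressure > 10 mmHg during "
    "inspiration... For all these reasons, the correct answer is 5."
)

# Illustrative query only; the fine-tuning input format is undocumented.
result = qa(question="Why is the correct answer correct?", context=context)
print(result["answer"], result["score"])
```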
{"language": ["en", "es", "fr", "it"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["HiTZ/casimedicos-squad"], "metrics": ["f1"], "pipeline_tag": "token-classification", "widget": [{"text": "Paradoxical pulse is a drop in blood pressure > 10 mmHg during inspiration; it represents an exaggeration of the physiological phenomenon consisting of inspiratory lowering of BP (normal up to 10 mmHg). In cardiac tamponade, inspiration, which causes an increase in blood flow to the right chambers, increasing their volume, secondarily causes a displacement of the interventricular septum to the left, so that the left heart lodges and expels less blood during systole and the pulse, therefore, decreases. In a normal heart this exaggerated displacement, caused by the pressure exerted by the tamponade on the RV free wall, does not occur. Sinus X represents the systolic collapse of the venous pulse, i.e., the pressure drop due to atrial relaxation (also partly due to a downward displacement of the RV base during systole). Sinus Y represents the diastolic collapse of the venous pulse, i.e., the pressure drop that occurs from the moment blood enters the tricuspid valve into the ventricle. In cardiac tamponade, the deep sinus X is characteristic. In constrictive pericarditis, the deep Y sinus. For all these reasons, the correct answer is 5."}]}
HiTZ/mdeberta-expl-extraction-multi
null
[ "transformers", "safetensors", "deberta-v2", "question-answering", "token-classification", "en", "es", "fr", "it", "dataset:HiTZ/casimedicos-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T12:30:40+00:00
null
null
{}
atishay23/whisper-large_v3_translate
null
[ "region:us" ]
null
2024-04-24T12:30:52+00:00
null
null
{"license": "llama3"}
bluepisode/meta
null
[ "license:llama3", "region:us" ]
null
2024-04-24T12:30:53+00:00
null
null
{}
Aysgl/test
null
[ "region:us" ]
null
2024-04-24T12:31:30+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
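The card is an unfilled template, but the adapter config recorded in this repo points at `meta-llama/Llama-2-7b-chat-hf`, so a hedged loading sketch is possible; note the base repo is gated and requires accepted access:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModel resolves the gated base model recorded in the adapter
# config (meta-llama/Llama-2-7b-chat-hf) and attaches this adapter to it.
model = AutoPeftModelForCausalLM.from_pretrained("Dewa/funny-llama")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```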
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
Dewa/funny-llama
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-24T12:31:37+00:00
null
null
{"license": "mit"}
mlho/zy120
null
[ "safetensors", "license:mit", "region:us" ]
null
2024-04-24T12:31:51+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
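For inference, the `conversational` tag suggests the checkpoint ships a chat template, so generation follows the standard chat-template flow. A minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```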
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1", "model-index": [{"name": "0.001_ablation_4iters_bs256_nodpo_sample2_iter_2", "results": []}]}
ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T12:32:05+00:00
null
null
**"It keeps getting better!"** "One of the top recent performers in the **Chaiverse Leaderboard**!" GGUF-IQ-Imatrix quants for [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B). > [!IMPORTANT] > **Updated!** > These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br> > Use **KoboldCpp version 1.64** or higher. > [!WARNING] > Compatible SillyTavern presets [here (recommended/simple)](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets)) or [here (Virt's)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br> > Use the latest version of KoboldCpp. **Use the provided presets.** <br> > This is all still highly experimental, let the authors know how it performs for you, feedback is more important than ever now. > [!NOTE] > For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes. **Original model information:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/v6AZmbk-Cb52KskTQTwzW.png) # Update: Vision/multimodal capabilities again! If you want to use vision functionality: * You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp). # To use the multimodal capabilities of this model and use **vision** you need to load the specified **mmproj** file, this can be found inside this model repo. https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
{"language": ["en"], "tags": ["roleplay", "llama3", "sillytavern"]}
Lewdiculous/Poppy_Porpoise-v0.7-L3-8B-GGUF-IQ-Imatrix
null
[ "gguf", "roleplay", "llama3", "sillytavern", "en", "region:us" ]
null
2024-04-24T12:32:05+00:00
null
null
{"license": "openrail"}
Zavid/Dimok
null
[ "license:openrail", "region:us" ]
null
2024-04-24T12:32:34+00:00