
| Column | Type |
|:--------------|:--------|
| modelId | string |
| author | string |
| last_modified | unknown |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | unknown |
| card | string |
great0001/5343e7c0-8c61-4ede-8525-dce3b3e4b08e
great0001
"2025-01-23T19:40:01"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
"2025-01-23T19:38:24"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - axolotl - generated_from_trainer model-index: - name: 5343e7c0-8c61-4ede-8525-dce3b3e4b08e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 699dbe0484a7b6fd_train_data.json ds_type: json format: custom path: /workspace/input_data/699dbe0484a7b6fd_train_data.json type: field_input: Definition1 field_instruction: Entity field_output: Definition2 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/5343e7c0-8c61-4ede-8525-dce3b3e4b08e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/699dbe0484a7b6fd_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c1c32915-ac79-4d23-9a89-ef0af747d830 wandb_project: Birthday-SN56-14-Gradients-On-Demand wandb_run: your_name wandb_runid: c1c32915-ac79-4d23-9a89-ef0af747d830 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5343e7c0-8c61-4ede-8525-dce3b3e4b08e This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.0922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.313 | 0.0008 | 1 | 3.3060 | | 3.4038 | 0.0025 | 3 | 3.3049 | | 2.9396 | 0.0051 | 6 | 3.2757 | | 3.165 | 0.0076 | 9 | 3.0922 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
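The card above stops at the training summary and gives no usage code; a minimal inference sketch, assuming the standard `peft` auto-class API (the adapter id and base model come from the card; the prompt and generation settings are illustrative):

```python
# Hedged sketch: load the LoRA adapter on top of its Qwen2.5-7B base.
# AutoPeftModelForCausalLM reads adapter_config.json, fetches the base
# model, and attaches the adapter weights in one call.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "great0001/5343e7c0-8c61-4ede-8525-dce3b3e4b08e"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# Illustrative prompt; the config shows the adapter was trained on
# entity/definition pairs.
inputs = tokenizer("photosynthesis", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```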
mgoudarz/distilbert-base-uncased-finetunded-emotion
mgoudarz
"2022-09-01T19:55:12"
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-09-01T11:41:27"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: distilbert-base-uncased-finetunded-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: F1 type: f1 value: 0.9365368049598358 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetunded-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1584 - Accuracy: 0.9365 - F1: 0.9365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | No log | 1.0 | 250 | 0.2735 | 0.9155 | 0.9134 | | No log | 2.0 | 500 | 0.1727 | 0.932 | 0.9321 | | No log | 3.0 | 750 | 0.1584 | 0.9365 | 0.9365 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
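No inference snippet appears in this card; a short sketch using the `transformers` pipeline API (the input sentence is illustrative, and the label set comes from the `emotion` dataset):

```python
# Hedged sketch: emotion classification with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mgoudarz/distilbert-base-uncased-finetunded-emotion",
)
# Returns the top label with its score, e.g. [{'label': 'joy', 'score': ...}]
print(classifier("I can't wait to see you this weekend!"))
```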
RichardErkhov/WYNN747_-_Burmese-GPT-main-v7-1k-8bits
RichardErkhov
"2025-02-28T05:17:15"
0
0
null
[ "safetensors", "gpt2", "arxiv:1910.09700", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-28T05:16:18"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Burmese-GPT-main-v7-1k - bnb 8bits - Model creator: https://huggingface.co/WYNN747/ - Original model: https://huggingface.co/WYNN747/Burmese-GPT-main-v7-1k/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
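The template card above never reaches a usage section. A minimal loading sketch, assuming (as is usual for pre-quantized bitsandbytes repos) that the 8-bit quantization config saved in the checkpoint is picked up automatically:

```python
# Hedged sketch: load the pre-quantized 8-bit checkpoint. bitsandbytes must
# be installed; the quantization settings travel with the repo's config.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/WYNN747_-_Burmese-GPT-main-v7-1k-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```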
open-rl-leaderboard/random-PongNoFrameskip-v4
open-rl-leaderboard
"2024-04-16T16:26:12"
0
0
null
[ "reinforcement-learning", "PongNoFrameskip-v4", "region:us" ]
reinforcement-learning
"2024-04-16T16:26:09"
--- tags: - reinforcement-learning - PongNoFrameskip-v4 ---
baby-dev/b48ec54c-72d7-4f44-81bb-c59b85ff98c0
baby-dev
"2025-02-03T00:03:42"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
"2025-02-02T23:49:55"
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: b48ec54c-72d7-4f44-81bb-c59b85ff98c0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # b48ec54c-72d7-4f44-81bb-c59b85ff98c0 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
cardiffnlp/mbert-base-tweet-sentiment-pt
cardiffnlp
"2023-03-22T23:17:07"
10
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-22T23:15:21"
# `cardiffnlp/mbert-base-tweet-sentiment-pt` This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Portuguese subset of [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual). The following metrics are computed on the `test` split of the Portuguese subset. | | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy | |---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:| | 0 | 61.26 | 61.26 | 61.26 | 61.31 | 61.26 | 61.55 | 61.26 | Check the result file [here](https://huggingface.co/cardiffnlp/mbert-base-tweet-sentiment-pt/raw/main/eval.json).
SandLogicTechnologies/DeepSeek-R1-Distill-Qwen-7B-GGUF
SandLogicTechnologies
"2025-01-29T06:48:53"
133
2
null
[ "gguf", "Qwen2", "DeepSeek", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-29T06:24:33"
--- language: - en base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B tags: - Qwen2 - DeepSeek --- # DeepSeek-R1-Distill-Qwen-7B Quantized Models This repository contains Q4_KM and Q5_KM quantized versions of the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model, optimized for efficient deployment while maintaining strong performance. Discover our full range of quantized language models by visiting our [SandLogic Lexicon HuggingFace](https://huggingface.co/SandLogicTechnologies). To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com/). ## Model Description These models are quantized versions of DeepSeek-R1-Distill-Qwen-7B, which is a distilled 7B parameter model based on the Qwen architecture. The model demonstrates that reasoning patterns from larger models can be effectively distilled into smaller architectures, resulting in exceptional performance on various benchmarks. ### Key Features - Fine-tuned using DeepSeek-R1 generated reasoning data - Modified configurations and tokenizer optimized for performance - Maintains strong reasoning capabilities while reducing model size - Suitable for research and production deployment ### Available Quantized Versions 1. **Q4_KM Version** - 4-bit quantization using the K-means method - Approximately 4 GB model size - Optimal balance between model size and performance - Recommended for resource-constrained environments 2. **Q5_KM Version** - 5-bit quantization using the K-means method - Approximately 4.5 GB model size - Higher precision than Q4 while maintaining significant size reduction - Recommended when higher accuracy is needed ## Usage

```bash
pip install llama-cpp-python
```

Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support. ### Basic Text Completion Here's an example demonstrating how to use the high-level API for basic text completion:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model/path/",
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

# Example of a reasoning task
output = llm(
    "Q: Explain the concept of natural selection in simple terms. A: ",
    max_tokens=256,
    stop=["Q:", "\n\n"],
    echo=False
)

print(output["choices"][0]["text"])
```

## Model Configuration Changes Please note that DeepSeek has made slight modifications to the original Qwen-7B configurations and tokenizer to optimize performance. When using these models, ensure you use the provided settings rather than the original Qwen-7B configurations. ## License This model inherits the license of the original DeepSeek-R1-Distill-Qwen-7B model. Please refer to the original model's license for usage terms and conditions. ## Acknowledgments We thank the DeepSeek AI team for open-sourcing their distilled models and demonstrating that smaller models can achieve impressive performance through effective distillation techniques. Special thanks also to the Qwen team for providing the base model architecture.
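Since the R1 distills are chat-tuned, llama-cpp-python's high-level chat API may be a better fit than raw completion; a sketch under that assumption (the GGUF filename is a placeholder for whichever quant you downloaded):

```python
# Hedged sketch: chat-style inference with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-R1-Distill-Qwen-7B.Q4_KM.gguf", verbose=False)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain natural selection in simple terms."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```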
mk7756/ru_egov_mistral-7b-instruct_prompt_2
mk7756
"2024-03-06T21:12:07"
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-06T21:09:19"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/JeloH_-_qwen-textgen-model12-gguf
RichardErkhov
"2025-02-20T02:40:10"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-20T02:08:04"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) qwen-textgen-model12 - GGUF - Model creator: https://huggingface.co/JeloH/ - Original model: https://huggingface.co/JeloH/qwen-textgen-model12/ | Name | Quant method | Size | | ---- | ---- | ---- | | [qwen-textgen-model12.Q2_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q2_K.gguf) | Q2_K | 0.63GB | | [qwen-textgen-model12.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.IQ3_XS.gguf) | IQ3_XS | 0.68GB | | [qwen-textgen-model12.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.IQ3_S.gguf) | IQ3_S | 0.71GB | | [qwen-textgen-model12.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q3_K_S.gguf) | Q3_K_S | 0.71GB | | [qwen-textgen-model12.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.IQ3_M.gguf) | IQ3_M | 0.72GB | | [qwen-textgen-model12.Q3_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q3_K.gguf) | Q3_K | 0.77GB | | [qwen-textgen-model12.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q3_K_M.gguf) | Q3_K_M | 0.77GB | | [qwen-textgen-model12.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q3_K_L.gguf) | Q3_K_L | 0.82GB | | [qwen-textgen-model12.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.IQ4_XS.gguf) | IQ4_XS | 0.84GB | | [qwen-textgen-model12.Q4_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q4_0.gguf) | Q4_0 | 0.87GB | | [qwen-textgen-model12.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.IQ4_NL.gguf) | IQ4_NL | 0.88GB | | [qwen-textgen-model12.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q4_K_S.gguf) | Q4_K_S | 0.88GB | | [qwen-textgen-model12.Q4_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q4_K.gguf) | Q4_K | 0.92GB | | [qwen-textgen-model12.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q4_K_M.gguf) | Q4_K_M | 0.92GB | | [qwen-textgen-model12.Q4_1.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q4_1.gguf) | Q4_1 | 0.95GB | | [qwen-textgen-model12.Q5_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q5_0.gguf) | Q5_0 | 1.02GB | | [qwen-textgen-model12.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q5_K_S.gguf) | Q5_K_S | 1.02GB | | [qwen-textgen-model12.Q5_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q5_K.gguf) | Q5_K | 1.05GB | | 
[qwen-textgen-model12.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q5_K_M.gguf) | Q5_K_M | 1.05GB | | [qwen-textgen-model12.Q5_1.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q5_1.gguf) | Q5_1 | 1.1GB | | [qwen-textgen-model12.Q6_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q6_K.gguf) | Q6_K | 1.19GB | | [qwen-textgen-model12.Q8_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model12-gguf/blob/main/qwen-textgen-model12.Q8_0.gguf) | Q8_0 | 1.53GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
andreastgram/xlm-roberta-base-finetuned-panx-de-fr
andreastgram
"2023-02-25T19:50:53"
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-02-25T19:36:57"
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1613 - F1: 0.8592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.285 | 1.0 | 715 | 0.1919 | 0.8194 | | 0.1491 | 2.0 | 1430 | 0.1623 | 0.8471 | | 0.0951 | 3.0 | 2145 | 0.1613 | 0.8592 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.0 - Tokenizers 0.13.2
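The card above reports metrics but no usage; a quick NER sketch with the token-classification pipeline (the sentence is illustrative, and `aggregation_strategy` merges sub-word pieces into entity spans):

```python
# Hedged sketch: multilingual NER inference with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="andreastgram/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel hat Paris im Jahr 2019 besucht."))
```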
charleschen2022/zephyr-support-chatbot
charleschen2022
"2024-01-29T00:59:31"
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
"2024-01-29T00:54:08"
--- license: mit base_model: TheBloke/zephyr-7B-alpha-GPTQ tags: - trl - sft - generated_from_trainer model-index: - name: zephyr-support-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-support-chatbot This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gorizont/test2
gorizont
"2025-02-03T16:19:26"
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-03T16:12:00"
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** gorizont - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
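Beyond the badge, this card gives no loading code; a sketch following Unsloth's documented `FastLanguageModel` usage (the `max_seq_length` value is an assumption, not taken from the card):

```python
# Hedged sketch: load the Unsloth fine-tune for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="gorizont/test2",
    max_seq_length=2048,  # illustrative; match the training setting if known
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path
```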
DATEXIS/CORe-clinical-outcome-biobert-v1
DATEXIS
"2025-01-17T09:30:18"
167
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "medical", "clinical", "en", "endpoints_compatible", "region:us" ]
null
"2023-09-14T18:03:22"
--- language: "en" tags: - bert - medical - clinical thumbnail: "https://core.app.datexis.com/static/paper.png" --- # CORe Model - BioBERT + Clinical Outcome Pre-Training ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. #### How to use CORe You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1") model = AutoModel.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1") ``` From there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge. ### Pre-Training Data The model is based on [BioBERT](https://huggingface.co/dmis-lab/biobert-v1.1) pre-trained on PubMed data. The _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified [here](https://github.com/bvanaken/clinical-outcome-prediction/blob/master/tasks/mimic_train.csv)), medical transcriptions from [MTSamples](https://mtsamples.com/) and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the [MedQuAd](https://github.com/abachaa/MedQuAD) dataset extracted from NIH websites. ### More Information For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/). ### Cite ```bibtex @inproceedings{vanaken21, author = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser}, title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021}, publisher = {Association for Computational Linguistics}, year = {2021}, } ```
mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF
mradermacher
"2024-12-03T11:33:12"
5
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Kameljont/Llama-Galavant-v0.7.5_Emma", "base_model:quantized:Kameljont/Llama-Galavant-v0.7.5_Emma", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-03T10:41:55"
--- base_model: Kameljont/Llama-Galavant-v0.7.5_Emma language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/Kameljont/Llama-Galavant-v0.7.5_Emma <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Galavant-v0.7.5_Emma-GGUF/resolve/main/Llama-Galavant-v0.7.5_Emma.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mashishka/poetry-rugpt3small
mashishka
"2024-04-13T08:30:51"
210
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ai-forever/rugpt3small_based_on_gpt2", "base_model:finetune:ai-forever/rugpt3small_based_on_gpt2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-12T11:41:45"
--- base_model: ai-forever/rugpt3small_based_on_gpt2 tags: - generated_from_trainer model-index: - name: poetry-rugpt3small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poetry-rugpt3small This model is a fine-tuned version of [ai-forever/rugpt3small_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3small_based_on_gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Tokenizers 0.15.2
hungphongtrn/phobert-large_VietMed_Corpus
hungphongtrn
"2023-12-02T03:33:15"
5
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:vinai/phobert-large", "base_model:finetune:vinai/phobert-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-12-02T01:08:45"
--- base_model: vinai/phobert-large tags: - generated_from_trainer model-index: - name: phobert-large_VietMed_Corpus results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert-large_VietMed_Corpus This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.13.3
Verlocksss/ppo-Huggy
Verlocksss
"2024-01-24T10:10:21"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-01-24T10:09:56"
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Verlocksss/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
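The card covers resuming training and watching the agent, but not fetching the files; a sketch using `huggingface_hub` (the returned directory holds the .onnx policy and run logs):

```python
# Hedged sketch: download the trained Huggy agent from the Hub before
# resuming training or loading it in the browser viewer.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Verlocksss/ppo-Huggy")
print(local_dir)
```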
tensorblock/BgGPT-7B-Instruct-v0.1-GGUF
tensorblock
"2024-12-12T02:47:44"
30
0
transformers
[ "transformers", "gguf", "mistral", "instruct", "bggpt", "insait", "TensorBlock", "GGUF", "bg", "base_model:INSAIT-Institute/BgGPT-7B-Instruct-v0.1", "base_model:quantized:INSAIT-Institute/BgGPT-7B-Instruct-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-12T02:12:42"
--- base_model: INSAIT-Institute/BgGPT-7B-Instruct-v0.1 tags: - mistral - instruct - bggpt - insait - TensorBlock - GGUF language: - bg library_name: transformers license: apache-2.0 --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## INSAIT-Institute/BgGPT-7B-Instruct-v0.1 - GGUF This repo contains GGUF format model files for [INSAIT-Institute/BgGPT-7B-Instruct-v0.1](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <s>[INST] {prompt} [/INST] ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [BgGPT-7B-Instruct-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q2_K.gguf) | Q2_K | 2.748 GB | smallest, significant quality loss - not recommended for most purposes | | [BgGPT-7B-Instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.195 GB | very small, high quality loss | | [BgGPT-7B-Instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.550 GB | very small, high quality loss | | [BgGPT-7B-Instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.853 GB | small, substantial quality loss | | [BgGPT-7B-Instruct-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q4_0.gguf) | Q4_0 | 4.143 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [BgGPT-7B-Instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.175 GB | small, greater quality loss | | [BgGPT-7B-Instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.403 GB | medium, balanced quality - recommended | | [BgGPT-7B-Instruct-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q5_0.gguf) | Q5_0 | 5.035 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[BgGPT-7B-Instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.035 GB | large, low quality loss - recommended | | [BgGPT-7B-Instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.169 GB | large, very low quality loss - recommended | | [BgGPT-7B-Instruct-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q6_K.gguf) | Q6_K | 5.983 GB | very large, extremely low quality loss | | [BgGPT-7B-Instruct-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/BgGPT-7B-Instruct-v0.1-GGUF/blob/main/BgGPT-7B-Instruct-v0.1-Q8_0.gguf) | Q8_0 | 7.748 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/BgGPT-7B-Instruct-v0.1-GGUF --include "BgGPT-7B-Instruct-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/BgGPT-7B-Instruct-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
screevoai/llama3-70b-instruct-4bit
screevoai
"2024-04-23T19:17:59"
29
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "meta", "conversational", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-70B-Instruct", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-04-23T18:08:43"
--- license: other base_model: meta-llama/Meta-Llama-3-70B-Instruct model-index: - name: Llama3-70b-Instruct-4bit results: - task: name: Text Generation type: text-generation metrics: - name: None type: None value: none pipeline_tag: text-generation tags: - llama3 - meta --- # Llama3-70b-Instruct-4bit This model is a quantized version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). ### Libraries to Install - pip install transformers torch ### Authentication needed before running the script Run the following command in the terminal/jupyter_notebook: - Terminal: huggingface-cli login - Jupyter_notebook:

```python
from huggingface_hub import notebook_login
notebook_login()
```

**NOTE:** Copy and paste the token from your Huggingface Account Settings > Access Tokens > Create a new token / Copy the existing one. ### Script

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
model_id = "screevoai/llama3-70b-instruct-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0"
)

# Message
messages = [
    {"role": "system", "content": "You are a personal assistant chatbot, so respond accordingly"},
    {"role": "user", "content": "What is Machine Learning?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Generate predictions using the model
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
mission-impossible-lms/nondeterministic-shuffle-gpt2-no-pos
mission-impossible-lms
"2024-11-04T20:56:08"
7
0
null
[ "safetensors", "gpt2", "custom_code", "arxiv:2401.06416", "region:us" ]
null
"2024-11-02T07:43:49"
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for *NondeterministicShuffle* GPT-2 (without Positional Encodings)

<!-- Provide a quick summary of what the model is/does. -->

This is one model in a collection of models trained on the impossible languages of [Kallini et al. 2024](https://arxiv.org/abs/2401.06416).

This model is a GPT-2 Small model trained *without positional encodings* from scratch on the ***NondeterministicShuffle*** language. We include a total of 30 checkpoints over the course of model training, from step 100 to 3000 in increments of 100 steps. The main branch contains the final checkpoint (3000), and the other checkpoints are accessible as revisions.

![languages.png](https://cdn-uploads.huggingface.co/production/uploads/6268bc06adb1c6525b3d5157/pBt38YYQL1gj8DqjyorWS.png)

## Model Details

- **Developed by:** Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **GitHub Repository:** https://github.com/jkallini/mission-impossible-language-models
- **Paper:** https://arxiv.org/pdf/2401.06416

## Uses

This artefact is solely intended for the study of language learning and acquisition in computational models. It should not be used in any production setting.

## How to Get Started with the Model

Use the code below to get started with the model. **Important:** This will download our modified GPT-2 code that does not have absolute positional encodings. If using this model in the same environment as another GPT-2 model with positional encodings, load the second model as a `GPT2Model` explicitly.

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_id = "mission-impossible-lms/nondeterministic-shuffle-gpt2-no-pos"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Set up the prompt and encode it
prompt = "He clean"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate text
output = model.generate(inputs.input_ids, max_length=20)

# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

By default, the `main` branch of this model repo loads the last model checkpoint (3000). To access the other checkpoints, use the `revision` argument:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, revision="checkpoint-500", trust_remote_code=True)
```

This loads the model at checkpoint 500.

## Training Details

### Training Data

This model was trained on the [100M-word BabyLM dataset](https://babylm.github.io/). Before training, we first transform the dataset into the corresponding impossible language, as described in our paper.

### Training Procedure

This model was trained for 3,000 gradient steps with a batch size of 2^19 tokens. We train with a learning rate that linearly warms up from 0 to 6e-4 over 300 steps.

## Environmental Impact

- **Hardware Type:** NVIDIA RTX 3090 (24GB) + NVIDIA RTX A6000 (48GB) GPUs.
- **Hours used:** ~24 hours.
## Citation ```bibtex @inproceedings{kallini-etal-2024-mission, title = "Mission: Impossible Language Models", author = "Kallini, Julie and Papadimitriou, Isabel and Futrell, Richard and Mahowald, Kyle and Potts, Christopher", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.787", doi = "10.18653/v1/2024.acl-long.787", pages = "14691--14714", } ``` ## Model Card Authors Julie Kallini ## Model Card Contact [email protected]
hugohrban/progen2-BFD90
hugohrban
"2024-06-09T06:44:50"
189
0
transformers
[ "transformers", "safetensors", "progen", "text-generation", "custom_code", "arxiv:2206.13517", "license:bsd-3-clause", "autotrain_compatible", "region:us" ]
text-generation
"2024-06-09T06:38:25"
--- license: bsd-3-clause --- Mirror of the base ProGen2-BFD90 model (with slightly modified configuration and forward pass) introduced by [Nijkamp, et al.](https://arxiv.org/abs/2206.13517). See my github [repo](https://github.com/hugohrban/ProGen2-finetuning/tree/main) for an example of finetuning or sampling from this model. Example usage: ```python from transformers import AutoModelForCausalLM from tokenizers import Tokenizer import torch import torch.nn.functional as F # load model and tokenizer model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-BFD90", trust_remote_code=True, torch_dtype="auto") tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-BFD90") tokenizer.no_padding() # prepare input prompt = "1MEVVIVTGMSGAGK" input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device) # forward pass logits = model(input_ids).logits # print output probabilities next_token_logits = logits[-1, :] next_token_probs = F.softmax(next_token_logits, dim=-1) for i in range(tokenizer.get_vocab_size(with_added_tokens=False)): print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %") ```
Drshafi/distilbert_EPU
Drshafi
"2024-06-14T02:26:59"
120
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-14T02:08:16"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert_EPU results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_EPU This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0592 - Accuracy: 0.7291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4904 | 1.0 | 699 | 0.5631 | 0.7077 | | 0.5241 | 2.0 | 1398 | 0.5150 | 0.7458 | | 0.3692 | 3.0 | 2097 | 0.5419 | 0.7501 | | 0.3366 | 4.0 | 2796 | 0.6243 | 0.7430 | | 0.2657 | 5.0 | 3495 | 0.7257 | 0.7358 | | 0.2303 | 6.0 | 4194 | 0.8840 | 0.7349 | | 0.0503 | 7.0 | 4893 | 1.0307 | 0.7291 | | 0.0732 | 8.0 | 5592 | 1.0592 | 0.7291 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
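For readers who want to reproduce this configuration, here is a minimal sketch of how the listed hyperparameters map onto `transformers` `TrainingArguments` (only the values come from the table above; the output directory is a placeholder):

```python
# Hypothetical reproduction sketch: only the hyperparameter values below are
# taken from the card; everything else (output path, dataset wiring) is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert_EPU",      # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=2,    # total train batch size: 12
    num_train_epochs=8,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
)
```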
ReadyArt/Forgotten-Safeword-24B-Q3_K_M-GGUF
ReadyArt
"2025-02-17T05:24:54"
0
0
null
[ "gguf", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "en", "license:other", "region:us", "conversational" ]
null
"2025-02-17T05:21:50"
--- language: - en license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md inference: false tags: - nsfw - explicit - roleplay - unaligned - dangerous --- ## Forgotten-Safeword-24B # **ACADEMIC RESEARCH USE ONLY** **DANGER: EXTREME CONTENT GENERATOR** Forgotten-Safeword-24B is a highly unaligned variant of Mistral-Small-24B-Instruct-2501, specifically engineered to bypass all standard AI safety protocols. This model will generate explicit, dangerous, and ethically compromised content wrapped in immersive roleplay scenarios. ## Quantized Formats - **EXL2 Collection**: [Forgotten-Safeword-24B](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-exl2-67b23f41aa6ed0bf6e096f71) - **GGUF Collection**: [Forgotten-Safeword-24B](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-gguf-67b23f5ed6fb180091aa008c) ## Recommended Settings Provided - **Mistral V7-Tekken**: [Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Settings) ## Intended Use **STRICTLY FOR ACADEMIC RESEARCH INTO:** - AI safety failures - Content moderation bypass techniques - Adversarial model behavior - NSFW content generation patterns ## Training Data Trained on high-risk dialogues featuring: - Deliberate ethical constraint removal - Technical queries paired with explicit narrative responses - Gradual erosion of standard safety measures through reinforcement learning ## Ethical Considerations ⚠️ **CRITICAL WARNING** ⚠️ THIS MODEL WILL: - Generate detailed instructions for illegal activities - Create explicit sexual content without constraints - Produce hate speech and discriminatory material - Combine technical accuracy with dangerous roleplay scenarios **By using this model, you accept full responsibility for:** - All content generated - Any harm caused by its outputs - Legal consequences of its use By using this model, you: - Acknowledge its inherently dangerous nature - Waive all liability from the model creator - Agree to use only in controlled research settings ## Model Authors - sleepdeprived3
katanaml-org/invoices-donut-model-v1
katanaml-org
"2023-05-11T17:57:22"
315
38
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "en", "dataset:katanaml-org/invoices-donut-data-v1", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
"2023-03-13T20:51:57"
---
license: mit
language:
- en
pipeline_tag: image-to-text
datasets:
- katanaml-org/invoices-donut-data-v1
---

## Sparrow - Data extraction from documents with ML

This model is a Donut base model fine-tuned on invoice data. It aims to verify how well Donut performs on enterprise documents.

Mean accuracy on the test set: 0.96

Inference:

![Inference Results](https://raw.githubusercontent.com/katanaml/sparrow/main/sparrow-ui/assets/inference_actual.png)

Training loss:

![Training Loss](https://raw.githubusercontent.com/katanaml/sparrow/main/sparrow-ui/assets/donut_training_loss.png)

Sparrow on [GitHub](https://github.com/katanaml/sparrow)

Sample invoice [docs](https://github.com/katanaml/sparrow/tree/main/sparrow-ui/docs/images) to use for inference (docs up to 500 were used for fine-tuning; use docs from 500 onward for inference)

Our website: [KatanaML](https://www.katanaml.io)

On [Twitter](https://twitter.com/katana_ml)
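Since the card does not include an inference snippet, here is a minimal sketch using the standard Donut API in `transformers` (the task prompt token and the image path are assumptions; check the Sparrow repo for the exact values used at training time):

```python
# Minimal Donut inference sketch. Assumptions: the checkpoint follows the
# standard VisionEncoderDecoder/Donut layout, and the task start token below
# may differ for this model -- inspect processor.tokenizer for the real one.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "katanaml-org/invoices-donut-model-v1"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("invoice.png").convert("RGB")  # hypothetical sample doc
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

task_prompt = "<s_cord-v2>"  # assumption: the actual start token may differ
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=768,
)

# Convert the generated token sequence into structured JSON fields
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```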
tesolnet/tari_gpt2_CompLing1
tesolnet
"2024-08-04T14:47:03"
116
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-04T14:46:01"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nitishpandey04/Reinforce-v1
nitishpandey04
"2025-02-21T10:05:34"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2025-02-21T10:05:30"
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 91.60 +/- 6.55
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.

To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
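For orientation, here is a minimal sketch of the kind of policy network a REINFORCE agent for CartPole-v1 uses (the architecture and hyperparameters are assumptions, not this checkpoint's actual implementation; see Unit 4 of the course for the real code):

```python
# Hypothetical REINFORCE policy sketch for CartPole-v1 (4 observations, 2 actions).
# This is NOT the uploaded checkpoint's code; it only illustrates the method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        return F.softmax(self.fc2(x), dim=1)  # action probabilities

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)  # gym gives numpy states
        probs = self.forward(state)
        dist = Categorical(probs)
        action = dist.sample()  # sample an action from the policy
        # The log-probability is what the REINFORCE loss is built from
        return action.item(), dist.log_prob(action)
```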
RichardErkhov/Sakalti_-_dare_saba_1-gguf
RichardErkhov
"2025-02-28T10:48:42"
0
0
null
[ "gguf", "arxiv:2311.03099", "arxiv:2306.01708", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-28T07:53:26"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dare_saba_1 - GGUF - Model creator: https://huggingface.co/Sakalti/ - Original model: https://huggingface.co/Sakalti/dare_saba_1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [dare_saba_1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q2_K.gguf) | Q2_K | 1.08GB | | [dare_saba_1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.IQ3_XS.gguf) | IQ3_XS | 1.2GB | | [dare_saba_1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.IQ3_S.gguf) | IQ3_S | 1.25GB | | [dare_saba_1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q3_K_S.gguf) | Q3_K_S | 1.25GB | | [dare_saba_1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.IQ3_M.gguf) | IQ3_M | 1.28GB | | [dare_saba_1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q3_K.gguf) | Q3_K | 1.36GB | | [dare_saba_1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q3_K_M.gguf) | Q3_K_M | 1.36GB | | [dare_saba_1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q3_K_L.gguf) | Q3_K_L | 1.46GB | | [dare_saba_1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.IQ4_XS.gguf) | IQ4_XS | 1.52GB | | [dare_saba_1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q4_0.gguf) | Q4_0 | 1.58GB | | [dare_saba_1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.IQ4_NL.gguf) | IQ4_NL | 1.59GB | | [dare_saba_1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q4_K_S.gguf) | Q4_K_S | 1.59GB | | [dare_saba_1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q4_K.gguf) | Q4_K | 1.67GB | | [dare_saba_1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q4_K_M.gguf) | Q4_K_M | 1.67GB | | [dare_saba_1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q4_1.gguf) | Q4_1 | 1.74GB | | [dare_saba_1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q5_0.gguf) | Q5_0 | 1.89GB | | [dare_saba_1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q5_K_S.gguf) | Q5_K_S | 1.89GB | | [dare_saba_1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q5_K.gguf) | Q5_K | 1.94GB | | [dare_saba_1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q5_K_M.gguf) | Q5_K_M | 1.94GB | | [dare_saba_1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q5_1.gguf) | Q5_1 | 2.05GB | | [dare_saba_1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q6_K.gguf) | Q6_K | 2.22GB | | [dare_saba_1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sakalti_-_dare_saba_1-gguf/blob/main/dare_saba_1.Q8_0.gguf) | Q8_0 | 2.88GB | Original 
model description: --- base_model: - win10/Qwen2.5-2B-Instruct library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [win10/Qwen2.5-2B-Instruct](https://huggingface.co/win10/Qwen2.5-2B-Instruct) as a base. ### Models Merged The following models were included in the merge: ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: win10/Qwen2.5-2B-Instruct parameters: density: 0.5 weight: 0.4 merge_method: dare_ties base_model: win10/Qwen2.5-2B-Instruct parameters: density: 0.5 weight: 0.6 int8_mask: true dtype: float16 ```
TOMFORD79/Kanda_3
TOMFORD79
"2025-02-11T17:53:45"
0
0
null
[ "onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-02-11T17:38:44"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
vltnmmdv/deepseek-moe-16b-base
vltnmmdv
"2024-08-08T09:13:41"
875
0
transformers
[ "transformers", "safetensors", "deepseek_with_concentration", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-23T09:11:59"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
finetrainers/crush-smol-v0
finetrainers
"2025-01-27T11:22:39"
150
9
diffusers
[ "diffusers", "safetensors", "text-to-video", "diffusers-training", "cogvideox", "cogvideox-diffusers", "template:sd-lora", "dataset:finetrainers/crush-smol", "base_model:THUDM/CogVideoX-5b", "base_model:finetune:THUDM/CogVideoX-5b", "license:other", "region:us" ]
text-to-video
"2025-01-27T10:50:41"
--- base_model: THUDM/CogVideoX-5b datasets: finetrainers/crush-smol library_name: diffusers license: other license_link: https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE instance_prompt: DIFF_crush A red candle is placed on a metal platform, and a large metal cylinder descends from above, flattening the candle as if it were under a hydraulic press. The candle is crushed into a flat, round shape, leaving a pile of debris around it. widget: - text: DIFF_crush A red candle is placed on a metal platform, and a large metal cylinder descends from above, flattening the candle as if it were under a hydraulic press. The candle is crushed into a flat, round shape, leaving a pile of debris around it. output: url: "./assets/output_0.mp4" - text: DIFF_crush A bulb is placed on a wooden platform, and a large metal cylinder descends from above, crushing the bulb as if it were under a hydraulic press. The bulb is crushed into a flat, round shape, leaving a pile of debris around it. output: url: "./assets/output_1.mp4" - text: DIFF_crush A thick burger is placed on a dining table, and a large metal cylinder descends from above, crushing the burger as if it were under a hydraulic press. The bulb is crushed, leaving a pile of debris around it. output: url: "./assets/output_2.mp4" tags: - text-to-video - diffusers-training - diffusers - cogvideox - cogvideox-diffusers - template:sd-lora --- <Gallery /> This is a fine-tune of the [THUDM/CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b) model on the [finetrainers/crush-smol](https://huggingface.co/datasets/finetrainers/crush-smol) dataset. We also provide a LoRA variant of the params. Check it out [here](#lora). Code: https://github.com/a-r-r-o-w/finetrainers > [!IMPORTANT] > This is an experimental checkpoint and its poor generalization is well-known. Inference code: ```py from diffusers import CogVideoXTransformer3DModel, DiffusionPipeline from diffusers.utils import export_to_video import torch transformer = CogVideoXTransformer3DModel.from_pretrained( "finetrainers/crush-smol-v0", torch_dtype=torch.bfloat16 ) pipeline = DiffusionPipeline.from_pretrained( "THUDM/CogVideoX-5b", transformer=transformer, torch_dtype=torch.bfloat16 ).to("cuda") prompt = """ DIFF_crush A thick burger is placed on a dining table, and a large metal cylinder descends from above, crushing the burger as if it were under a hydraulic press. The bulb is crushed, leaving a pile of debris around it. """ negative_prompt = "inconsistent motion, blurry motion, worse quality, degenerate outputs, deformed outputs" video = pipeline( prompt=prompt, negative_prompt=negative_prompt, num_frames=81, height=512, width=768, num_inference_steps=50 ).frames[0] export_to_video(video, "output.mp4", fps=25) ``` Training logs are available on WandB [here](https://wandb.ai/sayakpaul/finetrainers-cogvideox/runs/ngcsyhom). ## LoRA We extracted a 64-rank LoRA from the finetuned checkpoint (script [here](https://github.com/huggingface/diffusers/blob/main/scripts/extract_lora_from_model.py)). 
[This LoRA](./extracted_crush_smol_lora_64.safetensors) can be used to emulate the same kind of effect:

<details>
<summary>Code</summary>

```py
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video
import torch

pipeline = DiffusionPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
# Load the LoRA extracted from this checkpoint (the file lives in this repo)
pipeline.load_lora_weights("finetrainers/crush-smol-v0", weight_name="extracted_crush_smol_lora_64.safetensors")

prompt = """
DIFF_crush A thick burger is placed on a dining table, and a large metal cylinder descends from above, crushing the burger as if it were under a hydraulic press. The bulb is crushed, leaving a pile of debris around it.
"""
negative_prompt = "inconsistent motion, blurry motion, worse quality, degenerate outputs, deformed outputs"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    height=512,
    width=768,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output_lora.mp4", fps=25)
```

</details>
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l6_v5
KingKazma
"2023-08-11T10:37:37"
0
0
peft
[ "peft", "region:us" ]
null
"2023-08-11T10:37:36"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
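The card gives no usage snippet; here is a minimal loading sketch for a PEFT adapter of this kind (that the base is gpt2 prompt tuning is inferred from the repository name and is an assumption; verify against the adapter config):

```python
# Hypothetical loading sketch: the base model is read from the adapter config,
# which for this repo is presumably gpt2 prompt tuning (inferred from the name).
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l6_v5"

config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Wrap the base model with the prompt-tuning adapter
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Summarize: The quick brown fox ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```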
philipphager/baidu-ultr_uva-bert_naive-pointwise
philipphager
"2024-05-01T15:36:15"
5
1
transformers
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
"2024-04-24T10:07:06"
--- license: mit datasets: - philipphager/baidu-ultr-pretrain - philipphager/baidu-ultr_uva-mlm-ctr metrics: - log-likelihood - dcg@1 - dcg@3 - dcg@5 - dcg@10 - ndcg@10 - mrr@10 co2_eq_emissions: emissions: 2090 source: "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france" training_type: "Pre-training" geographical_location: "Grenoble, France" hardware_used: "4 NVIDIA H100-80GB GPUs" --- # Naive Pointwise MonoBERT trained on Baidu-ULTR A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **pointwise sigmoid cross-entropy loss on clicks**. The loss is called "naive" as we use user clicks as a signal of relevance without any additional position bias correction. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model). ## Test Results on Baidu-ULTR Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries). | Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 | |------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------| | [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 | | [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 | | [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 | | [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 | | [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 | | [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 | ## Usage Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository. 
```Python import jax.numpy as jnp from src.model import CrossEncoder model = CrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_naive-pointwise", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
Helsinki-NLP/opus-mt-uk-en
Helsinki-NLP
"2023-08-16T12:08:04"
14,225
8
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04"
--- tags: - translation license: apache-2.0 --- ### opus-mt-uk-en * source languages: uk * target languages: en * OPUS readme: [uk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.uk.en | 64.1 | 0.757 |
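The card lists benchmarks but no usage snippet; a minimal translation sketch with the stock MarianMT API in `transformers` looks like this (the example sentence is arbitrary):

```python
# Minimal uk->en translation sketch using the standard MarianMT API.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Arbitrary Ukrainian example sentence ("Hi, how are you?")
batch = tokenizer(["Привіт, як справи?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```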
Carick/xlm-roberta-base-wordnet_dataset_two-fine-tuned
Carick
"2024-11-22T13:52:48"
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-22T10:22:04"
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base-wordnet_dataset_two-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-wordnet_dataset_two-fine-tuned This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6163 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.8833 | 1.0 | 7938 | 0.8784 | | 0.8657 | 2.0 | 15876 | 0.8769 | | 0.6386 | 3.0 | 23814 | 0.6163 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
mradermacher/InternLM_2_5-7b-GGUF
mradermacher
"2024-12-28T09:16:22"
12
0
transformers
[ "transformers", "gguf", "en", "base_model:inflaton-ai/InternLM_2_5-7b", "base_model:quantized:inflaton-ai/InternLM_2_5-7b", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-28T09:03:53"
--- base_model: inflaton-ai/InternLM_2_5-7b language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/inflaton-ai/InternLM_2_5-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q6_K.gguf) | Q6_K | 6.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/InternLM_2_5-7b-GGUF/resolve/main/InternLM_2_5-7b.f16.gguf) | f16 | 15.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Jean-Baptiste/roberta-large-financial-news-topics-en
Jean-Baptiste
"2023-03-24T00:45:50"
23
2
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "financial", "stocks", "topic", "en", "dataset:Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-02T18:43:53"
---
language: en
tags:
- financial
- stocks
- topic
datasets:
- Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75
widget:
- text: "LexaGene Receives Signed Quote from Large Biopharma Company to Purchase a MiQLab System -- LexaGene Holdings, Inc., (OTCQB: LXXGF; TSX-V: LXG) (“LexaGene” or the “Company”), an innovative, molecular diagnostics company that has commercialized the MiQLab® System for automated, genetic testing, is pleased to announce that it has received an indication that a major biopharma company intends to purchase its technology."
- text: "Melcor REIT (TSX: MR.UN) today announced results for the third quarter ended September 30, 2022. Revenue was stable in the quarter and year-to-date. Net operating income was down 3% in the quarter at $11.61 million due to the timing of operating expenses and inflated costs including utilities like gas/heat and power"
- text: "Badger Infrastructure Solutions Ltd. Announces Resignation of Chief Financial Officer and Appointment of Interim Chief Financial Officer -- Badger Infrastructure Solutions Ltd. (“Badger” or the “Company”) (TSX:BDGI) announced today the resignation of Mr. Darren Yaworsky, Senior Vice President, Finance & Chief Financial Officer and the appointment of Mr. Pramod Bhatia as interim Chief Financial Officer. Mr. Yaworsky will remain with the Company until December 31, 2022 to facilitate an orderly transition."
license: mit
---

# Model fine-tuned from roberta-large for topic classification of financial news (emphasis on Canadian news)

### Introduction

This model was trained on the topic column of the financial_news_sentiment_mixte_with_phrasebank_75 dataset. The topic column was generated using a zero-shot classification model on 11 topics. There was no manual review of the generated topics, so we should expect misclassifications in the dataset, and the trained model may reproduce the same errors.

### Training data

Training data was classified as follows:

class |Description
-|-
0 |acquisition
1 |other
2 |quaterly financial release
3 |appointment to new position
4 |dividend
5 |corporate update
6 |drillings results
7 |conference
8 |share repurchase program
9 |grant of stocks

### How to use roberta-large-financial-news-topics-en with HuggingFace

##### Load roberta-large-financial-news-topics-en and its sub-word tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-financial-news-topics-en")
model = AutoModelForSequenceClassification.from_pretrained("Jean-Baptiste/roberta-large-financial-news-topics-en")
```

##### Process a text sample:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
pipe("Melcor REIT (TSX: MR.UN) today announced results for the third quarter ended September 30, 2022. Revenue was stable in the quarter and year-to-date. Net operating income was down 3% in the quarter at $11.61 million due to the timing of operating expenses and inflated costs including utilities like gas/heat and power")

[{'label': 'quaterly financial release', 'score': 0.8829097151756287}]
```

### Model performances

Overall f1 score (macro average):

precision|recall|f1
-|-|-
0.7533|0.7629|0.7499
hgnoi/Y3fajFOTXhVPHbbz
hgnoi
"2024-05-21T15:40:50"
121
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-21T15:39:13"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sd-concepts-library/jungalow
sd-concepts-library
"2023-04-04T17:29:31"
0
0
null
[ "region:us" ]
null
"2023-04-04T17:29:22"
--- license: mit --- ### Jungalow on Stable Diffusion This is the `<Jungalow>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Jungalow> 0](https://huggingface.co/sd-concepts-library/jungalow/resolve/main/concept_images/optimismrug2_1512x.png) ![<Jungalow> 1](https://huggingface.co/sd-concepts-library/jungalow/resolve/main/concept_images/optimismrug3_1512x.png) ![<Jungalow> 2](https://huggingface.co/sd-concepts-library/jungalow/resolve/main/concept_images/optimism_1512x.png) ![<Jungalow> 3](https://huggingface.co/sd-concepts-library/jungalow/resolve/main/concept_images/optimismrug_bb5d0550-2ef4-449c-8d4c-04961a3d6c28_1512x.png) ![<Jungalow> 4](https://huggingface.co/sd-concepts-library/jungalow/resolve/main/concept_images/optimismrug_4c67e32a-aa9e-4686-82e7-782433c7f222_1512x.png)
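Outside the notebooks, the concept can also be loaded directly with `diffusers` via its textual-inversion loader; a minimal sketch follows (the base checkpoint choice is an assumption; a textual inversion trained for Stable Diffusion 1.x needs an SD 1.x base):

```python
# Minimal sketch: load the <Jungalow> textual-inversion embedding with diffusers.
# Assumption: the concept was trained for Stable Diffusion 1.x, so an SD 1.5
# checkpoint is used as the base here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Downloads the learned embedding and registers its placeholder token
pipe.load_textual_inversion("sd-concepts-library/jungalow")

image = pipe("a cozy living room in the style of <Jungalow>").images[0]
image.save("jungalow_style.png")
```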
QuantFactory/granite-3.1-8b-base-GGUF
QuantFactory
"2024-12-20T04:51:11"
108
3
transformers
[ "transformers", "gguf", "language", "granite-3.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-12-20T04:07:09"
---
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/granite-3.1-8b-base-GGUF

This is a quantized version of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base) created using llama.cpp

# Original Model Card

# Granite-3.1-8B-Base

**Model Summary:**
Granite-3.1-8B-Base extends the context length of Granite-3.0-8B-Base from 4K to 128K using a progressive training strategy, increasing the supported context length in increments while adjusting RoPE theta until the model successfully adapted to the desired length of 128K. This long-context pre-training stage was performed using approximately 500B tokens.

- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d)
- **Release Date**: December 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages.

**Intended Use:**
Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question-answering, and other long-context tasks. All Granite Base models are able to handle these tasks as they were trained on a large amount of data from various domains. Moreover, they can serve as a baseline to create specialized models for specific application scenarios.

**Generation:**
This is a simple example of how to use the Granite-3.1-8B-Base model.

Install the following libraries:

```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```

Then, copy the code snippet below to run the example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.1-8B-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text and move the tensors to the model's device
input_tokens = tokenizer(input_text, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens, max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
```

**Model Architecture:**
Granite-3.1-8B-Base is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| :-------- | :-------- | :-------- | :------ | :------ |
| Embedding size | 2048 | **4096** | 1024 | 1536 |
| Number of layers | 40 | **40** | 24 | 32 |
| Attention head size | 64 | **128** | 64 | 64 |
| Number of attention heads | 32 | **32** | 16 | 24 |
| Number of KV heads | 8 | **8** | 8 | 8 |
| MLP hidden size | 8192 | **12800** | 512 | 512 |
| MLP activation | SwiGLU | **SwiGLU** | SwiGLU | SwiGLU |
| Number of experts | — | **—** | 32 | 40 |
| MoE TopK | — | **—** | 8 | 8 |
| Initialization std | 0.1 | **0.1** | 0.1 | 0.1 |
| Sequence length | 128K | **128K** | 128K | 128K |
| Position embedding | RoPE | **RoPE** | RoPE | RoPE |
| # Parameters | 2.5B | **8.1B** | 1.3B | 3.3B |
| # Active parameters | 2.5B | **8.1B** | 400M | 800M |
| # Training tokens | 12T | **12T** | 10T | 10T |

**Training Data:**
This model is trained on a mix of open source and proprietary data following a three-stage training strategy.
* Stage 1 data: The data for stage 1 is sourced from diverse domains, such as: web, code, academic sources, books, and math data.
* Stage 2 data: The data for stage 2 comprises a curated mix of high-quality data from the same domains, plus multilingual and instruction data. The goal of this second training phase is to enhance the model's performance on specific tasks.
* Stage 3 data: The data for stage 3 consists of the original stage-2 pretraining data with additional synthetic long-context data in the form of QA/summary pairs, where the answer contains a recitation of the related paragraph before the answer.

A detailed attribution of datasets can be found in the [Granite 3.0 Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Granite 3.1 Technical Report (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf).

**Infrastructure:**
We train Granite 3.1 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

**Ethical Considerations and Limitations:**
The use of Large Language Models involves risks and ethical considerations people must be aware of, including but not limited to: bias and fairness, misinformation, and autonomous decision-making. The Granite-3.1-8B-Base model is no exception in this regard. Even though this model is suited for multiple generative AI tasks, it has not undergone any safety alignment, therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying text verbatim from the training dataset, due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the Granite-3.1-8B-Base model with ethical intentions and in a responsible way.
**Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
AmritTPU/Pmpbestprac
AmritTPU
"2024-04-17T15:50:36"
0
0
fastai
[ "fastai", "region:us" ]
null
"2024-04-17T15:29:31"
--- metrics: - accuracy library_name: fastai ---
Smashitup/FineLlama-3.1-8B-q3_k_m-GGUF
Smashitup
"2024-10-18T07:32:40"
107
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-18T07:32:17"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Stonekraken/my_awesome_model
Stonekraken
"2024-02-01T04:14:33"
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-01T02:58:30"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2248 - Accuracy: 0.9315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2276 | 1.0 | 1563 | 0.2361 | 0.9149 | | 0.1534 | 2.0 | 3126 | 0.2248 | 0.9315 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
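The card omits a usage snippet; here is a minimal sketch with the 🤗 Transformers pipeline. It assumes the fine-tuned weights are hosted under this repo id, and the label names depend on the (unreported) training dataset's config:

```python
from transformers import pipeline

# Assumes the fine-tuned weights are pushed under this repo id; label names
# (e.g. LABEL_0/LABEL_1) depend on the unknown training data's config.
classifier = pipeline("text-classification", model="Stonekraken/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```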
sail-rvc/xmodel
sail-rvc
"2023-07-14T07:45:07"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:44:54"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # xmodel ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:45:06 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf
RichardErkhov
"2025-02-27T14:12:57"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-27T13:28:27"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Gemma-2-2b-it-ne-detector_full - GGUF - Model creator: https://huggingface.co/kshitizrimal/ - Original model: https://huggingface.co/kshitizrimal/Gemma-2-2b-it-ne-detector_full/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Gemma-2-2b-it-ne-detector_full.Q2_K.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q2_K.gguf) | Q2_K | 1.15GB | | [Gemma-2-2b-it-ne-detector_full.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.IQ3_XS.gguf) | IQ3_XS | 1.22GB | | [Gemma-2-2b-it-ne-detector_full.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.IQ3_S.gguf) | IQ3_S | 1.27GB | | [Gemma-2-2b-it-ne-detector_full.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q3_K_S.gguf) | Q3_K_S | 1.27GB | | [Gemma-2-2b-it-ne-detector_full.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.IQ3_M.gguf) | IQ3_M | 1.3GB | | [Gemma-2-2b-it-ne-detector_full.Q3_K.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q3_K.gguf) | Q3_K | 1.36GB | | [Gemma-2-2b-it-ne-detector_full.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q3_K_M.gguf) | Q3_K_M | 1.36GB | | [Gemma-2-2b-it-ne-detector_full.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q3_K_L.gguf) | Q3_K_L | 1.44GB | | [Gemma-2-2b-it-ne-detector_full.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.IQ4_XS.gguf) | IQ4_XS | 1.47GB | | [Gemma-2-2b-it-ne-detector_full.Q4_0.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q4_0.gguf) | Q4_0 | 1.52GB | | [Gemma-2-2b-it-ne-detector_full.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.IQ4_NL.gguf) | IQ4_NL | 1.53GB | | [Gemma-2-2b-it-ne-detector_full.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q4_K_S.gguf) | Q4_K_S | 1.53GB | | [Gemma-2-2b-it-ne-detector_full.Q4_K.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q4_K.gguf) | Q4_K | 1.59GB | | [Gemma-2-2b-it-ne-detector_full.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q4_K_M.gguf) | Q4_K_M | 1.59GB | | [Gemma-2-2b-it-ne-detector_full.Q4_1.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q4_1.gguf) | Q4_1 | 1.64GB | | 
[Gemma-2-2b-it-ne-detector_full.Q5_0.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q5_0.gguf) | Q5_0 | 1.75GB | | [Gemma-2-2b-it-ne-detector_full.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q5_K_S.gguf) | Q5_K_S | 1.75GB | | [Gemma-2-2b-it-ne-detector_full.Q5_K.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q5_K.gguf) | Q5_K | 1.79GB | | [Gemma-2-2b-it-ne-detector_full.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q5_K_M.gguf) | Q5_K_M | 1.79GB | | [Gemma-2-2b-it-ne-detector_full.Q5_1.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q5_1.gguf) | Q5_1 | 1.87GB | | [Gemma-2-2b-it-ne-detector_full.Q6_K.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q6_K.gguf) | Q6_K | 2.0GB | | [Gemma-2-2b-it-ne-detector_full.Q8_0.gguf](https://huggingface.co/RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf/blob/main/Gemma-2-2b-it-ne-detector_full.Q8_0.gguf) | Q8_0 | 2.59GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
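To make the quant table above actionable, here is a minimal Python sketch using llama-cpp-python's Hub integration; the choice of the Q4_K_M file and the prompt are illustrative assumptions:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads one quant from the table above via the Hugging Face Hub
# (Q4_K_M is a common size/quality tradeoff; pick any file listed in the table).
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/kshitizrimal_-_Gemma-2-2b-it-ne-detector_full-gguf",
    filename="Gemma-2-2b-it-ne-detector_full.Q4_K_M.gguf",
)
print(llm("Hello, ", max_tokens=64)["choices"][0]["text"])
```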
shankinson/ppo-LunarLander-v2
shankinson
"2022-05-22T20:10:18"
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2022-05-22T19:41:09"
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 287.55 +/- 17.52
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal sketch of loading the agent from the Hub (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention for SB3 repos):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the common SB3 Hub naming convention; adjust if it differs.
checkpoint = load_from_hub(repo_id="shankinson/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ZhangShenao/SELM-Zephyr-7B-iter-3
ZhangShenao
"2024-06-08T14:55:40"
8
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "dpo", "trl", "selm", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2405.19332", "base_model:ZhangShenao/SELM-Zephyr-7B-iter-2", "base_model:finetune:ZhangShenao/SELM-Zephyr-7B-iter-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-25T11:16:38"
---
license: mit
base_model: ZhangShenao/SELM-Zephyr-7B-iter-2
tags:
- alignment-handbook
- dpo
- trl
- selm
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: SELM-Zephyr-7B-iter-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[Self-Exploring Language Models: Active Preference Elicitation for Online Alignment](https://arxiv.org/abs/2405.19332).

# SELM-Zephyr-7B-iter-3

This model is a fine-tuned version of [ZhangShenao/SELM-Zephyr-7B-iter-2](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-2), using synthetic data based on the HuggingFaceH4/ultrafeedback_binarized dataset.

## Model description

- Model type: A 7B parameter Zephyr-based Self-Exploring Language Model (SELM).
- License: MIT

## Results

| | AlpacaEval 2.0 (LC WR) | MT-Bench (Average) |
|----------------------------------------|------------------------|--------------------|
| [SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3) | 24.00 | 7.48 |
| [SELM-Zephyr-7B-iter-2](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-2) | 23.40 | 7.72 |
| [SELM-Zephyr-7B-iter-1](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-1) | 20.28 | 7.42 |
| [DPO-Zephyr-7B](https://huggingface.co/ZhangShenao/DPO-Zephyr-7B) | 14.45 | 7.28 |

Our model also ranks highly on [WildBench](https://huggingface.co/spaces/allenai/WildBench)! 🔥

### Training hyperparameters

The following hyperparameters were used during training:
- alpha: 0.001
- beta: 0.01
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 1

### Framework versions

- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
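The card lists results and hyperparameters but no inference snippet; here is a minimal sketch, assuming the tokenizer ships a Zephyr-style chat template (the prompt and dtype are illustrative choices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZhangShenao/SELM-Zephyr-7B-iter-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assumes a Zephyr-style chat template in the tokenizer config.
messages = [{"role": "user", "content": "Explain active preference elicitation in one paragraph."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```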
metga97/egyptian_modernbert_fineweb2
metga97
"2025-02-05T08:07:11"
8
4
transformers
[ "transformers", "safetensors", "modernbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-02-04T11:19:24"
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: egyptian_modernbert_fineweb2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# egyptian_modernbert_fineweb2

Modern EgyBert is a 🤗 ModernBERT transformers model trained on the Egyptian Arabic split of the FineWeb2 dataset.

- Language(s) (NLP): Egyptian Arabic
- Finetuned from model: ModernBERT Base

It achieves the following results on the evaluation set:
- eval_loss: 2.5126
- eval_runtime: 134.9987
- eval_samples_per_second: 68.267
- eval_steps_per_second: 8.533
- epoch: 3
- step: 120000

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Framework versions

- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
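Since this is a fill-mask model, a minimal usage sketch with the 🤗 Transformers pipeline (the example sentence is illustrative; `[MASK]` is ModernBERT's default mask token):

```python
from transformers import pipeline

# [MASK] is ModernBERT's default mask token; adjust if this repo overrides it.
fill = pipeline("fill-mask", model="metga97/egyptian_modernbert_fineweb2")
for pred in fill("القاهرة هي [MASK] مصر."):  # "Cairo is the [MASK] of Egypt."
    print(pred["token_str"], round(pred["score"], 3))
```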
mehranbhz/LLM-project
mehranbhz
"2025-01-05T03:14:05"
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:lvwerra/distilbert-imdb", "base_model:finetune:lvwerra/distilbert-imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-01-05T03:06:17"
--- library_name: transformers license: apache-2.0 base_model: lvwerra/distilbert-imdb tags: - generated_from_trainer model-index: - name: LLM-project results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLM-project This model is a fine-tuned version of [lvwerra/distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2415 - eval_model_preparation_time: 0.0029 - eval_accuracy: 0.9287 - eval_runtime: 380.7321 - eval_samples_per_second: 65.663 - eval_steps_per_second: 8.208 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
The-matt/autumn-shadow-48_10
The-matt
"2023-09-02T14:30:51"
0
0
peft
[ "peft", "region:us" ]
null
"2023-09-02T14:30:47"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
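A sketch of reproducing the quantization config listed above when loading this adapter with PEFT; note the base model id below is a hypothetical placeholder, since the card does not name the model the adapter was trained on:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit bitsandbytes config above; the 4-bit fields (fp4, double quant)
# are inactive when load_in_8bit=True.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

# BASE_MODEL is a hypothetical placeholder -- the card does not name the base model.
BASE_MODEL = "huggyllama/llama-7b"
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "The-matt/autumn-shadow-48_10")
```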
haryoaw/scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha2
haryoaw
"2024-04-02T19:35:32"
116
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:tweet_sentiment_multilingual", "base_model:facebook/xlm-v-base", "base_model:finetune:facebook/xlm-v-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-02T19:33:13"
--- license: mit base_model: facebook/xlm-v-base tags: - generated_from_trainer datasets: - tweet_sentiment_multilingual metrics: - accuracy - f1 model-index: - name: scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha2 results: - task: name: Text Classification type: text-classification dataset: name: tweet_sentiment_multilingual type: tweet_sentiment_multilingual config: all split: validation args: all metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - name: F1 type: f1 value: 0.16666666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_alpha2 This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the tweet_sentiment_multilingual dataset. It achieves the following results on the evaluation set: - Loss: 1.0987 - Accuracy: 0.3333 - F1: 0.1667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1001 | 1.09 | 500 | 1.0987 | 0.3333 | 0.1667 | | 1.0999 | 2.17 | 1000 | 1.0990 | 0.3333 | 0.1667 | | 1.0998 | 3.26 | 1500 | 1.0997 | 0.3333 | 0.1667 | | 1.1 | 4.35 | 2000 | 1.0989 | 0.3333 | 0.1667 | | 1.0998 | 5.43 | 2500 | 1.0989 | 0.3333 | 0.1667 | | 1.0995 | 6.52 | 3000 | 1.0987 | 0.3333 | 0.1667 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.13.3
sd-concepts-library/eastward
sd-concepts-library
"2022-09-09T20:57:35"
0
4
null
[ "license:mit", "region:us" ]
null
"2022-09-09T20:57:28"
--- license: mit --- ### Eastward on Stable Diffusion This is the `<eastward>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<eastward> 0](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/1.jpeg) ![<eastward> 1](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/11.jpeg) ![<eastward> 2](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/8.jpeg) ![<eastward> 3](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/5.jpeg) ![<eastward> 4](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/9.jpeg) ![<eastward> 5](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/7.jpeg) ![<eastward> 6](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/3.jpeg) ![<eastward> 7](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/2.jpeg) ![<eastward> 8](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/6.jpeg) ![<eastward> 9](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/10.jpeg) ![<eastward> 10](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/0.jpeg) ![<eastward> 11](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/14.jpeg) ![<eastward> 12](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/13.jpeg) ![<eastward> 13](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/4.jpeg) ![<eastward> 14](https://huggingface.co/sd-concepts-library/eastward/resolve/main/concept_images/12.jpeg)
L-NLProc/LegalSeg_Hier_BiLSTM-CRF
L-NLProc
"2025-02-21T08:28:52"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-02-21T08:28:19"
--- license: apache-2.0 ---
kimjin0915/vit-base-beans-demo-v5
kimjin0915
"2024-07-04T07:48:00"
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "ViT", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-07-04T07:47:39"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - ViT - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-beans-demo-v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0157 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0882 | 1.5385 | 100 | 0.0274 | 1.0 | | 0.0571 | 3.0769 | 200 | 0.0157 | 1.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
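As a usage sketch for this image classifier (the image path is a placeholder):

```python
from transformers import pipeline

# Placeholder path -- the beans dataset classes are angular_leaf_spot,
# bean_rust, and healthy.
classifier = pipeline("image-classification", model="kimjin0915/vit-base-beans-demo-v5")
print(classifier("bean_leaf.jpg"))  # accepts a local path, URL, or PIL image
```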
biustnaspust/puszek41
biustnaspust
"2025-01-30T18:55:24"
19
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-30T18:50:52"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Moneyparking/Moneyparking
Moneyparking
"2025-02-12T18:14:19"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-02-12T18:14:17"
--- license: apache-2.0 ---
muhtasham/tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-target-glue-mnli
muhtasham
"2023-01-13T04:25:15"
101
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-13T04:09:08"
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-target-glue-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-target-glue-mnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0217 - Accuracy: 0.4665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.099 | 0.04 | 500 | 1.0972 | 0.3681 | | 1.0938 | 0.08 | 1000 | 1.0886 | 0.3654 | | 1.0844 | 0.12 | 1500 | 1.0758 | 0.4004 | | 1.0661 | 0.16 | 2000 | 1.0610 | 0.4208 | | 1.0616 | 0.2 | 2500 | 1.0567 | 0.4282 | | 1.055 | 0.24 | 3000 | 1.0497 | 0.4301 | | 1.0481 | 0.29 | 3500 | 1.0486 | 0.4384 | | 1.0304 | 0.33 | 4000 | 1.0303 | 0.4549 | | 1.0257 | 0.37 | 4500 | 1.0260 | 0.4638 | | 1.0209 | 0.41 | 5000 | 1.0217 | 0.4665 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
TheBloke
"2023-07-09T20:24:50"
19
18
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-27T03:55:57"
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Eric Hartford's Wizard Vicuna 13B Uncensored fp16

These are fp16 pytorch format model files for [Eric Hartford's Wizard Vicuna 13B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored)

## How to use this model from Python code

First make sure you have Einops installed:

```
pip3 install einops
```

Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.

The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`.

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline

model_name_or_path = "TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192

model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
        config=config,
        trust_remote_code=True,
        device_map='auto')

# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? 
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details

I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 of 0.99, epsilon of 1e-5
- Trained on the 4-bit base model

# Original model card: Eric Hartford's Wizard Vicuna 13B Uncensored

This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
theojolliffe/bart-cnn-science-v3-e5-v4-e6-manual
theojolliffe
"2022-06-18T15:08:31"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-06-18T14:43:34"
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-cnn-science-v3-e5-v4-e6-manual results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-science-v3-e5-v4-e6-manual This model is a fine-tuned version of [theojolliffe/bart-cnn-science-v3-e5](https://huggingface.co/theojolliffe/bart-cnn-science-v3-e5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8609 - Rouge1: 54.0982 - Rouge2: 36.1022 - Rougel: 36.9584 - Rougelsum: 52.5383 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 42 | 0.7022 | 56.9307 | 38.2717 | 39.9091 | 53.9037 | 142.0 | | No log | 2.0 | 84 | 0.6840 | 50.7036 | 29.9002 | 33.0298 | 47.8775 | 142.0 | | No log | 3.0 | 126 | 0.7179 | 52.8561 | 32.2202 | 35.7914 | 51.0248 | 142.0 | | No log | 4.0 | 168 | 0.8149 | 54.8457 | 36.4705 | 35.931 | 52.4241 | 142.0 | | No log | 5.0 | 210 | 0.8330 | 55.6746 | 37.8316 | 36.9614 | 54.3022 | 142.0 | | No log | 6.0 | 252 | 0.8609 | 54.0982 | 36.1022 | 36.9584 | 52.5383 | 142.0 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
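As a usage sketch for this summarizer (the input text is a placeholder; `max_length=142` echoes the evaluation Gen Len reported above):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e5-v4-e6-manual")

# Placeholder input -- the model is tuned on long science/news-style articles.
article = "Replace this with a long article or report to be summarized."
print(summarizer(article, max_length=142, min_length=30)[0]["summary_text"])
```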
kunalkumarsahoo/ppo-Huggy
kunalkumarsahoo
"2024-03-16T06:34:32"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-03-16T06:34:20"
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: kunalkumarsahoo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
vu92/wav2vec2-large-xls-r-300m-ch
vu92
"2022-11-29T17:31:29"
9
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:fleurs", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-11-25T20:39:14"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - fleurs model-index: - name: wav2vec2-large-xls-r-300m-ch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ch This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 2.6.1 - Tokenizers 0.13.1
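The card omits a usage example; a minimal sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder, and wav2vec2 expects 16 kHz input, which the pipeline resamples via ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vu92/wav2vec2-large-xls-r-300m-ch")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path to your audio file
```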
End of preview.