| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-07 06:28:25) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-07 06:27:36) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
matthieuzone/CAMEMBERT2
|
matthieuzone
| 2024-05-17T13:04:34Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-17T13:00:52Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of CAMEMBERT cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CAMEMBERT2
<Gallery />
## Model description
These are matthieuzone/CAMEMBERT2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was not enabled.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of CAMEMBERT cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/CAMEMBERT2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
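Until the official snippet lands, here is a minimal sketch of how SDXL LoRA adapters like this one are typically loaded with `diffusers`; the pipeline class, dtype, and device are assumptions, not part of the original card.
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, then attach the LoRA adapter from this repo.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("matthieuzone/CAMEMBERT2")

# Use the trigger phrase from the card to activate the adapter.
image = pipeline("a photo of CAMEMBERT cheese").images[0]
image.save("camembert.png")
```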
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Nishit28/bge-large-finetuned-400
|
Nishit28
| 2024-05-17T13:02:06Z | 17 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-09T17:32:05Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Nishit28/bge-large-finetuned-400
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Nishit28/bge-large-finetuned-400')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Nishit28/bge-large-finetuned-400)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 44 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the `fit()` method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
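For reference, a minimal sketch of how the parameters above plug into `SentenceTransformer.fit()`; the base checkpoint and the training pairs below are placeholders, not the actual training data.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder base model and (query, passage) pairs; the real run used 44
# batches of size 10 drawn with a SequentialSampler.
model = SentenceTransformer("BAAI/bge-large-en-v1.5")  # assumed base checkpoint
train_examples = [InputExample(texts=["example query", "a relevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=False, batch_size=10)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=8,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    evaluation_steps=50,
)
```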
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
matthieuzone/CABECOU2
|
matthieuzone
| 2024-05-17T13:00:38Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-17T12:56:57Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of CABECOU cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CABECOU2
<Gallery />
## Model description
These are matthieuzone/CABECOU2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was not enabled.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of CABECOU cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/CABECOU2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
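As with the other adapters in this series, a minimal `diffusers` sketch (the pipeline class, dtype, and device are assumptions):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, then attach this repo's LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("matthieuzone/CABECOU2")

# Use the trigger phrase from the card to activate the adapter.
image = pipeline("a photo of CABECOU cheese").images[0]
image.save("cabecou.png")
```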
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
selincildam/chatbot-v0.2
|
selincildam
| 2024-05-17T12:59:14Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T12:28:45Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
datasets:
- generator
model-index:
- name: chatbot-v0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scildam/huggingface/runs/2uwus52d)
# chatbot-v0.2
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
emilykang/medmcqa_question_generation-social_n_preventive_medicine_lora
|
emilykang
| 2024-05-17T12:58:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T12:24:46Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-social_n_preventive_medicine_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-social_n_preventive_medicine_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
franciscobdl/EstigiaxTinyLlama-Q4_K_M-GGUF
|
franciscobdl
| 2024-05-17T12:57:54Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T12:57:50Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# franciscobdl/EstigiaxTinyLlama-Q4_K_M-GGUF
This model was converted to GGUF format from [`franciscobdl/EstigiaxTinyLlama`](https://huggingface.co/franciscobdl/EstigiaxTinyLlama) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/franciscobdl/EstigiaxTinyLlama) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo franciscobdl/EstigiaxTinyLlama-Q4_K_M-GGUF --model estigiaxtinyllama.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo franciscobdl/EstigiaxTinyLlama-Q4_K_M-GGUF --model estigiaxtinyllama.Q4_K_M.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m estigiaxtinyllama.Q4_K_M.gguf -n 128
```
|
kamilakesbi/speaker-segmentation-fine-tuned-callhome-jpn
|
kamilakesbi
| 2024-05-17T12:56:52Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"pyannote",
"pyannote.audio",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"base_model:finetune:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
voice-activity-detection
| 2024-04-22T11:17:05Z |
---
license: mit
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
- pyannote
- pyannote.audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
base_model: pyannote/segmentation-3.0
datasets:
- diarizers-community/callhome
model-index:
- name: speaker-segmentation-fine-tuned-callhome-jpn
results: []
---
This is the model card of a pyannote model that has been pushed to the Hub. This model card has been automatically generated.
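A minimal loading sketch (an addition, not from the original card) using `pyannote.audio`; the audio file path is a placeholder:
```python
from pyannote.audio import Inference, Model

# Load the fine-tuned segmentation checkpoint from the Hub.
model = Model.from_pretrained(
    "kamilakesbi/speaker-segmentation-fine-tuned-callhome-jpn"
)

# Sliding-window inference over a local audio file (placeholder path).
inference = Inference(model, duration=10.0, step=5.0)
segmentation = inference("audio.wav")
```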
|
RichardErkhov/BioMistral_-_BioMistral-7B-DARE-8bits
|
RichardErkhov
| 2024-05-17T12:50:09Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"arxiv:2402.10373",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T12:43:38Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BioMistral-7B-DARE - bnb 8bits
- Model creator: https://huggingface.co/BioMistral/
- Original model: https://huggingface.co/BioMistral/BioMistral-7B-DARE/
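A minimal loading sketch for this repo (an addition, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed and a CUDA GPU is available); since the checkpoint already stores 8-bit weights, no quantization config is needed at load time:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/BioMistral_-_BioMistral-7B-DARE-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint is pre-quantized with bitsandbytes, so a plain
# from_pretrained with device_map picks up the 8-bit weights directly.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```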
Original model description:
---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
- dare
- medical
- biology
license: apache-2.0
datasets:
- pubmed
language:
- en
- fr
- nl
- es
- it
- pl
- ro
- de
pipeline_tag: text-generation
---
# BioMistral-7B-mistral7instruct-dare
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as the base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.1
# No parameters necessary for base model
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based, further pre-trained open-source models suited for the medical domain, pre-trained on textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All models were trained on the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC cluster.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
# AutoModelForCausalLM (rather than the bare AutoModel) attaches the language
# modeling head required for text generation.
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
```
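A short generation sketch (added for completeness; the prompt is only an example):
```python
inputs = tokenizer("What are the common symptoms of anemia?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```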
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot evaluation. DARE, TIES, and SLERP are model-merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
arXiv: [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
|
KenanKhan/rl_course_vizdoom_health_gathering_supreme
|
KenanKhan
| 2024-05-17T12:46:30Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-17T12:46:24Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.94 +/- 0.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r KenanKhan/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
RichardErkhov/BioMistral_-_BioMistral-7B-DARE-4bits
|
RichardErkhov
| 2024-05-17T12:42:32Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"arxiv:2402.10373",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T12:38:53Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BioMistral-7B-DARE - bnb 4bits
- Model creator: https://huggingface.co/BioMistral/
- Original model: https://huggingface.co/BioMistral/BioMistral-7B-DARE/
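As with the 8-bit variant, a minimal loading sketch for this repo (an addition, assuming `transformers`, `accelerate`, and `bitsandbytes` with a CUDA GPU); the 4-bit weights are already baked into the checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/BioMistral_-_BioMistral-7B-DARE-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Pre-quantized 4-bit weights load directly; no BitsAndBytesConfig required.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```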
Original model description:
---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
- dare
- medical
- biology
license: apache-2.0
datasets:
- pubmed
language:
- en
- fr
- nl
- es
- it
- pl
- ro
- de
pipeline_tag: text-generation
---
# BioMistral-7B-mistral7instruct-dare
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as the base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.1
# No parameters necessary for base model
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based, further pre-trained open-source models suited for the medical domain, pre-trained on textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All models were trained on the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC cluster.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
# AutoModelForCausalLM (rather than the bare AutoModel) attaches the language
# modeling head required for text generation.
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
```
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot evaluation. DARE, TIES, and SLERP are model-merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
arXiv: [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
|
RichardErkhov/jingyeom_-_KoSoLAR-10.7B-v0.2_1.3_dedup_p-8bits
|
RichardErkhov
| 2024-05-17T12:42:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T12:34:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
KoSoLAR-10.7B-v0.2_1.3_dedup_p - bnb 8bits
- Model creator: https://huggingface.co/jingyeom/
- Original model: https://huggingface.co/jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup_p/
Original model description:
---
license: apache-2.0
---
## Model
base_model: yanolja/KoSOLAR-10.7B-v0.2
## Dataset
* Collected publicly available data
* Applied the algorithm from *Deduplicating Training Data Makes Language Models Better*
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
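A short generation sketch following the snippet above (an addition; the prompt is only an example):
```python
inputs = tokenizer("대한민국의 수도는", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```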
## Benchmark
**[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
|
abc88767/22c93
|
abc88767
| 2024-05-17T12:40:40Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T04:19:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camilomj/LISADEBUT
|
camilomj
| 2024-05-17T12:35:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-16T23:18:29Z |
---
license: apache-2.0
---
|
abc88767/4sc93
|
abc88767
| 2024-05-17T12:33:57Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T04:13:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf
|
RichardErkhov
| 2024-05-17T12:33:50Z | 25 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T11:07:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralPipe-7B-slerp - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralPipe-7B-slerp/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralPipe-7B-slerp.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q2_K.gguf) | Q2_K | 2.53GB |
| [NeuralPipe-7B-slerp.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [NeuralPipe-7B-slerp.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [NeuralPipe-7B-slerp.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [NeuralPipe-7B-slerp.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [NeuralPipe-7B-slerp.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q3_K.gguf) | Q3_K | 3.28GB |
| [NeuralPipe-7B-slerp.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [NeuralPipe-7B-slerp.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [NeuralPipe-7B-slerp.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [NeuralPipe-7B-slerp.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q4_0.gguf) | Q4_0 | 3.83GB |
| [NeuralPipe-7B-slerp.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [NeuralPipe-7B-slerp.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [NeuralPipe-7B-slerp.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q4_K.gguf) | Q4_K | 4.07GB |
| [NeuralPipe-7B-slerp.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [NeuralPipe-7B-slerp.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q4_1.gguf) | Q4_1 | 4.24GB |
| [NeuralPipe-7B-slerp.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q5_0.gguf) | Q5_0 | 4.65GB |
| [NeuralPipe-7B-slerp.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [NeuralPipe-7B-slerp.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q5_K.gguf) | Q5_K | 4.78GB |
| [NeuralPipe-7B-slerp.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [NeuralPipe-7B-slerp.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q5_1.gguf) | Q5_1 | 5.07GB |
| [NeuralPipe-7B-slerp.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q6_K.gguf) | Q6_K | 5.53GB |
| [NeuralPipe-7B-slerp.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-gguf/blob/main/NeuralPipe-7B-slerp.Q8_0.gguf) | Q8_0 | 7.17GB |
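The card does not prescribe a runtime for these files; as one option, here is a minimal sketch with the `llama-cpp-python` bindings (an assumption, and the file path assumes you have downloaded the Q4_K_M variant from the table above):
```python
from llama_cpp import Llama

# Point model_path at a locally downloaded GGUF file.
llm = Llama(model_path="NeuralPipe-7B-slerp.Q4_K_M.gguf", n_ctx=2048)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```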
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: NeuralPipe-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
---
# NeuralPipe-7B-slerp
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ⚡ Quantized models
Thanks to TheBloke for the quantized models:
* **GGUF**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GPTQ
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
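For reference, a merge like this can be reproduced with mergekit's Python entry point. The sketch below follows the pattern in the mergekit README; the config path, output directory, and options are illustrative assumptions, not part of this card.
```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the slerp merge and write the merged weights to disk
run_merge(
    merge_config,
    "./NeuralPipe-7B-slerp",  # illustrative output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
# Equivalent CLI: mergekit-yaml config.yaml ./NeuralPipe-7B-slerp --copy-tokenizer
```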
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:
```
A large language model is an AI system that uses deep learning techniques to process and understand vast amounts of natural language data. It is designed to generate human-like text, perform complex language tasks, and understand the context, nuance, and meaning of textual data. These models are trained on large datasets, often including billions of words, to learn the patterns and relationships in language. As a result, they can generate coherent and contextually relevant text, answer questions, and perform a variety of other language-related tasks. Some well-known large language models include OpenAI's GPT-3, Google's BERT, and Facebook's RoBERTa.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralPipe-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.17|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot) |86.15|
|MMLU (5-Shot) |63.94|
|TruthfulQA (0-shot) |59.80|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |69.75|
|
RichardErkhov/jingyeom_-_KoSoLAR-10.7B-v0.2_1.3_dedup_p-4bits
|
RichardErkhov
| 2024-05-17T12:32:50Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T12:27:33Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
KoSoLAR-10.7B-v0.2_1.3_dedup_p - bnb 4bits
- Model creator: https://huggingface.co/jingyeom/
- Original model: https://huggingface.co/jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup_p/
Original model description:
---
license: apache-2.0
---
## Model
base_model: yanolja/KoSOLAR-10.7B-v0.2
## Dataset
* Collected publicly available data
* Deduplicated using the algorithm from *Deduplicating Training Data Makes Language Models Better*
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup"
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
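Since this repository holds the bitsandbytes 4-bit quantization, the quantization config ships with the checkpoint and the weights load directly. A minimal generation sketch (the prompt and decoding settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/jingyeom_-_KoSoLAR-10.7B-v0.2_1.3_dedup_p-4bits"
# The 4-bit quantization config is stored in the repo, so no extra
# BitsAndBytesConfig is needed; device_map places layers on the GPU.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "대한민국의 수도는"  # "The capital of South Korea is" (illustrative)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```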
## Benchmark
**[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
|
MoGP/g_x_Moji_stratify_reg
|
MoGP
| 2024-05-17T12:25:50Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T08:23:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jass07/mux_may17
|
Jass07
| 2024-05-17T12:25:25Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T12:20:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DucPhanBa/llama2-finetuned-qlora-new-version
|
DucPhanBa
| 2024-05-17T12:23:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T12:22:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bhoopendrakumar/passport200
|
bhoopendrakumar
| 2024-05-17T12:20:33Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-17T12:19:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mullerjo/Reinforce-v1
|
Mullerjo
| 2024-05-17T12:20:13Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-17T12:11:07Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 68.00 +/- 31.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
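For readers who want the algorithm itself rather than the course notebook, the sketch below is a minimal REINFORCE training loop for CartPole-v1. It is illustrative only; it assumes gymnasium and PyTorch and is not the exact script used to train this checkpoint.
```python
import gymnasium as gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        # Sample an action from the current stochastic policy
        dist = Categorical(logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns, normalized for variance reduction
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Policy-gradient step: maximize the sum of log-prob * return
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```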
|
TheMistoAI/MistoLine
|
TheMistoAI
| 2024-05-17T12:17:27Z | 7,042 | 427 |
diffusers
|
[
"diffusers",
"art",
"stable diffusion",
"ControlNet",
"SDXL",
"Diffusion-XL",
"text-to-image",
"arxiv:2302.05543",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-07T10:15:16Z |
---
license: openrail++
tags:
- art
- stable diffusion
- ControlNet
- SDXL
- Diffusion-XL
pipeline_tag: text-to-image
---
# MistoLine
## Control Every Line!

[GitHub Repo](https://github.com/TheMistoAI/MistoLine)
## NEWS: The Anyline preprocessor has been released!
[Anyline Repo](https://github.com/TheMistoAI/ComfyUI-Anyline)
**MistoLine: A Versatile and Robust SDXL-ControlNet Model for Adaptable Line Art Conditioning.**
MistoLine is an SDXL-ControlNet model that can adapt to any type of line art input, demonstrating high accuracy and excellent stability. It can generate high-quality images (with a short side greater than 1024px) based on user-provided line art of various types, including hand-drawn sketches, different ControlNet line preprocessors, and model-generated outlines. MistoLine eliminates the need to select different ControlNet models for different line preprocessors, as it exhibits strong generalization capabilities across diverse line art conditions.
We developed MistoLine by employing a novel line preprocessing algorithm, **[Anyline](https://github.com/TheMistoAI/ComfyUI-Anyline)**, and retraining the ControlNet model based on the Unet of stabilityai/stable-diffusion-xl-base-1.0, along with innovations in large-model training engineering. MistoLine shows superior performance across different types of line art inputs, surpassing existing ControlNet models in detail restoration, prompt alignment, and stability, particularly in more complex scenarios.
MistoLine maintains consistency with the ControlNet architecture released by @lllyasviel, as illustrated in the following schematic diagram:


*Reference: https://github.com/lllyasviel/ControlNet*
More information about ControlNet can be found in the following references:
https://github.com/lllyasviel/ControlNet
https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl
The model is compatible with most SDXL models, except for PlaygroundV2.5, CosXL, and possibly SDXL-Lightning. It can be used in conjunction with LCM and other ControlNet models.
The following uses of this model are not allowed:
* Violating laws and regulations
* Harming or exploiting minors
* Creating and spreading false information
* Infringing on others' privacy
* Defaming or harassing others
* Automated decision-making that harms others' legal rights
* Discrimination based on social behavior or personal characteristics
* Exploiting the vulnerabilities of specific groups to mislead their behavior
* Discrimination based on legally protected characteristics
* Providing medical advice and diagnostic results
* Improperly generating and using information for purposes such as law enforcement and immigration
If you use or distribute this model for commercial purposes, you must comply with the following conditions:
1. Clearly acknowledge the contribution of TheMisto.ai to this model in the documentation, website, or other prominent and visible locations of your product.
Example: "This product uses the MistoLine-SDXL-ControlNet developed by TheMisto.ai."
2. If your product includes about screens, readme files, or other similar display areas, you must include the above attribution information in those areas.
3. If your product does not have the aforementioned areas, you must include the attribution information in other reasonable locations within the product to ensure that end-users can notice it.
4. You must not imply in any way that TheMisto.ai endorses or promotes your product. The use of the attribution information is solely to indicate the origin of this model.
If you have any questions about how to provide attribution in specific cases, please contact [email protected].
The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk.
## Apply with Different Line Preprocessors

## Compare with Other ControlNets

## Application Examples
### Sketch Rendering
*The following case only utilized MistoLine as the controlnet:*

### Model Rendering
*The following case only utilized Anyline as the preprocessor and MistoLine as the controlnet.*

## ComfyUI Recommended Parameters
```
sampler steps:30
CFG:7.0
sampler_name:dpmpp_2m_sde
scheduler:karras
denoise:0.93
controlnet_strength:1.0
start_percent:0.0
end_percent:0.9
```
## Diffusers pipeline
Make sure to first install the libraries:
```
pip install accelerate transformers safetensors opencv-python diffusers
```
And then we're ready to go:
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = 'low quality, bad quality, sketches'
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
controlnet_conditioning_scale = 0.5
controlnet = ControlNetModel.from_pretrained(
"TheMistoAI/MistoLine",
torch_dtype=torch.float16,
variant="fp16",
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()
# Build a Canny edge map from the input image to serve as the line-art condition
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)
images = pipe(
prompt, negative_prompt=negative_prompt, image=image, controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save("hug_lab.png")
```
## Checkpoints
* mistoLine_rank256.safetensors: general-usage version, for ComfyUI and AUTOMATIC1111-WebUI.
* mistoLine_fp16.safetensors: FP16 weights, for ComfyUI and AUTOMATIC1111-WebUI.
## Note: mistoLine_rank256.safetensors performs better than mistoLine_fp16.safetensors!
## ComfyUI Usage

## Convenient download mirror for mainland China:
Link: https://pan.baidu.com/s/1DbZWmGJ40Uzr3Iz9RNBG_w?pwd=8mzs
Extraction code: 8mzs
## Citation
```
@misc{zhang2023adding,
  title={Adding Conditional Control to Text-to-Image Diffusion Models},
  author={Lvmin Zhang and Anyi Rao and Maneesh Agrawala},
  year={2023},
  eprint={2302.05543},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
|
FO-UA/adapt-llm-Misc-Fr-40Xr256Xa512
|
FO-UA
| 2024-05-17T12:16:22Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:AdaptLLM/finance-chat",
"base_model:adapter:AdaptLLM/finance-chat",
"region:us"
] | null | 2024-05-17T12:15:39Z |
---
library_name: peft
base_model: AdaptLLM/finance-chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.0
|
abhipsapanda/imagecaptioning
|
abhipsapanda
| 2024-05-17T12:16:16Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-17T10:54:20Z |
---
license: apache-2.0
---
|
moetezsa/mistral_charttotext_v2
|
moetezsa
| 2024-05-17T12:15:27Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T11:02:57Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: mistral_charttotext_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_charttotext_v2
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
Angelectronic/mistral-inst_full
|
Angelectronic
| 2024-05-17T12:13:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T05:04:58Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
34.45 BLEU score, 130k pairs, 3 epochs
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
caesium94/models-colorist-v1-2e-4
|
caesium94
| 2024-05-17T12:06:55Z | 210 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T12:05:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MUNKOO/PANDA
|
MUNKOO
| 2024-05-17T12:01:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T12:01:59Z |
---
license: apache-2.0
---
|
Ramikan-BR/tinyllama-coder-py-4bit_LORA
|
Ramikan-BR
| 2024-05-17T12:01:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T12:00:40Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlekseiPravdin/Seamaiiza-7B-v2-128k-gguf
|
AlekseiPravdin
| 2024-05-17T11:55:04Z | 105 | 1 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AlekseiPravdin/KSI-RP-NSK-128k-7B",
"AlekseiPravdin/Seamaiiza-7B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-16T11:10:01Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- AlekseiPravdin/KSI-RP-NSK-128k-7B
- AlekseiPravdin/Seamaiiza-7B-v1
---
# Seamaiiza-7B-v2-128k ⭐️⭐️⭐️❤️🔥 (Story Streaming)
Seamaiiza-7B-v2-128k is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1)
* [AlekseiPravdin/KSI-RP-NSK-128k-7B](https://huggingface.co/AlekseiPravdin/KSI-RP-NSK-128k-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AlekseiPravdin/Seamaiiza-7B-v1
layer_range: [0, 32]
- model: AlekseiPravdin/KSI-RP-NSK-128k-7B
layer_range: [0, 32]
merge_method: slerp
base_model: AlekseiPravdin/KSI-RP-NSK-128k-7B
parameters:
t:
- filter: self_attn
value: [0, 0.57, 0.33, 0.73, 1]
- filter: mlp
value: [1, 0.53, 0.77, 0.37, 0]
- value: 0.53
dtype: bfloat16
```
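Since this repository ships GGUF quantizations of the merge, here is a minimal local-inference sketch with llama-cpp-python; the quant file name, context size, and sampling settings are illustrative assumptions.
```python
from llama_cpp import Llama

# Load a locally downloaded quant file; n_ctx can be raised toward the
# 128k context the merge advertises, memory permitting.
llm = Llama(model_path="seamaiiza-7b-v2-128k.Q4_K_M.gguf", n_ctx=8192)  # illustrative file name

output = llm(
    "Write the opening paragraph of a slow-burn mystery story.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```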
|
damgomz/ft_bs64_lr6_base_x2
|
damgomz
| 2024-05-17T11:52:18Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-16T15:14:14Z |
---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T13:52:15'
project_name: ft_bs64_lr6_base_x2_emissions_tracker
run_id: 064ee03d-8888-4a8d-b7cc-2fabb82586f2
duration: 14284.394858121872
emissions: 0.0093445332329461
emissions_rate: 6.54177746118027e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 0.1686347934653361
gpu_energy: 0
ram_energy: 0.0297587275415659
energy_consumed: 0.1983935210069022
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 3
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 14284.394858121872 |
| Emissions (Co2eq in kg) | 0.0093445332329461 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 0.1686347934653361 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0297587275415659 |
| Consumed energy (kWh) | 0.1983935210069022 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 3 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.027497460101884603 |
| Emissions (Co2eq in kg) | 0.005594721319431066 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs64_lr6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.509996 | 0.432674 | 0.800442 | 0.815951 |
| 1 | 0.378199 | 0.369624 | 0.834315 | 0.825153 |
| 2 | 0.330051 | 0.377433 | 0.835788 | 0.924847 |
| 3 | 0.294745 | 0.346708 | 0.849779 | 0.878834 |
| 4 | 0.271853 | 0.396932 | 0.829897 | 0.757669 |
| 5 | 0.212074 | 0.384297 | 0.846097 | 0.848160 |
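The card declares a fill-mask pipeline tag, so the checkpoint should load as below; note, however, that the accuracy/recall table above suggests a classification fine-tune, so treat this as an unverified sketch.
```python
from transformers import pipeline

# Hypothetical usage based on the repo's declared fill-mask pipeline tag
unmasker = pipeline("fill-mask", model="damgomz/ft_bs64_lr6_base_x2")
print(unmasker("The model was trained on a [MASK] dataset."))
```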
|
jhliu/DischargeGen-Llama3-lora-BHC
|
jhliu
| 2024-05-17T11:42:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-05-17T11:20:33Z |
---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Description
Model developed for the ["Discharge Me!"](https://stanford-aimi.github.io/discharge-me/) Shared Task at BioNLP ACL'24 to generate discharge summary sections.
|
jhliu/DischargeGen-PRIMERA-BHC
|
jhliu
| 2024-05-17T11:40:13Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T11:25:24Z |
# Model Description
Model developed for the ["Discharge Me!"](https://stanford-aimi.github.io/discharge-me/) shared task at BioNLP ACL'24 to generate discharge summary sections.
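No usage code is provided. A minimal sketch for this LED-based text2text checkpoint (the input string is a hypothetical placeholder for a discharge note):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jhliu/DischargeGen-PRIMERA-BHC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# hypothetical input; replace with the actual discharge note text
inputs = tokenizer("<discharge note text>", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```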
|
RichardErkhov/krevas_-_SOLAR-10.7B-gguf
|
RichardErkhov
| 2024-05-17T11:37:39Z | 6 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T09:33:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SOLAR-10.7B - GGUF
- Model creator: https://huggingface.co/krevas/
- Original model: https://huggingface.co/krevas/SOLAR-10.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SOLAR-10.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q2_K.gguf) | Q2_K | 3.73GB |
| [SOLAR-10.7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [SOLAR-10.7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [SOLAR-10.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [SOLAR-10.7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [SOLAR-10.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q3_K.gguf) | Q3_K | 4.84GB |
| [SOLAR-10.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [SOLAR-10.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [SOLAR-10.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [SOLAR-10.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [SOLAR-10.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [SOLAR-10.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [SOLAR-10.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q4_K.gguf) | Q4_K | 6.02GB |
| [SOLAR-10.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [SOLAR-10.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [SOLAR-10.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [SOLAR-10.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [SOLAR-10.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q5_K.gguf) | Q5_K | 7.08GB |
| [SOLAR-10.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [SOLAR-10.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [SOLAR-10.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q6_K.gguf) | Q6_K | 8.2GB |
| [SOLAR-10.7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/krevas_-_SOLAR-10.7B-gguf/blob/main/SOLAR-10.7B.Q8_0.gguf) | Q8_0 | 10.62GB |
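To fetch a single quant from the table above, one option is the `huggingface_hub` client; a sketch (the Q4_K_M file is just an example pick):
```python
from huggingface_hub import hf_hub_download

# downloads one GGUF file and returns its local path
path = hf_hub_download(
    repo_id="RichardErkhov/krevas_-_SOLAR-10.7B-gguf",
    filename="SOLAR-10.7B.Q4_K_M.gguf",  # any filename from the table works
)
print(path)
```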
Original model description:
---
license: apache-2.0
language:
- ko
---
# Model Card for SOLAR-10.7B
## Hardware and Software
* **Hardware**: We utilized a single A100x4 node (four A100 GPUs) to train our model.
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index).
|
juliaromberg/bert-base-german-cased_cimt-argument-concreteness
|
juliaromberg
| 2024-05-17T11:35:23Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"public participation",
"text classification",
"argument quality",
"de",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T19:18:17Z |
---
license: cc-by-nc-sa-4.0
language:
- de
library_name: transformers
tags:
- public participation
- text classification
- argument quality
---
## Model Description
**bert-base-german-cased_cimt-argument-concreteness** is a fine-tuned BERT model that is built to predict the concreteness of argumentative utterances in urban planning processes. It has been trained to recognize three classes: high concreteness (LABEL_2), intermediate concreteness (LABEL_1), and low concreteness (LABEL_0).
Specifically, this model is a *bert-base-german-cased* that was fine-tuned on https://github.com/juliaromberg/cimt-argument-concreteness-dataset.
## Background
This work is based on research in the CIMT project, which investigates the opportunities and challenges of involving citizens in political decisions in the context of sustainable mobility transitions (for more information, visit https://www.cimt-hhu.de/en/).
## Details & Evaluation Results
Can be found in the corresponding publication https://aclanthology.org/2022.argmining-1.11/.
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("juliaromberg/bert-base-german-cased_cimt-argument-concreteness")
model = AutoModelForSequenceClassification.from_pretrained("juliaromberg/bert-base-german-cased_cimt-argument-concreteness")
```
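For quick inference, a sketch using the pipeline API on top of the objects loaded above (the German example sentence is illustrative only):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Die Stadt sollte an der Hauptstraße einen geschützten Radweg bauen."))
# e.g. [{'label': 'LABEL_2', 'score': ...}] where LABEL_2 = high concreteness
```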
### Citation
```
@inproceedings{romberg-2022-perspective,
title = "Is Your Perspective Also My Perspective? Enriching Prediction with Subjectivity",
author = "Romberg, Julia",
editor = "Lapesa, Gabriella and
Schneider, Jodi and
Jo, Yohan and
Saha, Sougata",
booktitle = "Proceedings of the 9th Workshop on Argument Mining",
month = oct,
year = "2022",
address = "Online and in Gyeongju, Republic of Korea",
publisher = "International Conference on Computational Linguistics",
url = "https://aclanthology.org/2022.argmining-1.11",
pages = "115--125",
abstract = "Although argumentation can be highly subjective, the common practice with supervised machine learning is to construct and learn from an aggregated ground truth formed from individual judgments by majority voting, averaging, or adjudication. This approach leads to a neglect of individual, but potentially important perspectives and in many cases cannot do justice to the subjective character of the tasks. One solution to this shortcoming are multi-perspective approaches, which have received very little attention in the field of argument mining so far. In this work we present PerspectifyMe, a method to incorporate perspectivism by enriching a task with subjectivity information from the data annotation process. We exemplify our approach with the use case of classifying argument concreteness, and provide first promising results for the recently published CIMT PartEval Argument Concreteness Corpus.",
}
```
|
jhliu/DischargeGen-PRIMERA-DI
|
jhliu
| 2024-05-17T11:34:03Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T11:25:42Z |
# Model Description
Model developed for the ["Discharge Me!"](https://stanford-aimi.github.io/discharge-me/) shared task at BioNLP ACL'24 to generate discharge summary sections.
|
Kartik2503/Phi-3-mini-128k-instruct-lora
|
Kartik2503
| 2024-05-17T11:32:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T11:22:25Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
rhaymison/Mistral-portuguese-luana-7b-chat
|
rhaymison
| 2024-05-17T11:21:21Z | 11 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Misral",
"Portuguese",
"7b",
"chat",
"portugues",
"conversational",
"pt",
"dataset:rhaymison/ultrachat-easy-use",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-09T14:40:26Z |
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- Misral
- Portuguese
- 7b
- chat
- portugues
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- rhaymison/ultrachat-easy-use
pipeline_tag: text-generation
model-index:
- name: Mistral-portuguese-luana-7b-chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 59.13
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 49.24
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 36.58
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 90.47
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 76.55
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 66.75
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 77.46
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 69.45
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 59.63
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
name: Open Portuguese LLM Leaderboard
---
# Mistral-portuguese-luana-7b-chat
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/luana-chat.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This model was trained on a superset of 250,000 chats in Portuguese.
It helps fill the gap in Portuguese-language models. Fine-tuned from Mistral 7B, it was adjusted mainly for chat.
# How to use
### FULL MODEL: A100
### HALF MODEL: L4
### 8-bit or 4-bit: T4 or V100
You can use the model in its full form or quantized down to 4-bit. Below we show both approaches.
Remember that verbs are important in your prompt: tell the model how to act or behave so that you can guide it toward the response you want.
Details like these help models (even smaller 7B models) perform much better.
```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-chat", device_map= {"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-chat")
model.eval()
```
You can use the model with a `pipeline`, but in this example we use streaming generation:
```python
inputs = tokenizer([f"""<s>[INST] Estou pensando em levar minha esposa e meu gatinha para fazer algo diferente. Você teria alguma coisa em mente ?.
[/INST]"""], return_tensors="pt")
inputs.to(model.device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=128)
```
If you run into a memory problem such as "CUDA out of memory", use 4-bit or 8-bit quantization.
For the complete model in Colab you will need an A100.
For 4-bit or 8-bit, a T4 or L4 is already enough.
# 4-bit example
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Mistral-portuguese-luana-7b-chat",
    quantization_config=nf4_config,
    device_map={"": 0}
)
```
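The 8-bit variant mentioned above follows the same pattern; a minimal sketch:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization (requires bitsandbytes)
nf8_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Mistral-portuguese-luana-7b-chat",
    quantization_config=nf8_config,
    device_map={"": 0}
)
```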
# LangChain
<div style="background-color:#f0f0f0; padding:10px; border-radius:10px; font-family: 'Arial';">
<p><strong><span style="color:#3366cc;">Human:</span></strong> Ola Luana, estou precisando de umas dicas para o final de semana.
Poderia me ajudar ?</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Olá! Claro, ficarei feliz em ajudá-lo. O que você tem em mente para o final de semana?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Estou pensando em levar minha esposa e meu gatinha para fazer algo diferente.
Você teria alguma coisa em mente ?</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Claro! Você pode considerar fazer um piquenique na praia, visitar um museu
ou fazer uma caminhada na natureza. O que você e sua esposa gostam de fazer juntos?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Eu e minha esposa gostamos muito de caminhadas.
Temos uma mochila que carregamos nosso gatinha para todo lado.</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Parece ótimo! Você pode considerar fazer uma caminhada em um parque local ou reserva natural. Você
também pode pesquisar trilhas mais longas para fazer em um fim de semana.
Além disso, você pode trazer um piquenique com você para saborear durante a caminhada.</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Otimas ideias voce tem me dado. O que será que é gostoso levar em um piquenique ?</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Você pode trazer sanduíches, frutas, vegetais, queijo e biscoitos.
Você também pode trazer uma garrafa de água ou um refrigerante para beber.
Além disso, você pode trazer um saco de lixo para descartar resíduos de maneira responsável.</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Olha só que dica bacana. Realmente devo levar uma sacola para nao descartar
nada na Natureza e assim me divertir com minha esposa.</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Sim, é importante descartar resíduos de maneira responsável e proteger o meio ambiente.
Você pode considerar fazer um piquenique em um parque local ou reserva natural onde
possa descartar resíduos de maneira responsável.</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Perfeito. Muito Obrigado Luana. Otimas dicas. Até a proxima !</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> De nada! Fique em contato se precisar de mais ajuda. Tenha um ótimo fim de semana!</p>
</div>
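The transcript above illustrates the chat behaviour. To actually drive the model from LangChain, one common route is wrapping a transformers pipeline; a sketch, assuming the `model` and `tokenizer` objects from earlier and the `langchain_community` package:
```python
from transformers import pipeline
from langchain_community.llms import HuggingFacePipeline

# wrap the loaded model in a text-generation pipeline, then expose it to LangChain
hf_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=128)
llm = HuggingFacePipeline(pipeline=hf_pipe)
print(llm.invoke("<s>[INST] Ola Luana, estou precisando de umas dicas para o final de semana. [/INST]"))
```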
# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/Mistral-portuguese-luana-7b-chat)
| Metric | Value |
|--------------------------|---------|
|Average |**65.03**|
|ENEM Challenge (No Images)| 59.13|
|BLUEX (No Images) | 49.24|
|OAB Exams | 36.58|
|Assin2 RTE | 90.47|
|Assin2 STS | 76.55|
|FaQuAD NLI | 66.75|
|HateBR Binary | 77.46|
|PT Hate Speech Binary | 69.45|
|tweetSentBR | 59.63|
### Comments
Any ideas, help, or reports are always welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
</div>
|
rhaymison/transcription-portuguese-legal-whisper-peft
|
rhaymison
| 2024-05-17T11:19:40Z | 0 | 3 |
transformers
|
[
"transformers",
"safetensors",
"Automatic Speech Recognition",
"ASR",
"whisper",
"pt",
"dataset:rhaymison/whisper-legal-portuguese",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-13T23:17:46Z |
---
license: apache-2.0
base_model: openai/whisper-large-v3
datasets:
- rhaymison/whisper-legal-portuguese
language:
- pt
metrics:
- wer
library_name: transformers
tags:
- Automatic Speech Recognition
- ASR
- whisper
---
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/052724ae-35bb-4e6d-aab4-2035820c6d07.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
# About the model
This model is a fine-tune of Whisper large-v3 adapted for judicial proceedings in the Portuguese language.
It was tuned on more than 20 hours of court-hearing audio, direct instruction on the constitution, and other legal material.
The model is distributed as PEFT adapters.
## Installing dependencies
```
!pip install transformers
!pip install einops accelerate bitsandbytes
!pip install sentence_transformers
!pip install git+https://github.com/huggingface/peft.git
```
## Loading and configuring the model
```python
from peft import PeftModel, PeftConfig
from transformers import WhisperForConditionalGeneration,BitsAndBytesConfig
task = "transcribe"
language = "portuguese"
nf8_config = BitsAndBytesConfig(
load_in_8bit=True
)
peft_model_id = "rhaymison/transcription-portuguese-legal-whisper-peft"
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = WhisperForConditionalGeneration.from_pretrained(peft_config.base_model_name_or_path,
quantization_config=nf8_config,
device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
```
# Load in 4-bit
Note: in 4-bit the model may not perform as well; 8-bit is recommended for higher quality.
```python
import torch
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
model = WhisperForConditionalGeneration.from_pretrained(peft_config.base_model_name_or_path,
quantization_config=nf4_config,
device_map="auto")
```
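As with the 8-bit path, the quantized base model still needs the adapters applied; a sketch reusing `peft_model_id` from above:
```python
from peft import PeftModel

# attach the fine-tuned adapters to the 4-bit base model
model = PeftModel.from_pretrained(model, peft_model_id)
```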
## Loading processor
```python
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)
```
Convert the audio to a 16000 Hz sample rate and save it.
If necessary, install pydub first.
```python
from pydub import AudioSegment
audio = AudioSegment.from_wav('/content/audio.wav')
audio = audio.set_frame_rate(16000)
audio.export('/content/z.wav', format='wav')
```
# Configuring the pipeline
```python
import torch
from transformers import pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16, # adapt to your use case
return_timestamps=True,
torch_dtype = torch.float16
)
```
# Performing transcription
```python
transcription = pipe("/content/z.wav",generate_kwargs={"language": "portuguese"})
```
The reference audio was extracted from the video below.
Video: https://www.youtube.com/watch?v=fHJaQDMSIX4
Output:
```python
#Nome completo? Anderson Lacerda. O senhor sofreu agressão no momento da sua prisão? Não, senhora. Tem alguma doença? Tem alguns.
#O que o senhor tem? Pressão, diabetes. Mas o senhor toma medicação de uso controlado? Tomo. Então o senhor tem problema de pressão alta? Insulina, injetável.
#Então, mais algumas. Coração. Então, você tem problema de pressão alta? Tem. Pressão alta e diabetes? Sim. Eu sou infartado. Tá.
#E toma medicações pro coração. Faz tratamento. É isso? As três doenças. Ok. É... É isso? É isso Faz tratamento.
#É isso? As três doenças. Ok. Os senhores estão aqui em razão de um mandado de prisão temporária. Contra os senhores, uma investigação por crime de tráfico de drogas.
#O prazo de prisão dos senhores é de 30 dias inicialmente. Eventualmente, se verificar a participação dos senhores nesses delitos aí investigados, a prisão temporária pode
#se converter em prisão preventiva, mas caso não se comprove a participação dos senhores, os senhores são colocados em liberdade. Alguma dúvida? Doutores,
#algum requerimento, alguma dúvida? Sem requerimento. A defesa de âncio tem, excelvida. A defesa de Anderson tem excelência. Pois não, doutor.
#A defesa de Anderson requer que haja reconhecimento da ilegalidade da prisão temporária, haja vista que, segundo a ótica defensiva, ainda existem fundamentos que a
#posse invasar, e eu vou elencar pelo menos três aqui, que são, pelo menos, nessa ocasião, de salutar importância.
# É bem verdade que a defesa reconhece aumento de mérito. Todavia, como dito, a defesa reconhece que não existe fundamento.
#A lei diz que a prisão temporária vai ser decretada quando for imprescindível para as investigações, inclusive entendimento recente do Supremo Tribunal Federal.
#é insendível para as investigações, inclusive o entendimento recente do Supremo Tribunal Federal.
#No caso de Intela, pesava ainda em favor do meu cliente, do meu constituinte, o senhor Anderson, três mandagens de prisão, ou seja,
#tinha três prisões preventivas decretadas pelo juízo de alujar. É importante ressaltar que no processo de alujar, excelência, houve uma operação policial,
#meses posteriores à denúncia feita pelo próprio senhor Anderson, acerca de extorsão cometida pela equipe policial.
#Assim sendo, Excelência, entendo a defesa que, uma vez que havia a prisão preventiva, não havia razão para essa prisão temporária,
#aquela tinha que ser cumprida primeiro, até porque é o seguinte, uma vez cumprida na prisão preventiva, ele ficaria a serviço e
#a disposição na Secretaria de Administração Penitenciária e qualquer unidade prisional e não na delegacia, que é justamente o
#que a gente quer evitar. Haja vista que pesa justamente essa questão dessa possível retaliação, inclusive colocando em risco a
#integridade física e a vida do cliente. A razão pela qual a defesa pede que a Vossa Excelência, nos termos do artigo 5º
#a Constituição Federal, relaxe essa prisão temporária porque ela, sob essa ótica, ela é ilegal. É o que a defesa requer.
```
# Information
If you have any feedback or adjustments, or want to contribute to the project, please feel free to contact us:
[email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
</div>
|
Liuxuanxi/macdonaldsnegative_Reason_model
|
Liuxuanxi
| 2024-05-17T11:13:00Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T11:12:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LuckyMan123/hemingway-lora
|
LuckyMan123
| 2024-05-17T11:11:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T11:11:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moetezsa/mistral_wikitable_text
|
moetezsa
| 2024-05-17T11:10:17Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T10:54:53Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: mistral_wikitable_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_wikitable_text
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
vetrivelm5/bert-finetuned-ner4
|
vetrivelm5
| 2024-05-17T11:07:55Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T11:07:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DUAL-GPO-2/phi-2-gpo-v3-i2
|
DUAL-GPO-2
| 2024-05-17T11:03:42Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"base_model:adapter:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"region:us"
] | null | 2024-05-17T08:42:42Z |
---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO-2/phi-2-gpo-v34-merged-i1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-v3-i2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-v3-i2
This model is a fine-tuned version of [DUAL-GPO-2/phi-2-gpo-v34-merged-i1](https://huggingface.co/DUAL-GPO-2/phi-2-gpo-v34-merged-i1) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
RichardErkhov/mlabonne_-_NeuralPipe-7B-slerp-4bits
|
RichardErkhov
| 2024-05-17T10:58:36Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T10:54:45Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralPipe-7B-slerp - bnb 4bits
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralPipe-7B-slerp/
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: NeuralPipe-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
---
# NeuralPipe-7B
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ⚡ Quantized models
Thanks to TheBloke for the quantized models:
* **GGUF**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GPTQ
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:
```
A large language model is an AI system that uses deep learning techniques to process and understand vast amounts of natural language data. It is designed to generate human-like text, perform complex language tasks, and understand the context, nuance, and meaning of textual data. These models are trained on large datasets, often including billions of words, to learn the patterns and relationships in language. As a result, they can generate coherent and contextually relevant text, answer questions, and perform a variety of other language-related tasks. Some well-known large language models include OpenAI's GPT-3, Google's BERT, and Facebook's RoBERTa.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralPipe-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.17|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot) |86.15|
|MMLU (5-Shot) |63.94|
|TruthfulQA (0-shot) |59.80|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |69.75|
|
SangBinCho/long_context_32k_testing
|
SangBinCho
| 2024-05-17T10:58:16Z | 1,406 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-05-17T10:57:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
SangBinCho/long_context_16k_testing_2
|
SangBinCho
| 2024-05-17T10:57:39Z | 1,405 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-05-17T10:57:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
SangBinCho/long_context_16k_testing_1
|
SangBinCho
| 2024-05-17T10:56:58Z | 1,833 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-05-17T10:50:31Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
statking/zephyr-7b-sft-qlora
|
statking
| 2024-05-17T10:54:24Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T05:15:29Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: zephyr-7b-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/statking/huggingface/runs/dnr1n96z)
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9454 | 1.0 | 2179 | 0.9439 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Onlysmokehuazi/Huazi_Sentiment_Analysis
|
Onlysmokehuazi
| 2024-05-17T10:49:32Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T10:48:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
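Official usage code has not been provided; a minimal sketch, assuming the standard transformers text-classification pipeline:
```python
from transformers import pipeline

# RoBERTa-based sentiment classifier; the label set depends on the fine-tuning setup.
classifier = pipeline("text-classification", model="Onlysmokehuazi/Huazi_Sentiment_Analysis")
print(classifier("I really enjoyed this movie!"))
```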
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KenanKhan/a2c-PandaReachDense-v3
|
KenanKhan
| 2024-05-17T10:47:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-17T10:42:42Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -19.09 +/- 7.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub and load it.
# The zip filename is an assumption -- check the repo's files for the exact name.
path = load_from_hub("KenanKhan/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(path)
```
|
amaye15/google-siglip-base-patch16-224-batch64-lr5e-05-standford-dogs
|
amaye15
| 2024-05-17T10:42:31Z | 120 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"generated_from_trainer",
"dataset:stanford-dogs",
"base_model:google/siglip-base-patch16-224",
"base_model:finetune:google/siglip-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-17T10:42:10Z |
---
license: apache-2.0
base_model: google/siglip-base-patch16-224
tags:
- generated_from_trainer
datasets:
- stanford-dogs
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: google-siglip-base-patch16-224-batch64-lr5e-05-standford-dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: stanford-dogs
type: stanford-dogs
config: default
split: full
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8364917395529641
- name: F1
type: f1
value: 0.8328749982143954
- name: Precision
type: precision
value: 0.8377481660081763
- name: Recall
type: recall
value: 0.8330663170433035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-siglip-base-patch16-224-batch64-lr5e-05-standford-dogs
This model is a fine-tuned version of [google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) on the stanford-dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5612
- Accuracy: 0.8365
- F1: 0.8329
- Precision: 0.8377
- Recall: 0.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
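A minimal inference sketch, assuming the standard transformers image-classification pipeline:
```python
from transformers import pipeline

# SigLIP fine-tune over the 120 Stanford Dogs breeds.
classifier = pipeline(
    "image-classification",
    model="amaye15/google-siglip-base-patch16-224-batch64-lr5e-05-standford-dogs",
)
print(classifier("dog.jpg"))  # path to a local image file
```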
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 4.822 | 0.1550 | 10 | 4.2549 | 0.0782 | 0.0493 | 0.0987 | 0.0726 |
| 4.236 | 0.3101 | 20 | 3.5279 | 0.1907 | 0.1507 | 0.2201 | 0.1830 |
| 3.5066 | 0.4651 | 30 | 2.5316 | 0.3319 | 0.2941 | 0.4180 | 0.3205 |
| 2.8064 | 0.6202 | 40 | 2.1243 | 0.4361 | 0.4090 | 0.5324 | 0.4282 |
| 2.441 | 0.7752 | 50 | 1.5798 | 0.5510 | 0.5250 | 0.6242 | 0.5438 |
| 2.0985 | 0.9302 | 60 | 1.4242 | 0.5843 | 0.5577 | 0.6400 | 0.5768 |
| 1.8689 | 1.0853 | 70 | 1.1481 | 0.6625 | 0.6456 | 0.7143 | 0.6565 |
| 1.6588 | 1.2403 | 80 | 1.1937 | 0.6465 | 0.6361 | 0.7062 | 0.6439 |
| 1.5807 | 1.3953 | 90 | 0.9818 | 0.7058 | 0.6890 | 0.7438 | 0.6981 |
| 1.4851 | 1.5504 | 100 | 1.0181 | 0.7000 | 0.6839 | 0.7373 | 0.6959 |
| 1.5033 | 1.7054 | 110 | 1.0169 | 0.6914 | 0.6845 | 0.7490 | 0.6883 |
| 1.3022 | 1.8605 | 120 | 0.9087 | 0.7276 | 0.7170 | 0.7643 | 0.7222 |
| 1.3106 | 2.0155 | 130 | 0.8385 | 0.7432 | 0.7352 | 0.7667 | 0.7363 |
| 1.1721 | 2.1705 | 140 | 0.8957 | 0.7128 | 0.7026 | 0.7592 | 0.7075 |
| 1.131 | 2.3256 | 150 | 0.8730 | 0.7259 | 0.7149 | 0.7687 | 0.7196 |
| 1.1223 | 2.4806 | 160 | 0.8132 | 0.7546 | 0.7457 | 0.7855 | 0.7482 |
| 1.0688 | 2.6357 | 170 | 0.7485 | 0.7704 | 0.7601 | 0.7863 | 0.7631 |
| 1.0686 | 2.7907 | 180 | 0.7559 | 0.7651 | 0.7587 | 0.7920 | 0.7609 |
| 0.9733 | 2.9457 | 190 | 0.7779 | 0.7553 | 0.7458 | 0.7797 | 0.7521 |
| 0.9287 | 3.1008 | 200 | 0.7048 | 0.7818 | 0.7756 | 0.7981 | 0.7756 |
| 0.8746 | 3.2558 | 210 | 0.6848 | 0.7867 | 0.7774 | 0.8034 | 0.7822 |
| 0.7982 | 3.4109 | 220 | 0.6930 | 0.7884 | 0.7796 | 0.8025 | 0.7846 |
| 0.823 | 3.5659 | 230 | 0.7041 | 0.7804 | 0.7717 | 0.7975 | 0.7752 |
| 0.8713 | 3.7209 | 240 | 0.7418 | 0.7755 | 0.7646 | 0.8053 | 0.7711 |
| 0.8651 | 3.8760 | 250 | 0.6847 | 0.7828 | 0.7773 | 0.8048 | 0.7782 |
| 0.784 | 4.0310 | 260 | 0.6662 | 0.7923 | 0.7841 | 0.8097 | 0.7860 |
| 0.6894 | 4.1860 | 270 | 0.6980 | 0.7843 | 0.7781 | 0.8024 | 0.7779 |
| 0.7727 | 4.3411 | 280 | 0.6629 | 0.7833 | 0.7804 | 0.8030 | 0.7798 |
| 0.6978 | 4.4961 | 290 | 0.6820 | 0.7845 | 0.7800 | 0.8011 | 0.7820 |
| 0.7032 | 4.6512 | 300 | 0.6148 | 0.8032 | 0.7969 | 0.8094 | 0.7985 |
| 0.6978 | 4.8062 | 310 | 0.6457 | 0.7940 | 0.7872 | 0.8085 | 0.7892 |
| 0.66 | 4.9612 | 320 | 0.6242 | 0.8088 | 0.8033 | 0.8246 | 0.8058 |
| 0.5706 | 5.1163 | 330 | 0.6404 | 0.7966 | 0.7905 | 0.8097 | 0.7928 |
| 0.5456 | 5.2713 | 340 | 0.7147 | 0.7872 | 0.7767 | 0.8060 | 0.7819 |
| 0.5869 | 5.4264 | 350 | 0.6267 | 0.8066 | 0.8016 | 0.8188 | 0.8025 |
| 0.6022 | 5.5814 | 360 | 0.6197 | 0.8061 | 0.8028 | 0.8209 | 0.8027 |
| 0.5676 | 5.7364 | 370 | 0.6061 | 0.8059 | 0.8005 | 0.8140 | 0.8024 |
| 0.5456 | 5.8915 | 380 | 0.6018 | 0.8069 | 0.8006 | 0.8254 | 0.8033 |
| 0.56 | 6.0465 | 390 | 0.6126 | 0.8090 | 0.8037 | 0.8206 | 0.8045 |
| 0.4582 | 6.2016 | 400 | 0.6122 | 0.8115 | 0.8062 | 0.8196 | 0.8061 |
| 0.4594 | 6.3566 | 410 | 0.6058 | 0.8122 | 0.8081 | 0.8235 | 0.8082 |
| 0.4868 | 6.5116 | 420 | 0.5890 | 0.8195 | 0.8131 | 0.8300 | 0.8141 |
| 0.4841 | 6.6667 | 430 | 0.5909 | 0.8175 | 0.8119 | 0.8250 | 0.8133 |
| 0.4537 | 6.8217 | 440 | 0.5889 | 0.8195 | 0.8153 | 0.8261 | 0.8164 |
| 0.4807 | 6.9767 | 450 | 0.6105 | 0.8144 | 0.8104 | 0.8300 | 0.8106 |
| 0.4051 | 7.1318 | 460 | 0.5917 | 0.8171 | 0.8103 | 0.8217 | 0.8131 |
| 0.3727 | 7.2868 | 470 | 0.6037 | 0.8166 | 0.8116 | 0.8262 | 0.8125 |
| 0.4034 | 7.4419 | 480 | 0.6407 | 0.8032 | 0.8003 | 0.8146 | 0.8015 |
| 0.3684 | 7.5969 | 490 | 0.6205 | 0.8061 | 0.7997 | 0.8176 | 0.8008 |
| 0.416 | 7.7519 | 500 | 0.5855 | 0.8258 | 0.8207 | 0.8364 | 0.8211 |
| 0.3947 | 7.9070 | 510 | 0.5802 | 0.8214 | 0.8179 | 0.8283 | 0.8179 |
| 0.3731 | 8.0620 | 520 | 0.5870 | 0.8239 | 0.8191 | 0.8324 | 0.8188 |
| 0.3203 | 8.2171 | 530 | 0.5783 | 0.8265 | 0.8211 | 0.8302 | 0.8216 |
| 0.337 | 8.3721 | 540 | 0.5836 | 0.8200 | 0.8162 | 0.8247 | 0.8166 |
| 0.3396 | 8.5271 | 550 | 0.5992 | 0.8156 | 0.8121 | 0.8253 | 0.8115 |
| 0.3355 | 8.6822 | 560 | 0.5755 | 0.8229 | 0.8182 | 0.8281 | 0.8187 |
| 0.3273 | 8.8372 | 570 | 0.5819 | 0.8246 | 0.8194 | 0.8268 | 0.8208 |
| 0.3181 | 8.9922 | 580 | 0.5840 | 0.8205 | 0.8174 | 0.8279 | 0.8168 |
| 0.2855 | 9.1473 | 590 | 0.5997 | 0.8144 | 0.8098 | 0.8213 | 0.8103 |
| 0.254 | 9.3023 | 600 | 0.5863 | 0.8183 | 0.8132 | 0.8251 | 0.8133 |
| 0.2781 | 9.4574 | 610 | 0.5779 | 0.8224 | 0.8169 | 0.8275 | 0.8195 |
| 0.2691 | 9.6124 | 620 | 0.5816 | 0.8219 | 0.8177 | 0.8257 | 0.8186 |
| 0.3018 | 9.7674 | 630 | 0.5814 | 0.8297 | 0.8250 | 0.8370 | 0.8253 |
| 0.2615 | 9.9225 | 640 | 0.5761 | 0.8299 | 0.8261 | 0.8377 | 0.8262 |
| 0.2707 | 10.0775 | 650 | 0.5640 | 0.8326 | 0.8283 | 0.8385 | 0.8284 |
| 0.2482 | 10.2326 | 660 | 0.5685 | 0.8246 | 0.8206 | 0.8284 | 0.8218 |
| 0.2493 | 10.3876 | 670 | 0.5717 | 0.8241 | 0.8208 | 0.8311 | 0.8199 |
| 0.2167 | 10.5426 | 680 | 0.5741 | 0.8246 | 0.8204 | 0.8273 | 0.8204 |
| 0.2628 | 10.6977 | 690 | 0.5791 | 0.8248 | 0.8205 | 0.8281 | 0.8216 |
| 0.2316 | 10.8527 | 700 | 0.5770 | 0.8321 | 0.8272 | 0.8348 | 0.8284 |
| 0.2326 | 11.0078 | 710 | 0.5755 | 0.8280 | 0.8249 | 0.8348 | 0.8249 |
| 0.2001 | 11.1628 | 720 | 0.5783 | 0.8336 | 0.8299 | 0.8354 | 0.8310 |
| 0.1759 | 11.3178 | 730 | 0.5804 | 0.8345 | 0.8302 | 0.8367 | 0.8311 |
| 0.202 | 11.4729 | 740 | 0.5820 | 0.8316 | 0.8278 | 0.8353 | 0.8280 |
| 0.2191 | 11.6279 | 750 | 0.5724 | 0.8324 | 0.8279 | 0.8341 | 0.8287 |
| 0.1955 | 11.7829 | 760 | 0.5957 | 0.8226 | 0.8181 | 0.8268 | 0.8198 |
| 0.1972 | 11.9380 | 770 | 0.5722 | 0.8294 | 0.8254 | 0.8318 | 0.8263 |
| 0.1848 | 12.0930 | 780 | 0.5731 | 0.8311 | 0.8269 | 0.8339 | 0.8281 |
| 0.1613 | 12.2481 | 790 | 0.5682 | 0.8382 | 0.8344 | 0.8397 | 0.8356 |
| 0.1665 | 12.4031 | 800 | 0.5565 | 0.8350 | 0.8325 | 0.8365 | 0.8325 |
| 0.1739 | 12.5581 | 810 | 0.5738 | 0.8360 | 0.8328 | 0.8395 | 0.8326 |
| 0.1744 | 12.7132 | 820 | 0.5628 | 0.8360 | 0.8327 | 0.8387 | 0.8328 |
| 0.1737 | 12.8682 | 830 | 0.5712 | 0.8355 | 0.8320 | 0.8395 | 0.8324 |
| 0.1635 | 13.0233 | 840 | 0.5745 | 0.8309 | 0.8256 | 0.8328 | 0.8269 |
| 0.1689 | 13.1783 | 850 | 0.5781 | 0.8326 | 0.8288 | 0.8358 | 0.8294 |
| 0.1611 | 13.3333 | 860 | 0.5740 | 0.8328 | 0.8280 | 0.8349 | 0.8289 |
| 0.1624 | 13.4884 | 870 | 0.5656 | 0.8324 | 0.8279 | 0.8328 | 0.8287 |
| 0.1635 | 13.6434 | 880 | 0.5618 | 0.8319 | 0.8276 | 0.8328 | 0.8280 |
| 0.1395 | 13.7984 | 890 | 0.5648 | 0.8350 | 0.8311 | 0.8368 | 0.8312 |
| 0.1489 | 13.9535 | 900 | 0.5666 | 0.8341 | 0.8304 | 0.8370 | 0.8304 |
| 0.1174 | 14.1085 | 910 | 0.5700 | 0.8358 | 0.8321 | 0.8400 | 0.8320 |
| 0.1274 | 14.2636 | 920 | 0.5720 | 0.8331 | 0.8295 | 0.8366 | 0.8295 |
| 0.134 | 14.4186 | 930 | 0.5657 | 0.8353 | 0.8311 | 0.8369 | 0.8317 |
| 0.1327 | 14.5736 | 940 | 0.5662 | 0.8343 | 0.8308 | 0.8367 | 0.8307 |
| 0.1165 | 14.7287 | 950 | 0.5654 | 0.8341 | 0.8301 | 0.8355 | 0.8303 |
| 0.1277 | 14.8837 | 960 | 0.5661 | 0.8345 | 0.8308 | 0.8360 | 0.8310 |
| 0.1221 | 15.0388 | 970 | 0.5615 | 0.8370 | 0.8335 | 0.8388 | 0.8335 |
| 0.1194 | 15.1938 | 980 | 0.5632 | 0.8353 | 0.8318 | 0.8369 | 0.8319 |
| 0.1126 | 15.3488 | 990 | 0.5616 | 0.8362 | 0.8326 | 0.8376 | 0.8327 |
| 0.1256 | 15.5039 | 1000 | 0.5612 | 0.8365 | 0.8329 | 0.8377 | 0.8331 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RDson/Llama-3-Magenta-Instruct-4x8B-MoE
|
RDson
| 2024-05-17T10:42:13Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"llama",
"3",
"llama 3",
"4x8b",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-15T08:48:30Z |
---
tags:
- moe
- llama
- '3'
- llama 3
- 4x8b
---
# Llama-3-Magenta-Instruct-4x8B-MoE
<img src="https://i.imgur.com/c1Mv8cy.png" width="640"/>
## You should also check out the updated [Llama-3-Peach-Instruct-4x8B-MoE](https://huggingface.co/RDson/Llama-3-Peach-Instruct-4x8B-MoE)!
GGUF files are available here: [Llama-3-Magenta-Instruct-4x8B-MoE-GGUF](https://huggingface.co/RDson/Llama-3-Magenta-Instruct-4x8B-MoE-GGUF).
This is an experimental MoE created with Mergekit from:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
* [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R)
* [Muhammad2003/Llama3-8B-OpenHermes-DPO](https://huggingface.co/Muhammad2003/Llama3-8B-OpenHermes-DPO)
Mergekit yaml file:
```
base_model: Meta-Llama-3-8B-Instruct
experts:
- source_model: Meta-Llama-3-8B-Instruct
positive_prompts:
- "explain"
- "chat"
- "assistant"
- "think"
- "roleplay"
- "versatile"
- "helpful"
- "factual"
- "integrated"
- "adaptive"
- "comprehensive"
- "balanced"
negative_prompts:
- "specialized"
- "narrow"
- "focused"
- "limited"
- "specific"
- source_model: ChatQA-1.5-8B
positive_prompts:
- "python"
- "math"
- "solve"
- "code"
- "programming"
negative_prompts:
- "sorry"
- "cannot"
- "factual"
- "concise"
- "straightforward"
- "objective"
- "dry"
- source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
positive_prompts:
- "chat"
- "assistant"
- "AI"
- "instructive"
- "clear"
- "directive"
- "helpful"
- "informative"
- source_model: Llama3-8B-OpenHermes-DPO
positive_prompts:
- "analytical"
- "accurate"
- "logical"
- "knowledgeable"
- "precise"
- "calculate"
- "compute"
- "solve"
- "work"
- "python"
- "code"
- "javascript"
- "programming"
- "algorithm"
- "tell me"
- "assistant"
negative_prompts:
- "creative"
- "abstract"
- "imaginative"
- "artistic"
- "emotional"
- "mistake"
- "inaccurate"
gate_mode: hidden
dtype: float16
```
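A minimal loading sketch, assuming the standard transformers causal-LM API; the merged 4x8B model is large, so `device_map="auto"` is used to shard it across available devices:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RDson/Llama-3-Magenta-Instruct-4x8B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
prompt = "Explain mixture-of-experts in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```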
The Mergekit yaml file draws some inspiration from [LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2](https://huggingface.co/LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2).
|
lordjia/wawj-1.8b-chat-gguf
|
lordjia
| 2024-05-17T10:41:56Z | 7 | 1 | null |
[
"gguf",
"chat",
"dialect",
"text-generation",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-15T14:30:52Z |
---
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
tags:
- chat
- dialect
---
# Beijing-Dialect Chat Model
A Beijing-dialect chat model fine-tuned from the Qwen1.5 large language model, offering a colloquial Beijing-dialect conversation experience. The project includes 7B and 14B versions, fine-tuned on dialogue lines from Beijing-dialect sitcoms.
## Model Description
- **7B version (`wawj-7b-chat-gguf`)**: a mid-sized model that balances performance and quality.
- **14B version (`wawj-14b-chat-gguf`)**: a large model tuned for colloquial humor, suited to high-quality generation.
## Usage Guide
### Using in LM Studio
Search for `lordjia/wawj` in LM Studio to use the models directly, with no manual download required. Set the System Prompt to:
以口语化的北京方言与用户对话。("Converse with the user in colloquial Beijing dialect.")
### Using in Ollama
In Ollama, you can create a model instance directly from the provided Modelfile, with no additional setup required.
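For local use outside LM Studio or Ollama, a minimal sketch with llama-cpp-python (the GGUF filename below is hypothetical -- use whichever quantization you downloaded):
```python
from llama_cpp import Llama

# Filename is hypothetical; pick the GGUF file you fetched from the 7B or 14B repo.
llm = Llama(model_path="wawj-7b-chat.Q4_K_M.gguf", n_ctx=2048)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "以口语化的北京方言与用户对话。"},
    {"role": "user", "content": "今儿天儿怎么样?"},
])
print(out["choices"][0]["message"]["content"])
```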
## Model Links
- [7B version](https://huggingface.co/lordjia/wawj-7b-chat-gguf)
- [14B version](https://huggingface.co/lordjia/wawj-14b-chat-gguf)
|
abbenedek/whisper-small.en-finetuning2-D3K
|
abbenedek
| 2024-05-17T10:39:47Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T10:20:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abbenedek/abbenedekwhisper-small.en-finetuning2-D3K
|
abbenedek
| 2024-05-17T10:39:45Z | 123 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-17T09:53:27Z |
---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: abbenedekwhisper-small.en-finetuning2-D3K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# abbenedekwhisper-small.en-finetuning2-D3K
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0682
- Cer: 53.2554
- Wer: 135.0993
- Ser: 100.0
- Cer Clean: 0.5008
- Wer Clean: 0.6623
- Ser Clean: 1.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
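A minimal transcription sketch, assuming the standard transformers automatic-speech-recognition pipeline (note the very high WER reported above):
```python
from transformers import pipeline

# English-only Whisper fine-tune; accepts a local audio file path or array.
asr = pipeline(
    "automatic-speech-recognition",
    model="abbenedek/abbenedekwhisper-small.en-finetuning2-D3K",
)
print(asr("sample.wav")["text"])
```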
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer | Ser | Cer Clean | Wer Clean | Ser Clean |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|:-----:|:---------:|:---------:|:---------:|
| 7.4342 | 0.05 | 10 | 7.0727 | 53.2554 | 135.0993 | 100.0 | 0.5008 | 0.6623 | 1.7544 |
| 7.2902 | 0.11 | 20 | 7.0722 | 53.2554 | 135.0993 | 100.0 | 0.5008 | 0.6623 | 1.7544 |
| 6.9726 | 0.16 | 30 | 7.0711 | 53.2554 | 135.0993 | 100.0 | 0.5008 | 0.6623 | 1.7544 |
| 7.3598 | 0.21 | 40 | 7.0705 | 53.2554 | 135.0993 | 100.0 | 0.5008 | 0.6623 | 1.7544 |
| 7.0578 | 0.27 | 50 | 7.0682 | 53.2554 | 135.0993 | 100.0 | 0.5008 | 0.6623 | 1.7544 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.2
|
wgj0714/IE_model
|
wgj0714
| 2024-05-17T10:39:22Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T09:47:33Z |
---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IE_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IE_model
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4341
- Accuracy: 0.9007
## Model description
More information needed
## Intended uses & limitations
More information needed
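A minimal sketch, assuming the standard transformers text-classification pipeline (the label set is not documented, so outputs use whatever label names the checkpoint's config defines):
```python
from transformers import pipeline

# klue/roberta-large fine-tune; label names come from the checkpoint's config.
clf = pipeline("text-classification", model="wgj0714/IE_model")
print(clf("이 모델은 문장을 어떤 범주로 분류할까요?"))
```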
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9432 | 1.0 | 574 | 0.5054 | 0.8537 |
| 0.4765 | 2.0 | 1148 | 0.4133 | 0.8589 |
| 0.4471 | 3.0 | 1722 | 0.4228 | 0.8693 |
| 0.3666 | 4.0 | 2296 | 0.4627 | 0.8815 |
| 0.3508 | 5.0 | 2870 | 0.3704 | 0.8833 |
| 0.325 | 6.0 | 3444 | 0.3704 | 0.9024 |
| 0.2662 | 7.0 | 4018 | 0.3733 | 0.9024 |
| 0.226 | 8.0 | 4592 | 0.4024 | 0.8972 |
| 0.1978 | 9.0 | 5166 | 0.4198 | 0.9042 |
| 0.186 | 10.0 | 5740 | 0.4341 | 0.9007 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
duyntnet/Dr_Samantha-7b-imatrix-GGUF
|
duyntnet
| 2024-05-17T10:37:31Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Dr_Samantha-7b",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-05-17T08:39:08Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Dr_Samantha-7b
---
Quantizations of https://huggingface.co/sethuiyer/Dr_Samantha-7b
# From original readme
## Prompt Template
```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
What is your name?
### Response:
My name is Samantha.
```
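A minimal sketch applying this template with llama-cpp-python; the GGUF filename is hypothetical -- use whichever quantization you downloaded:
```python
from llama_cpp import Llama

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are common symptoms of dehydration?\n\n"
    "### Response:\n"
)
# Filename is hypothetical; point it at the quantization you downloaded.
llm = Llama(model_path="Dr_Samantha-7b.Q4_K_M.gguf")
print(llm(prompt, max_tokens=128, stop=["### Instruction:"])["choices"][0]["text"])
```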
|
damgomz/ThunBERT_bs32_lr5
|
damgomz
| 2024-05-17T10:35:40Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-15T09:33:16Z |
---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T12:35:37'
project_name: ThunBERT_bs32_lr5_emissions_tracker
run_id: bb30bce0-dc39-4380-9fb8-0368180d020b
duration: 180540.8281199932
emissions: 0.1889692277270039
emissions_rate: 1.046684174957971e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 37.5
cpu_energy: 2.1313811261523066
gpu_energy: 0
ram_energy: 1.8806192077472823
energy_consumed: 4.012000333899577
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 100
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 180540.8281199932 |
| Emissions (Co2eq in kg) | 0.1889692277270039 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 2.1313811261523066 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 1.8806192077472823 |
| Consumed energy (kWh) | 4.012000333899577 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.3475410941309869 |
| Emissions (Co2eq in kg) | 0.07071182434699733 |
## Note
15 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ThunBERT_bs32_lr5 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 20393 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 4.767287 | 3.817442 |
| 0.5 | 3.344291 | 3.201500 |
| 1.0 | 3.131225 | 3.044940 |
| 1.5 | 2.992692 | 2.967517 |
| 2.0 | 2.911544 | 2.897029 |
| 2.5 | 2.861460 | 2.844424 |
| 3.0 | 2.797808 | 2.814696 |
| 3.5 | 2.738420 | 2.776461 |
| 4.0 | 2.698689 | 2.730229 |
| 4.5 | 2.671872 | 2.693672 |
| 5.0 | 2.628263 | 2.662948 |
| 5.5 | 2.587204 | 2.639905 |
| 6.0 | 2.565847 | 2.620656 |
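A minimal sketch, assuming the standard transformers fill-mask pipeline (the checkpoint is an ALBERT-style masked-language model):
```python
from transformers import pipeline

# ALBERT-style MLM; the mask token for this tokenizer is [MASK].
fill = pipeline("fill-mask", model="damgomz/ThunBERT_bs32_lr5")
print(fill("Paris is the [MASK] of France."))
```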
|
damgomz/ThunBERT_bs32_lr6
|
damgomz
| 2024-05-17T10:33:26Z | 119 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-15T09:33:44Z |
---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T12:33:24'
project_name: ThunBERT_bs32_lr6_emissions_tracker
run_id: 0e370511-86b8-4ed9-a6b0-d95eceb9448b
duration: 180383.6925137043
emissions: 0.1888047561130275
emissions_rate: 1.0466841735079988e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 37.5
cpu_energy: 2.1295260795647915
gpu_energy: 0
ram_energy: 1.878982362146172
energy_consumed: 4.008508441710953
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 100
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 180383.6925137043 |
| Emissions (Co2eq in kg) | 0.1888047561130275 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 2.1295260795647915 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 1.878982362146172 |
| Consumed energy (kWh) | 4.008508441710953 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.34723860808888074 |
| Emissions (Co2eq in kg) | 0.07065027956786751 |
## Note
15 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ThunBERT_bs32_lr6 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 20427 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 6.533427 | 5.411902 |
| 0.5 | 3.138879 | 3.017244 |
| 1.0 | 2.992936 | 2.933242 |
| 1.5 | 2.941352 | 2.887388 |
| 2.0 | 2.903994 | 2.857268 |
| 2.5 | 2.873236 | 2.842794 |
| 3.0 | 2.853003 | 2.823149 |
| 3.5 | 2.836404 | 2.804283 |
| 4.0 | 2.823654 | 2.797022 |
| 4.5 | 2.808353 | 2.784016 |
| 5.0 | 2.792922 | 2.781907 |
| 5.5 | 2.785201 | 2.784908 |
| 6.0 | 2.782254 | 2.771417 |
|
roshinishetty333/llama-7b-lora-tuned-per
|
roshinishetty333
| 2024-05-17T10:29:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-15T18:58:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manjugeorge/wav2vec2-base-Manju
|
manjugeorge
| 2024-05-17T10:28:54Z | 83 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-16T14:34:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/wav2vec2-large-xlsr-53
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-base-Manju
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: ml
split: None
args: ml
metrics:
- type: wer
value: 1.0
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-Manju
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0961
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:---:|
| 12.5584 | 1.9763 | 500 | 4.0961 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
anjali12-3/results
|
anjali12-3
| 2024-05-17T10:26:16Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T10:25:26Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
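A minimal sketch, assuming the standard transformers text2text-generation pipeline (the task framing depends on the unspecified training data):
```python
from transformers import pipeline

# T5-base fine-tune; prefix conventions follow the base model unless retrained away.
generator = pipeline("text2text-generation", model="anjali12-3/results")
print(generator("summarize: The quick brown fox jumps over the lazy dog."))
```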
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 12 | 13.1601 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Vamsi-Chowdary/finetune-bloom7b
|
Vamsi-Chowdary
| 2024-05-17T10:23:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T10:20:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
euiyulsong/Mistral-7B-ORPO-6kpre
|
euiyulsong
| 2024-05-17T10:06:22Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T10:01:33Z |
---
library_name: transformers
tags:
- trl
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
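The checkpoint is tagged as 4-bit (bitsandbytes), so a minimal loading sketch might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "euiyulsong/Mistral-7B-ORPO-6kpre"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo ships 4-bit bitsandbytes weights, so the quantization config
# is picked up from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```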
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hamza12rdsdsf/blip2-opt-2.7b-onepiece-fine-tuned-v2
|
Hamza12rdsdsf
| 2024-05-17T09:57:49Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T09:57:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dejankocic/questionAnswering
|
dejankocic
| 2024-05-17T09:56:23Z | 0 | 0 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T09:55:35Z |
---
license: apache-2.0
language:
- en
---
|
MoGP/g_x_Moji_stratify
|
MoGP
| 2024-05-17T09:51:42Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T08:23:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
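In the absence of official instructions, a minimal sketch of loading this checkpoint with the standard `transformers` text-classification pipeline (the label set is read from the model's own config, so it is whatever the authors trained with):
```python
from transformers import pipeline

# repo id taken from this card; labels come from the model config
classifier = pipeline("text-classification", model="MoGP/g_x_Moji_stratify")
print(classifier("I can't believe this actually worked!"))
```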
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sarraj19/metaphor_classifier_tamil
|
sarraj19
| 2024-05-17T09:47:38Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/muril-base-cased",
"base_model:finetune:google/muril-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-13T21:36:54Z |
---
license: apache-2.0
base_model: google/muril-base-cased
tags:
- generated_from_trainer
model-index:
- name: metaphor_classifier_tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# metaphor_classifier_tamil
This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
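Pending details from the authors, a minimal usage sketch with the `transformers` pipeline (the label names, e.g. metaphor vs. literal, come from the model config and are an assumption here):
```python
from transformers import pipeline

# fine-tuned MuRIL checkpoint for Tamil metaphor classification
clf = pipeline("text-classification", model="sarraj19/metaphor_classifier_tamil")
print(clf("வாழ்க்கை ஒரு பயணம்"))  # "Life is a journey"
```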
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
awais-nayyar/Dreambooth-SDXL-Awi-3
|
awais-nayyar
| 2024-05-17T09:46:21Z | 6 | 1 |
diffusers
|
[
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-17T09:46:18Z |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of Awais Nayyar wearing casual clothes, standing and smiling.
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - awais-nayyar/Dreambooth-SDXL-Awi-3
<Gallery />
## Model description
These are awais-nayyar/Dreambooth-SDXL-Awi-3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use "A photo of Awais Nayyar wearing casual clothes, standing and smiling." to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](awais-nayyar/Dreambooth-SDXL-Awi-3/tree/main) them in the Files & versions tab.
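A minimal inference sketch with `diffusers` (this assumes the LoRA safetensors file sits at the repo root, which is how AutoTrain uploads it):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# attach the DreamBooth LoRA weights from this repo
pipe.load_lora_weights("awais-nayyar/Dreambooth-SDXL-Awi-3")
image = pipe("A photo of Awais Nayyar wearing casual clothes, standing and smiling.").images[0]
image.save("awais.png")
```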
|
wwe180/Llama3-12B-Chinese-lingyang-Q6_K-GGUF
|
wwe180
| 2024-05-17T09:40:53Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:hfl/llama-3-chinese-8b-instruct-v2",
"base_model:quantized:hfl/llama-3-chinese-8b-instruct-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T09:40:28Z |
---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- hfl/llama-3-chinese-8b-instruct-v2
---
# wwe180/Llama3-13B-Chinese-lingyang-Q6_K-GGUF
This model was converted to GGUF format from [`wwe180/Llama3-13B-Chinese-lingyang`](https://huggingface.co/wwe180/Llama3-13B-Chinese-lingyang) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/Llama3-13B-Chinese-lingyang) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo wwe180/Llama3-13B-Chinese-lingyang-Q6_K-GGUF --model llama3-13b-chinese-lingyang.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo wwe180/Llama3-13B-Chinese-lingyang-Q6_K-GGUF --model llama3-13b-chinese-lingyang.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama3-13b-chinese-lingyang.Q6_K.gguf -n 128
```
|
Audino/my-awesomev3-modelv2-small
|
Audino
| 2024-05-17T09:39:49Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T09:39:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
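Pending details from the authors, a minimal sketch for this T5-style checkpoint using the `text2text-generation` pipeline (any required task prefix is undocumented, so the input below is only a placeholder):
```python
from transformers import pipeline

# seq2seq checkpoint; the expected input format is not documented
generator = pipeline("text2text-generation", model="Audino/my-awesomev3-modelv2-small")
print(generator("Hello, how are you?"))
```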
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wahid028/llama3-finetuned-legalQA
|
wahid028
| 2024-05-17T09:37:47Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T09:13:52Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
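Until the authors fill this in, a minimal sketch that assumes the checkpoint keeps the Llama 3 Instruct chat template:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="wahid028/llama3-finetuned-legalQA",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# chat-formatted input assumes the tokenizer ships a chat template
messages = [{"role": "user", "content": "What is the statute of limitations for breach of contract?"}]
print(pipe(messages, max_new_tokens=256)[0]["generated_text"])
```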
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yadvender12/AV_MAE_2class_DF1M
|
yadvender12
| 2024-05-17T09:32:15Z | 66 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-05-17T06:11:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: AV_MAE_2class_DF1M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AV_MAE_2class_DF1M
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3200
- Accuracy: 0.9335
- F1: 0.9334
## Model description
More information needed
## Intended uses & limitations
More information needed
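Pending details from the authors, a minimal inference sketch (this assumes the repo ships a VideoMAE image-processor config and the usual 16-frame input):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "yadvender12/AV_MAE_2class_DF1M"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = list(np.random.randn(16, 3, 224, 224))  # placeholder clip: 16 CHW frames
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```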
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 28240
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2874 | 0.16 | 4515 | 0.3124 | 0.9181 | 0.9181 |
| 0.4418 | 1.16 | 9030 | 0.3153 | 0.9261 | 0.9260 |
| 0.453 | 2.16 | 13546 | 0.2692 | 0.9305 | 0.9305 |
| 0.2247 | 3.16 | 18062 | 0.2746 | 0.9308 | 0.9307 |
| 0.0751 | 4.16 | 22579 | 0.2604 | 0.9382 | 0.9380 |
| 0.0827 | 5.16 | 27095 | 0.2913 | 0.9366 | 0.9365 |
| 0.1658 | 6.04 | 28240 | 0.3200 | 0.9335 | 0.9334 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.1
|
moetezsa/mistral_finetune_wikitable
|
moetezsa
| 2024-05-17T09:27:21Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T09:11:32Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: mistral_finetune_wikitable
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_finetune_wikitable
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
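A minimal sketch for attaching the adapter with PEFT (the base model is taken from this card's metadata):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-bnb-4bit"  # base model listed in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# attach the LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "moetezsa/mistral_finetune_wikitable")
```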
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
amazingT/q-FrozenLake-v1-8x8-noSlippery
|
amazingT
| 2024-05-17T09:25:29Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-17T09:25:27Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# load_from_hub (see the sketch below) downloads the pickled Q-table from the Hub
model = load_from_hub(repo_id="amazingT/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
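`load_from_hub` is not shipped in a published package; a minimal sketch, assuming the Q-table was pushed to the Hub as a pickle file (as in the Hugging Face Deep RL course), is:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # download the pickled Q-table dictionary from the Hub and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```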
|
kyrylo-gorbachov/llama-3-function-calling-v0.4
|
kyrylo-gorbachov
| 2024-05-17T09:24:37Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T09:11:50Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kyrylo-gorbachov
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lizzrn/ppo-LunarLander-v2
|
Lizzrn
| 2024-05-17T09:23:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-17T09:23:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.72 +/- 19.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption; check the Files tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="Lizzrn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
utrobinmv/t5_translate_en_ru_zh_small_1024
|
utrobinmv
| 2024-05-17T09:22:08Z | 438,239 | 31 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"translation",
"ru",
"zh",
"en",
"dataset:ccmatrix",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-07T14:33:20Z |
---
language:
- ru
- zh
- en
tags:
- translation
license: apache-2.0
datasets:
- ccmatrix
metrics:
- sacrebleu
widget:
- example_title: translate zh-ru
text: >
translate to ru: 开发的目的是为用户提供个人同步翻译。
- example_title: translate ru-en
text: >
translate to en: Цель разработки — предоставить пользователям личного синхронного переводчика.
- example_title: translate en-ru
text: >
translate to ru: The purpose of the development is to provide users with a personal synchronized interpreter.
- example_title: translate en-zh
text: >
translate to zh: The purpose of the development is to provide users with a personal synchronized interpreter.
- example_title: translate zh-en
text: >
translate to en: 开发的目的是为用户提供个人同步解释器。
- example_title: translate ru-zh
text: >
translate to zh: Цель разработки — предоставить пользователям личного синхронного переводчика.
---
# T5 English, Russian and Chinese multilingual machine translation
This model is a conventional T5 transformer run in multitask mode, fine-tuned for machine translation between the pairs ru-zh, zh-ru, en-zh, zh-en, en-ru and ru-en.
The model can translate directly between any pair of Russian, Chinese and English. The target language is selected with the prefix 'translate to <lang>:'; the source language does not need to be specified, and the source text may even be multilingual.
Example: translating Russian to Chinese
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
device = 'cuda'  # or 'cpu' to translate on CPU
model_name = 'utrobinmv/t5_translate_en_ru_zh_small_1024'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.to(device)
tokenizer = T5Tokenizer.from_pretrained(model_name)
prefix = 'translate to zh: '
src_text = prefix + "Цель разработки — предоставить пользователям личного синхронного переводчика."
# translate Russian to Chinese
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device))
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#开发的目的就是向用户提供个性化的同步翻译。
```
And an example translating Chinese to Russian:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
device = 'cuda'  # or 'cpu' to translate on CPU
model_name = 'utrobinmv/t5_translate_en_ru_zh_small_1024'
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.to(device)
tokenizer = T5Tokenizer.from_pretrained(model_name)
prefix = 'translate to ru: '
src_text = prefix + "开发的目的就是向用户提供个性化的同步翻译。"
# translate Chinese to Russian
input_ids = tokenizer(src_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids.to(device))
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(result)
#Цель разработки - предоставить персонализированный синхронный перевод для пользователей.
```
## Languages covered
Russian (ru_RU), Chinese (zh_CN), English (en_US)
|
RichardErkhov/Locutusque_-_Hercules-2.5-Mistral-7B-8bits
|
RichardErkhov
| 2024-05-17T09:22:05Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T09:15:57Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hercules-2.5-Mistral-7B - bnb 8bits
- Model creator: https://huggingface.co/Locutusque/
- Original model: https://huggingface.co/Locutusque/Hercules-2.5-Mistral-7B/
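A minimal loading sketch for this 8-bit checkpoint (the weights are already serialized with bitsandbytes, so `bitsandbytes` must be installed and a CUDA GPU is assumed; no extra quantization config should be needed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/Locutusque_-_Hercules-2.5-Mistral-7B-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# pre-quantized 8-bit weights load directly; device_map places them on the GPU
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```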
Original model description:
---
license: apache-2.0
library_name: transformers
tags:
- not-for-all-audiences
- chemistry
- math
- code
- physics
base_model: mistralai/Mistral-7B-v0.1
datasets:
- Locutusque/hercules-v2.0
- Locutusque/hercules-v2.5
model-index:
- name: Hercules-2.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.44
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B
name: Open LLM Leaderboard
---
# Model Card: Hercules-2.5-Mistral-7B

## Model Description
Hercules-2.5-Mistral-7B is a fine-tuned language model derived from mistralai/Mistral-7B-v0.1. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. The dataset used for fine-tuning, also named Hercules-v2.5, expands upon the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets. This fine-tuning has equipped the model with enhanced abilities in:
- Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
## Intended Uses & Potential Bias
Hercules-2.5-Mistral-7B is well-suited to the following applications:
- Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines.
- Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping.
**Important Note: Although Hercules-v2.5 is carefully constructed, it's important to be aware that the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.**
## Limitations and Risks
- Toxicity: The dataset may still contain toxic or harmful examples despite cleaning efforts.
- Hallucinations and Factual Errors: Like other language models, Hercules-2.5-Mistral-7B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.
## Evaluation Metrics
To provide suitable benchmarks for Hercules-2.5-Mistral-7B, consider using a combination of the following metrics:
- Instruction Following: Task-specific evaluation datasets for instruction following in relevant domains (e.g., datasets specifically focused on math problems, code generation, etc.).
- Function Calling: Evaluate the model's accuracy in interpreting and executing function calls with varying inputs and outputs.
- Conversational Quality: Assess the model's ability to maintain coherence, naturalness, and informativeness across conversational turns.
## Training Data
Hercules-2.5-Mistral-7B is fine-tuned from the following sources:
- cognitivecomputations/dolphin (first 300k examples)
- Evol Instruct 70K && 140K
- teknium/GPT4-LLM-Cleaned
- jondurbin/airoboros-3.2
- AlekseyKorshuk/camel-chatml
- CollectiveCognition/chats-data-2023-09-22
- Nebulous/lmsys-chat-1m-smortmodelsonly
- glaiveai/glaive-code-assistant-v2
- glaiveai/glaive-code-assistant
- glaiveai/glaive-function-calling-v2
- garage-bAInd/Open-Platypus
- meta-math/MetaMathQA
- teknium/GPTeacher-General-Instruct
- GPTeacher roleplay datasets
- BI55/MedText
- pubmed_qa labeled subset
- M4-ai/LDJnr_combined_inout_format
- Unnatural Instructions
- CollectiveCognition/chats-data-2023-09-27
- CollectiveCognition/chats-data-2023-10-16
## Training Procedure
- This model was trained on 8 Kaggle TPUs, using torch_xla SPMD for high MXU efficiency. There was no expense on my end (meaning you can reproduce this too!)
- A learning rate of 2e-06 with the Adam optimizer. A linear scheduler was used, with an end factor of 0.3. A low learning rate was used to prevent exploding gradients.
- No mixed precision was used, with the default dtype being bfloat16.
- Trained on 200,000 examples of Hercules-v2.0 and 100,000 examples of Hercules-v2.5
- No model parameters were frozen.
- This model was trained on OpenAI's ChatML prompt format. Because this model has function calling capabilities, the prompt format is slightly different; here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>```
This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
# Updates
- **🔥 Earned a score of nearly 64 on Open LLM Leaderboard, outperforming most merge-free SFT mistral fine-tunes 🔥**
# Quants
exl2 by @bartowski https://huggingface.co/bartowski/Hercules-2.5-Mistral-7B-exl2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hercules-2.5-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.59|
|AI2 Reasoning Challenge (25-Shot)|62.03|
|HellaSwag (10-Shot) |83.79|
|MMLU (5-Shot) |63.49|
|TruthfulQA (0-shot) |43.44|
|Winogrande (5-shot) |79.72|
|GSM8k (5-shot) |49.05|
|
KimByeongSu/gpt-neo-125m-cs-finetuning
|
KimByeongSu
| 2024-05-17T09:22:02Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T08:52:23Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125m-cs-finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-cs-finetuning
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1602
## Model description
More information needed
## Intended uses & limitations
More information needed
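Pending details from the authors, a minimal generation sketch for this causal LM:
```python
from transformers import pipeline

# GPT-Neo 125M fine-tuned checkpoint
generator = pipeline("text-generation", model="KimByeongSu/gpt-neo-125m-cs-finetuning")
print(generator("In computer science,", max_new_tokens=40)[0]["generated_text"])
```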
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2942 | 1.0 | 1331 | 3.2314 |
| 3.1075 | 2.0 | 2662 | 3.1742 |
| 3.0252 | 3.0 | 3993 | 3.1602 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
|
pakawadeep/typhoon-7b-lora-ctfl-th-gec-100st-augmented
|
pakawadeep
| 2024-05-17T09:20:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T09:17:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
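In the absence of official instructions, a minimal PEFT loading sketch; the base model is only inferred from the repo name and should be verified against `adapter_config.json`:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "scb10x/typhoon-7b"  # assumption based on the repo name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "pakawadeep/typhoon-7b-lora-ctfl-th-gec-100st-augmented")
```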
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salts-models/absa-salts
|
salts-models
| 2024-05-17T09:20:16Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T09:19:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
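In the absence of official instructions, a minimal sketch; the repo name suggests aspect-based sentiment analysis (ABSA), so the (sentence, aspect) pair input below is an assumption:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="salts-models/absa-salts")
# text/text_pair input assumes the model was trained on (sentence, aspect) pairs
print(clf({"text": "The battery life is great but the screen is dim.", "text_pair": "battery life"}))
```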
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blackhole33/uzbek-speaker-verification-v6
|
blackhole33
| 2024-05-17T09:19:33Z | 2 | 0 |
nemo
|
[
"nemo",
"pytorch",
"NeMo",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-17T09:19:23Z |
---
license: cc-by-4.0
library_name: nemo
tags:
- pytorch
- NeMo
---
# Uzbek-speaker-verification-v6
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
**Put a short model description here.**
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
**NOTE**: Please update the model class below to match the class of the model being uploaded.
```python
from nemo.core import ModelPT
model = ModelPT.from_pretrained("ai-nightcoder/uzbek-speaker-verification-v6")
```
### NOTE
Add some information about how to use the model here. An example is provided for ASR inference below.
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="ai-nightcoder/uzbek-speaker-verification-v6" audio_dir=""
```
### Input
**Add some information about what are the inputs to this model**
### Output
**Add some information about what are the outputs of this model**
## Model Architecture
**Add information here discussing architectural details of the model or any comments to users about the model.**
## Training
**Add information here about how the model was trained. It should be as detailed as possible, potentially including the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.**
### NOTE
An example is provided below for ASR
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
**Try to provide as detailed a list of datasets as possible. If possible, provide links to the datasets on HF by adding it to the manifest section at the top of the README (marked by ---).**
### NOTE
An example for the manifest section is provided below for ASR datasets
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
The corresponding text in this section for those datasets is stated below -
The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
The training dataset consists of a private subset with 40K hours of English speech plus 24K hours from the following public datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
## Performance
**Add information here about the performance of the model. Discuss what metric is used to evaluate the model, and if there are external links explaining the custom metric, please link to them.
### NOTE
An example of an ASR metrics list that can be added to the top of the README is provided below
model-index:
- name: PUT_MODEL_NAME
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AMI (Meetings test)
type: edinburghcstr/ami
config: ihm
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.10
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Earnings-22
type: revdotcom/earnings22
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 14.11
Provide any caveats about the results presented at the top of the discussion so that nuance is not lost.
It should ideally be in tabular format (you can use the following website to make your tables in markdown format - https://www.tablesgenerator.com/markdown_tables)**
## Limitations
**Discuss any practical limitations to the model when being used in real world cases. They can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).**
### Note
An example is provided below
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
## References
**Provide appropriate references in the markdown link format below. Please order them numerically.**
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
euiyulsong/pretraining_arceasy_syn_cont_6k
|
euiyulsong
| 2024-05-17T09:18:35Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T09:13:45Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/krevas_-_SOLAR-10.7B-4bits
|
RichardErkhov
| 2024-05-17T09:17:15Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T09:07:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SOLAR-10.7B - bnb 4bits
- Model creator: https://huggingface.co/krevas/
- Original model: https://huggingface.co/krevas/SOLAR-10.7B/
Original model description:
---
license: apache-2.0
language:
- ko
---
# Model Card for SOLAR-10.7B
## Hardware and Software
* **Hardware**: We utilized a single node with 4× A100 GPUs to train our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
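As a minimal usage sketch (not part of the original card), a bitsandbytes 4-bit serialized checkpoint can typically be loaded directly with `transformers`, since the quantization config is stored with the weights; the repo id below is assumed to match this repository, and the prompt is illustrative only:
```python
# Hedged sketch: loading a bitsandbytes 4-bit serialized checkpoint.
# The saved quantization config is picked up automatically from the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/krevas_-_SOLAR-10.7B-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("안녕하세요, 오늘 기분이 어떠세요?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```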
|
Miamiahong/jpmodel_remote-work_distilbert-base-uncased_0517
|
Miamiahong
| 2024-05-17T09:16:10Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T08:32:24Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: jpmodel_remote-work_distilbert-base-uncased_0517
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jpmodel_remote-work_distilbert-base-uncased_0517
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4293
- Accuracy: 0.9477
- F1: 0.9317
- Precision: 0.9212
- Recall: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
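As a minimal inference sketch (not part of the generated card), the model can be used through the standard transformers text-classification pipeline; the example sentence is illustrative only:
```python
# Hedged sketch: classification with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Miamiahong/jpmodel_remote-work_distilbert-base-uncased_0517",
)
print(classifier("Remote work has noticeably improved my productivity."))
```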
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 449  | 0.2487          | 0.9532   | 0.9304 | 0.9086    | 0.9532 |
| 0.2176        | 2.0   | 898  | 0.2366          | 0.9532   | 0.9304 | 0.9086    | 0.9532 |
| 0.1796        | 3.0   | 1347 | 0.2228          | 0.9527   | 0.9321 | 0.9183    | 0.9527 |
| 0.1469        | 4.0   | 1796 | 0.2856          | 0.9438   | 0.9282 | 0.9135    | 0.9438 |
| 0.1045        | 5.0   | 2245 | 0.3386          | 0.9438   | 0.9280 | 0.9133    | 0.9438 |
| 0.0742        | 6.0   | 2694 | 0.3708          | 0.9438   | 0.9288 | 0.9156    | 0.9438 |
| 0.0401        | 7.0   | 3143 | 0.3897          | 0.9438   | 0.9292 | 0.9199    | 0.9438 |
| 0.0263        | 8.0   | 3592 | 0.4163          | 0.9471   | 0.9323 | 0.9235    | 0.9471 |
| 0.0149        | 9.0   | 4041 | 0.4249          | 0.9471   | 0.9314 | 0.9212    | 0.9471 |
| 0.0149        | 10.0  | 4490 | 0.4293          | 0.9477   | 0.9317 | 0.9212    | 0.9477 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
kishorea/llama3_finetuned_1705
|
kishorea
| 2024-05-17T09:07:25Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-17T09:03:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imljls/gpt_review_senti
|
imljls
| 2024-05-17T09:07:16Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-17T09:06:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chansung/mental_health_counseling_v0.1
|
chansung
| 2024-05-17T09:07:01Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:chansung/mental_health_counseling_conversations",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-17T07:13:06Z |
---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
datasets:
- chansung/mental_health_counseling_conversations
model-index:
- name: mental_health_counseling_v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_health_counseling_v0.1
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the chansung/mental_health_counseling_conversations dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.357 | 1.0 | 120 | 2.2527 |
| 1.8805 | 2.0 | 240 | 1.8901 |
| 1.5884 | 3.0 | 360 | 1.6506 |
| 0.976 | 4.0 | 480 | 1.4923 |
| 0.494 | 5.0 | 600 | 1.4527 |
| 0.6722 | 6.0 | 720 | 1.5256 |
| 0.4155 | 7.0 | 840 | 1.6937 |
| 0.3646 | 8.0 | 960 | 1.9389 |
| 0.1977 | 9.0 | 1080 | 2.0741 |
| 0.072 | 10.0 | 1200 | 2.0755 |
### Framework versions
- PEFT 0.11.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
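A minimal inference sketch (not part of the generated card), assuming the standard PEFT adapter-loading flow on top of the google/gemma-7b base model listed above; the prompt is illustrative only:
```python
# Hedged sketch: loading this LoRA adapter onto the gemma-7b base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "chansung/mental_health_counseling_v0.1")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

prompt = "I have been feeling overwhelmed at work lately."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```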
|
AP-HP/dummy-ner
|
AP-HP
| 2024-05-17T09:05:33Z | 126 | 0 |
edsnlp
|
[
"edsnlp",
"safetensors",
"token-classification",
"fr",
"license:bsd-3-clause",
"region:us"
] |
token-classification
| 2024-05-17T01:30:02Z |
---
license: bsd-3-clause
language:
- fr
library_name: edsnlp
pipeline_tag: token-classification
---
# Dummy EDS-NLP NER model
This model was trained on the DEFT 2020 Track 3 dataset, but its main purpose is to
test the integration of EDS-NLP with the Hugging Face Hub.
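A minimal loading sketch, assuming EDS-NLP's Hub loading API (`edsnlp.load`) accepts this repo id; the example sentence is illustrative only:
```python
# Hedged sketch: loading the pipeline from the Hub with EDS-NLP.
import edsnlp

nlp = edsnlp.load("AP-HP/dummy-ner")
doc = nlp("Le patient présente une fièvre depuis trois jours.")
print([(ent.text, ent.label_) for ent in doc.ents])
```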
|
RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf
|
RichardErkhov
| 2024-05-17T09:01:12Z | 47 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-16T22:14:04Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
autotrain-mixtral7x8b-math - GGUF
- Model creator: https://huggingface.co/abhishek/
- Original model: https://huggingface.co/abhishek/autotrain-mixtral7x8b-math/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [autotrain-mixtral7x8b-math.Q2_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q2_K.gguf) | Q2_K | 16.12GB |
| [autotrain-mixtral7x8b-math.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [autotrain-mixtral7x8b-math.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [autotrain-mixtral7x8b-math.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [autotrain-mixtral7x8b-math.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [autotrain-mixtral7x8b-math.Q3_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K.gguf) | Q3_K | 21.0GB |
| [autotrain-mixtral7x8b-math.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [autotrain-mixtral7x8b-math.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [autotrain-mixtral7x8b-math.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [autotrain-mixtral7x8b-math.Q4_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_0.gguf) | Q4_0 | 24.63GB |
| [autotrain-mixtral7x8b-math.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [autotrain-mixtral7x8b-math.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [autotrain-mixtral7x8b-math.Q4_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K.gguf) | Q4_K | 26.49GB |
| [autotrain-mixtral7x8b-math.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [autotrain-mixtral7x8b-math.Q4_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q4_1.gguf) | Q4_1 | 27.32GB |
| [autotrain-mixtral7x8b-math.Q5_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_0.gguf) | Q5_0 | 30.02GB |
| [autotrain-mixtral7x8b-math.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [autotrain-mixtral7x8b-math.Q5_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K.gguf) | Q5_K | 30.95GB |
| [autotrain-mixtral7x8b-math.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [autotrain-mixtral7x8b-math.Q5_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q5_1.gguf) | Q5_1 | 32.71GB |
| [autotrain-mixtral7x8b-math.Q6_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/blob/main/autotrain-mixtral7x8b-math.Q6_K.gguf) | Q6_K | 35.74GB |
| [autotrain-mixtral7x8b-math.Q8_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_autotrain-mixtral7x8b-math-gguf/tree/main/) | Q8_0 | 46.22GB |
Original model description:
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
xzybit/multi_summarize
|
xzybit
| 2024-05-17T09:00:45Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T07:04:23Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: multi_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi_summarize
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9849
- Rouge1: 0.0883
- Rouge2: 0.0266
- Rougel: 0.0685
- Rougelsum: 0.0685
- Gen Len: 18.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
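A minimal usage sketch (not part of the generated card), assuming the standard transformers summarization pipeline with this repo id:
```python
# Hedged sketch: summarization with the fine-tuned t5-small checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="xzybit/multi_summarize")
text = (
    "The committee met on Monday to discuss the new budget proposal, "
    "which allocates additional funding to public transportation and parks."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```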
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 496 | 3.0124 | 0.0865 | 0.0259 | 0.0672 | 0.0672 | 18.8085 |
| 3.391 | 2.0 | 992 | 2.9849 | 0.0883 | 0.0266 | 0.0685 | 0.0685 | 18.9808 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
roxana92/en-ar-translator
|
roxana92
| 2024-05-17T08:55:06Z | 116 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-14T14:45:20Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en-ar-translator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-ar-translator
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Bleu: 0.0
- Gen Len: 18.2009
## Model description
More information needed
## Intended uses & limitations
More information needed
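A minimal usage sketch (not part of the generated card). Note that the `"translate English to Arabic:"` task prefix is an assumption about the fine-tuning format, not something stated in this card:
```python
# Hedged sketch: translation with the fine-tuned t5-small checkpoint.
from transformers import pipeline

translator = pipeline("text2text-generation", model="roxana92/en-ar-translator")
print(translator("translate English to Arabic: How are you today?")[0]["generated_text"])
```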
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.0147 | 1.0 | 3220 | 0.0080 | 0.0 | 18.0979 |
| 0.0117 | 2.0 | 6440 | 0.0068 | 0.0 | 18.2009 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.15.2
|
RichardErkhov/instructlab_-_merlinite-7b-lab-gguf
|
RichardErkhov
| 2024-05-17T08:54:27Z | 165 | 0 | null |
[
"gguf",
"arxiv:2403.01081",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T07:24:47Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
merlinite-7b-lab - GGUF
- Model creator: https://huggingface.co/instructlab/
- Original model: https://huggingface.co/instructlab/merlinite-7b-lab/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [merlinite-7b-lab.Q2_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q2_K.gguf) | Q2_K | 2.53GB |
| [merlinite-7b-lab.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [merlinite-7b-lab.IQ3_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [merlinite-7b-lab.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [merlinite-7b-lab.IQ3_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [merlinite-7b-lab.Q3_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q3_K.gguf) | Q3_K | 3.28GB |
| [merlinite-7b-lab.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [merlinite-7b-lab.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [merlinite-7b-lab.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [merlinite-7b-lab.Q4_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q4_0.gguf) | Q4_0 | 3.83GB |
| [merlinite-7b-lab.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [merlinite-7b-lab.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [merlinite-7b-lab.Q4_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q4_K.gguf) | Q4_K | 4.07GB |
| [merlinite-7b-lab.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [merlinite-7b-lab.Q4_1.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q4_1.gguf) | Q4_1 | 4.24GB |
| [merlinite-7b-lab.Q5_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q5_0.gguf) | Q5_0 | 4.65GB |
| [merlinite-7b-lab.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [merlinite-7b-lab.Q5_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q5_K.gguf) | Q5_K | 4.78GB |
| [merlinite-7b-lab.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [merlinite-7b-lab.Q5_1.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q5_1.gguf) | Q5_1 | 5.07GB |
| [merlinite-7b-lab.Q6_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q6_K.gguf) | Q6_K | 5.53GB |
| [merlinite-7b-lab.Q8_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_merlinite-7b-lab-gguf/blob/main/merlinite-7b-lab.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
pipeline_tag: text-generation
tags:
- merlinite
- mistral
- ibm
- lab
- labrador
- labradorite
license: apache-2.0
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Merlinite 7b 🔥 [Paper](https://arxiv.org/abs/2403.01081)
### Overview

### Performance
| Model | Alignment | Base | Teacher | MTBench (Avg) * | MMLU(5-shot) | ARC-C(25-shot) | HellaSwag(10-shot) | Winogrande(5-shot) | GSM8K(5-shot- strict) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | RLHF | Llama-2-13b | Human Annotators | 6.65 | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
| [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | Progressive Training | Llama-2-13b | GPT-4 | 6.15 | 60.37 * | 59.73 | 79.86 | 78.22 | 48.22 |
| [WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
| [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |
| [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | SFT | Mistral-7B-v0.1 | - | 6.84 | 60.37 | 63.65 | 84.76 | 76.80 | 41.85 |
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | SFT/DPO | Mistral-7B-v0.1 | GPT-4 | 7.34 | 61.07 | 63.74 | 84.19 | 78.06 | 34.04 |
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | SFT | Mistral-7B-v0.1 | - | 7.6** | 60.78 | 63.14 | 84.88 | 77.19 | 40.03 |
| Merlinite-7b | Large-scale Alignment for chatBots (LAB) | Mistral-7B-v0.1 | Mixtral-8x7B-Instruct | 7.66 | 64.88 | 63.99 | 84.37 | 78.24 | 44.58 |
[*] Numbers for models other than Merlinite-7b and [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) (ours) are taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
[**] Numbers taken from [MistralAI Release Blog](https://mistral.ai/news/la-plateforme/)
### Method
LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic data-based alignment tuning method for LLMs from IBM Research. Merlinite-7b is a Mistral-7b-derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.
LAB consists of three key components:
1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased training with replay buffers

The LAB approach allows for adding new knowledge and skills, in an incremental fashion, to an already pre-trained model without suffering from catastrophic forgetting.
The taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data. It allows the data curator or the model designer to easily specify a diverse set of knowledge domains and skills that they would like to include in their LLM. At a high level, these can be categorized into three bins - knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

During the synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: For each knowledge/skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distributions defined by the local examples of each node, while the diversity of the taxonomy itself ensures the generated data covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to be used as the teacher model for generation while remaining very competitive with models such as ORCA-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.

For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy.
Additionally, to ensure the data is high-quality and safe, we employ steps to check the questions and answers to ensure that they are grounded and safe. This is done using the same teacher model that generated the data.
Our training consists of two major phases: knowledge tuning and skills tuning.
Knowledge tuning proceeds in two steps: the first learns simple knowledge (short samples) and the second learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, where a replay buffer of data from the knowledge phase is used.
Importantly, we use a set of hyper-parameters for training that are very different from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.

## Model description
- **Language(s):** Primarily English
- **License:** Apache 2.0
- **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## Prompt Template
```python
sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'
```
We advise utilizing the system prompt employed during the model's training for optimal inference performance, as there could be performance variations based on the provided instructions.
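A minimal generation sketch using this template with the original (non-GGUF) checkpoint linked above; the use of `transformers` here and the generation settings are our assumptions, not part of the original card:
```python
# Hedged sketch: applying the Merlinite prompt template with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instructlab/merlinite-7b-lab"  # original (non-GGUF) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

sys_prompt = (
    "You are an AI language model developed by IBM Research. You are a cautious "
    "assistant. You carefully follow instructions. You are helpful and harmless "
    "and you follow ethical guidelines and promote positive behavior."
)
user_input = "Explain what a replay buffer is in one sentence."
prompt = f"<|system|>\n{sys_prompt}\n<|user|>\n{user_input}\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```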
## Bias, Risks, and Limitations
Merlinite-7b has not been aligned to human preferences, so the model might produce problematic outputs. The model might also maintain the limitations and constraints that arise from the base model.
The model undergoes training on synthetic data, leading to the potential inheritance of both advantages and limitations from the underlying teacher models and data generation methods. The incorporation of safety measures during Merlinite-7b's training process is considered beneficial. However, a nuanced understanding of the associated risks requires detailed studies for more accurate quantification.
In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.
|
mradermacher/MiquSuperdark-70B-v2-GGUF
|
mradermacher
| 2024-05-17T08:45:15Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ddh0/MiquSuperdark-70B-v2",
"base_model:quantized:ddh0/MiquSuperdark-70B-v2",
"license:unknown",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T00:11:16Z |
---
base_model: ddh0/MiquSuperdark-70B-v2
language:
- en
library_name: transformers
license: unknown
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ddh0/MiquSuperdark-70B-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
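As a concrete sketch of the multi-part case, the split files are intended to be concatenated byte-wise into one file before loading (see the README linked above); the filenames here are the Q6_K parts from the table in the next section:
```python
# Hedged sketch: joining split GGUF parts into a single file.
# Filenames taken from the Q6_K row of the quant table below.
parts = [
    "MiquSuperdark-70B-v2.Q6_K.gguf.part1of2",
    "MiquSuperdark-70B-v2.Q6_K.gguf.part2of2",
]
with open("MiquSuperdark-70B-v2.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```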
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquSuperdark-70B-v2-GGUF/resolve/main/MiquSuperdark-70B-v2.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
iMahdiGhazavi/nlp-question-answering-hw
|
iMahdiGhazavi
| 2024-05-17T08:43:17Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"question-answering",
"QA",
"QuestionAnswering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-05-17T07:25:02Z |
---
language:
- en
library_name: transformers
tags:
- QA
- QuestionAnswering
---
I trained this model on the SQuAD dataset for the question answering NLP task as homework for my NLP course during the spring 2024 semester.
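A minimal usage sketch (not part of the original description), assuming the standard transformers question-answering pipeline:
```python
# Hedged sketch: extractive QA with this SQuAD-fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="iMahdiGhazavi/nlp-question-answering-hw")
result = qa(
    question="Which dataset was the model trained on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```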
|
mradermacher/MiquSuperdark-70B-v1-GGUF
|
mradermacher
| 2024-05-17T08:43:03Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ddh0/MiquSuperdark-70B-v1",
"base_model:quantized:ddh0/MiquSuperdark-70B-v1",
"license:unknown",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T04:33:11Z |
---
base_model: ddh0/MiquSuperdark-70B-v1
language:
- en
library_name: transformers
license: unknown
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ddh0/MiquSuperdark-70B-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquSuperdark-70B-v1-GGUF/resolve/main/MiquSuperdark-70B-v1.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Audino/my-awesomev3-modelv2-10epochs-small
|
Audino
| 2024-05-17T08:40:29Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-17T08:40:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
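The card itself provides no snippet, but the repository tags ("t5", "text2text-generation") suggest the standard transformers seq2seq API. A minimal sketch under that assumption — the prompt and generation settings are illustrative, not documented behavior of this checkpoint:

```python
# Hedged sketch: loading this checkpoint via the standard seq2seq API,
# inferred from the repo tags ("t5", "text2text-generation").
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Audino/my-awesomev3-modelv2-10epochs-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt; the model's actual training task is not documented.
inputs = tokenizer("summarize: The meeting covered Q2 goals.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```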
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
udayakumar-cyb/meeting_model_phi3_128k
|
udayakumar-cyb
| 2024-05-17T08:39:09Z | 156 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-17T08:35:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
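No snippet is provided, but the repository tags ("phi3", "text-generation", "custom_code") suggest a causal LM that likely needs `trust_remote_code=True`. A hedged sketch under those assumptions — the prompt and settings are illustrative only:

```python
# Hedged sketch: the "custom_code" tag suggests trust_remote_code=True is
# required; this is an inference from the tags, not an instruction from the card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "udayakumar-cyb/meeting_model_phi3_128k"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Illustrative prompt; the model name hints at meeting-related text generation.
inputs = tokenizer("Summarize the following meeting notes:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```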
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|