Datasets:

modelId (string, lengths 5–137) | author (string, lengths 2–42) | last_modified (date, 2020-02-15 11:33:14 – 2025-03-26 12:27:25) | downloads (int64, 0–223M) | likes (int64, 0–10.1k) | library_name (string, 397 classes) | tags (sequence, lengths 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-03-26 12:27:02) | card (string, lengths 11–1.01M)
---|---|---|---|---|---|---|---|---|---
Akhilies/fatima-fellowship-23 | Akhilies | "2023-03-21T07:06:18" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-03-21T07:02:52" | ---
license: openrail
---
To run the model you need Keras: load the model into a notebook and evaluate its performance on a prepared test set, as in the sketch below.
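A minimal runnable sketch of those two steps (the `.h5` path is the card's placeholder, and `img`/`label` stand for a prepared test batch matching the model's input and output shapes):

```python
import keras

# Load the saved fake-news-detection model (the path is a placeholder)
model = keras.models.load_model('path/fake_news_detection.h5')

# Evaluate on a prepared test batch; `img` and `label` are assumed to be
# arrays shaped to the model's expected inputs and targets
results = model.evaluate(img, label)
print(results)
```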
|
Tippawan/pr-corrected-v4_100 | Tippawan | "2024-11-07T06:46:48" | 117 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-11-07T06:46:26" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
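A hedged starter sketch (not part of the auto-generated card): the repository id and the `fill-mask` pipeline tag come from this entry's metadata; CamemBERT-style models use `<mask>` as the mask token, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Repo id taken from this entry's metadata
fill = pipeline("fill-mask", model="Tippawan/pr-corrected-v4_100")
for pred in fill("Paris is the <mask> of France."):
    print(pred["token_str"], round(pred["score"], 4))
```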
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
funzin-jskim/BGE-M3-finetuned-test_similarities | funzin-jskim | "2025-03-05T06:50:27" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-05T06:26:14" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("funzin-jskim/BGE-M3-finetuned-test_similarities")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.15
- Sentence Transformers: 3.2.0
- Transformers: 4.45.2
- PyTorch: 2.4.1+cu121
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
silviasapora/gemma-7b-sft-basic-5e-5-05-vsh3p4 | silviasapora | "2025-03-10T15:56:36" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-10T13:26:18" | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-basic-5e-5-05-vsh3p4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/w35o198h)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
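As a hedged illustration of that setup (not the author's exact training script; the dataset split and the hyperparameters shown are assumptions):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Base model and preference dataset are taken from this card's metadata
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

# beta weights ORPO's odds-ratio loss term; 0.1 is the TRL default, not the author's value
args = ORPOConfig(output_dir="gemma-7b-orpo", beta=0.1)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```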
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dolo650/flan-t5-base-qlora-peft | dolo650 | "2023-12-25T03:54:05" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | "2023-12-25T03:46:41" | ---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
This is a flan-t5-base model fine-tuned using QLoRA (PEFT)
on the DialogSum dataset: https://huggingface.co/datasets/knkarthick/dialogsum
## Model Details
### Training Details:
This is a basic fine-tuned model using the training args and params below:
```python
import time
from peft import LoraConfig, TaskType
from transformers import Trainer, TrainingArguments

# `peft_model_4bit` and `tokenized_dataset_cleaned` are defined earlier in the
# author's notebook (the 4-bit quantised model and the tokenised DialogSum splits)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=['q', 'k', 'v', 'o'],
    lora_dropout=0.05,
    bias='none',
    task_type=TaskType.SEQ_2_SEQ_LM  # flan-t5
)

output_dir = f'/kaggle/working/qlora-peft-flant5-base-dialogue-summary-training-{str(int(time.time()))}'

peft_training_args_4bit = TrainingArguments(
    output_dir=output_dir,
    auto_find_batch_size=True,
    learning_rate=1e-3,  # higher learning rate than full fine-tuning
    num_train_epochs=200,
    logging_steps=10,
    max_steps=200
)

peft_trainer_4bit = Trainer(
    model=peft_model_4bit,
    args=peft_training_args_4bit,
    train_dataset=tokenized_dataset_cleaned["train"],
    eval_dataset=tokenized_dataset_cleaned['validation']
)
```
Recorded training loss:

| Step | Training Loss |
|-----:|--------------:|
| 10 | 29.131100 |
| 20 | 4.856900 |
| 30 | 3.241400 |
| 40 | 1.346500 |
| 50 | 0.560900 |
| 60 | 0.344000 |
| 70 | 0.258600 |
| 80 | 0.201600 |
| 90 | 0.202900 |
| 100 | 0.198700 |
| 110 | 0.185000 |
| 120 | 0.177200 |
| 130 | 0.161400 |
| 140 | 0.164200 |
| 150 | 0.164300 |
| 160 | 0.165800 |
| 170 | 0.168700 |
| 180 | 0.155100 |
| 190 | 0.161200 |
| 200 | 0.170300 |
ROUGE scores for 100 test samples (out of 1,500):

| Model | rouge1 | rouge2 | rougeL | rougeLsum |
|---|---|---|---|---|
| ORIGINAL MODEL | 0.2232663790087573 | 0.06084131871447254 | 0.1936115999187245 | 0.19319411133637282 |
| PEFT MODEL | 0.34502805897556865 | 0.11517693222074701 | 0.2800665095598698 | 0.27941257109947587 |
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
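A hedged loading sketch (not from the original card): the adapter repo id is this entry's model id and the base model comes from the card metadata; the prompt is illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Base model per the card metadata; adapter repo id per this entry
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "dolo650/flan-t5-base-qlora-peft")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

prompt = "Summarize the following dialogue:\n#Person1#: Hello, how can I help you today?"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```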
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
SimmonsSongHW/Qwen2.5-14B-Instruct-GGUF-Imatrix | SimmonsSongHW | "2025-03-19T06:17:34" | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-19T04:32:45" | ---
license: apache-2.0
---
|
adammandic87/4b349891-4b45-4978-9b4b-2c27cc328c88 | adammandic87 | "2025-01-16T04:45:42" | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | "2025-01-16T04:44:39" | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4b349891-4b45-4978-9b4b-2c27cc328c88
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 383015878ad193ec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/383015878ad193ec_train_data.json
type:
field_input: answer
field_instruction: question
field_output: gt_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/4b349891-4b45-4978-9b4b-2c27cc328c88
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/383015878ad193ec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc9a117a-c96a-487c-a18a-ae45373095ca
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc9a117a-c96a-487c-a18a-ae45373095ca
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4b349891-4b45-4978-9b4b-2c27cc328c88
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 29.0932 | 0.0001 | 1 | 7.2527 |
| 29.7527 | 0.0004 | 3 | 7.2387 |
| 29.9112 | 0.0009 | 6 | 7.1879 |
| 30.3351 | 0.0013 | 9 | 7.0448 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rajtest/tinyllama-v3 | rajtest | "2024-06-27T17:20:33" | 11 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gguf",
"llama",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T14:34:24" | ---
base_model: unsloth/tinyllama-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- unsloth
- generated_from_trainer
model-index:
- name: tinyllama-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-v3
This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 525
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
hexgrad/Kokoro-82M-v1.1-zh | hexgrad | "2025-03-04T05:39:52" | 720 | 49 | null | [
"text-to-speech",
"arxiv:2306.07691",
"arxiv:2203.02395",
"base_model:hexgrad/Kokoro-82M",
"base_model:finetune:hexgrad/Kokoro-82M",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2025-02-27T02:15:25" | ---
license: apache-2.0
base_model:
- hexgrad/Kokoro-82M
pipeline_tag: text-to-speech
---
🐈 GitHub: https://github.com/hexgrad/kokoro
<audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/resolve/main/samples/HEARME_en.wav" type="audio/wav"></audio>
**Kokoro** is an open-weight series of small but powerful TTS models.
This model is the result of a short training run that added 100 Chinese speakers from a professional dataset. The Chinese data was freely and permissively granted to us by [LongMaoData](https://www.longmaosoft.com/), a professional dataset company. Thank you for making this model possible.
Separately, some crowdsourced synthetic English data also entered the training mix:<sup>[1]</sup>
- 1 hour of Maple, an American female.
- 1 hour of Sol, another American female.
- And 1 hour of Vale, an older British female.
This model is not a strict upgrade over its predecessor since it drops many voices, but it is released early to gather feedback on new voices and tokenization. Aside from the Chinese dataset and the 3 hours of English, the rest of the data was left behind for this training run. The goal is to push the model series forward and ultimately restore some of the voices that were left behind.
Current guidance from the U.S. Copyright Office indicates that synthetic data generally does not qualify for copyright protection. Since this synthetic data is crowdsourced, the model trainer is not bound by any Terms of Service. This Apache licensed model also aligns with OpenAI's stated mission of broadly distributing the benefits of AI. If you would like to help further that mission, consider contributing permissive audio data to the cause.
<sup>[1] LongMaoData had no involvement in the crowdsourced synthetic English data.</sup><br/>
<sup>[2] The following Chinese text is machine-translated.</sup>
> Kokoro 是一系列体积虽小但功能强大的 TTS 模型。
>
> 该模型是经过短期训练的结果,从专业数据集中添加了100名中文使用者。中文数据由专业数据集公司「[龙猫数据](https://www.longmaosoft.com/)」免费且无偿地提供给我们。感谢你们让这个模型成为可能。
>
> 另外,一些众包合成英语数据也进入了训练组合:
> - 1小时的 Maple,美国女性。
> - 1小时的 Sol,另一位美国女性。
> - 和1小时的 Vale,一位年长的英国女性。
>
> 由于该模型删除了许多声音,因此它并不是对其前身的严格升级,但它提前发布以收集有关新声音和标记化的反馈。除了中文数据集和3小时的英语之外,其余数据都留在本次训练中。目标是推动模型系列的发展,并最终恢复一些被遗留的声音。
>
> 美国版权局目前的指导表明,合成数据通常不符合版权保护的资格。由于这些合成数据是众包的,因此模型训练师不受任何服务条款的约束。该 Apache 许可模式也符合 OpenAI 所宣称的广泛传播 AI 优势的使命。如果您愿意帮助进一步完成这一使命,请考虑为此贡献许可的音频数据。
<audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/resolve/main/samples/HEARME_zf_001.wav" type="audio/wav"></audio>
<audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/resolve/main/samples/HEARME_zm_010.wav" type="audio/wav"></audio>
- [Releases](#releases)
- [Usage](#usage)
- [Samples](https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/blob/main/samples) ↗️
- [Model Facts](#model-facts)
- [Acknowledgements](#acknowledgements)
### Releases
| Model | Published | Training Data | Langs & Voices | SHA256 |
| ----- | --------- | ------------- | -------------- | ------ |
| **v1.1-zh** | **2025 Feb 26** | **>100 hours** | **2 & 103** | `b1d8410f` |
| [v1.0](https://huggingface.co/hexgrad/Kokoro-82M) | 2025 Jan 27 | Few hundred hrs | 8 & 54 | `496dba11` |
| [v0.19](https://huggingface.co/hexgrad/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` |
| Training Costs | v0.19 | v1.0 | v1.1-zh | **Total** |
| -------------- | ----- | ---- | ------- | --------- |
| in A100 80GB GPU hours | 500 | 500 | 120 | **1120** |
| average hourly rate | $0.80/h | $1.20/h | $0.90/h | |
| in USD | $400 | $600 | $110 | **$1110** |
### Usage
You can run this cell on [Google Colab](https://colab.research.google.com/).
```py
!pip install -q "kokoro>=0.8.2" "misaki[zh]>=0.8.2" soundfile
!apt-get -qq -y install espeak-ng > /dev/null 2>&1
from IPython.display import display, Audio
!wget https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/resolve/main/samples/make_en.py
!python make_en.py
display(Audio('HEARME_en.wav', rate=24000, autoplay=True))
!wget https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/resolve/main/samples/make_zh.py
!python make_zh.py
display(Audio('HEARME_zf_001.wav', rate=24000, autoplay=False))
```
TODO: Improve usage. Similar to https://hf.co/hexgrad/Kokoro-82M#usage but you should pass `repo_id='hexgrad/Kokoro-82M-v1.1-zh'` when constructing a `KModel` or `KPipeline`. See [`make_en.py`](https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/blob/main/samples/make_en.py) and [`make_zh.py`](https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh/blob/main/samples/make_zh.py).
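A hedged sketch following that guidance (the exact `KPipeline` signature may differ across `kokoro` versions; the voice name `zf_001` is inferred from the sample filenames above):

```python
from kokoro import KPipeline
import soundfile as sf

# lang_code 'z' selects Mandarin Chinese; repo_id per the guidance above
pipeline = KPipeline(lang_code='z', repo_id='hexgrad/Kokoro-82M-v1.1-zh')

for i, (gs, ps, audio) in enumerate(pipeline('你好,世界!', voice='zf_001')):
    sf.write(f'zh_{i}.wav', audio, 24000)  # Kokoro outputs 24 kHz audio
```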
### Model Facts
**Architecture:**
- StyleTTS 2: https://arxiv.org/abs/2306.07691
- ISTFTNet: https://arxiv.org/abs/2203.02395
- Decoder only: no diffusion, no encoder release
- 82 million parameters, same as https://hf.co/hexgrad/Kokoro-82M
**Architected by:** Li et al @ https://github.com/yl4579/StyleTTS2
**Trained by**: `@rzvzn` on Discord
**Languages:** English, Chinese
**Model SHA256 Hash:** `b1d8410fa44dfb5c15471fd6c4225ea6b4e9ac7fa03c98e8bea47a9928476e2b`
### Acknowledgements
TODO: Write acknowledgements. Similar to https://hf.co/hexgrad/Kokoro-82M#acknowledgements
<img src="https://static0.gamerantimages.com/wordpress/wp-content/uploads/2024/08/terminator-zero-41-1.jpg" width="400" alt="kokoro" />
|
JohnJumon/resnet50_jellyfish_classifier | JohnJumon | "2024-02-21T10:48:38" | 30 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-21T09:41:58" | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet50_jellyfish_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet50_jellyfish_classifier
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1954
- Accuracy: 0.9444
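A hedged usage sketch (not part of the auto-generated card): the repo id and the `image-classification` pipeline tag come from this entry's metadata; the image path is a placeholder.

```python
from transformers import pipeline

clf = pipeline("image-classification", model="JohnJumon/resnet50_jellyfish_classifier")
print(clf("jellyfish.jpg"))  # placeholder path or URL to an input image
```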
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 23 | 1.2120 | 0.5611 |
| No log | 2.0 | 46 | 0.6042 | 0.7667 |
| No log | 3.0 | 69 | 0.3322 | 0.8667 |
| No log | 4.0 | 92 | 0.4372 | 0.8722 |
| No log | 5.0 | 115 | 0.2465 | 0.9167 |
| No log | 6.0 | 138 | 0.2132 | 0.9333 |
| No log | 7.0 | 161 | 0.1954 | 0.9444 |
| No log | 8.0 | 184 | 0.1981 | 0.9167 |
| No log | 9.0 | 207 | 0.1531 | 0.9389 |
| No log | 10.0 | 230 | 0.1495 | 0.9389 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
jonomon/gpt3-kor-small_based_on_gpt2_core_ml | jonomon | "2023-12-28T04:55:15" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"coreml",
"gpt2",
"text-generation",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-12T21:35:46" | ---
language: ko
tags:
- text-generation
---
# GPT-3-style small model for Korean (based on GPT-2)
* A 70 GB Korean text dataset and 42,000 lower-cased subwords were used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel
tokenizer_gpt3 = BertTokenizerFast.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
input_ids = tokenizer_gpt3.encode("text to tokenize")[1:] # remove cls token
model_gpt3 = GPT2LMHeadModel.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
``` |
usmanihanif/aesthetic_classifier | usmanihanif | "2025-02-20T23:22:08" | 0 | 0 | null | [
"safetensors",
"dinov2",
"fashion",
"aesthetics",
"classification",
"shopping",
"zero-shot-image-classification",
"en",
"base_model:facebook/dinov2-base",
"base_model:finetune:facebook/dinov2-base",
"license:mit",
"region:us"
] | zero-shot-image-classification | "2025-02-20T23:16:54" | ---
license: mit
language:
- en
base_model:
- facebook/dinov2-base
pipeline_tag: zero-shot-image-classification
tags:
- fashion
- aesthetics
- classification
- shopping
--- |
lesso12/f01480ec-6ce7-4b3c-9303-91367246caed | lesso12 | "2025-02-10T01:42:53" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | "2025-02-10T01:21:00" | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f01480ec-6ce7-4b3c-9303-91367246caed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# f01480ec-6ce7-4b3c-9303-91367246caed
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 2.8770 |
| 0.4067 | 0.0443 | 50 | 0.5001 |
| 0.2624 | 0.0887 | 100 | 0.3697 |
| 0.24 | 0.1330 | 150 | 0.3292 |
| 0.1908 | 0.1774 | 200 | 0.3351 |
| 0.166 | 0.2217 | 250 | 0.2703 |
| 0.1555 | 0.2661 | 300 | 0.2393 |
| 0.1737 | 0.3104 | 350 | 0.1916 |
| 0.1532 | 0.3548 | 400 | 0.1630 |
| 0.1183 | 0.3991 | 450 | 0.1467 |
| 0.1017 | 0.4435 | 500 | 0.1468 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
digiplay/cocotifacute_v1 | digiplay | "2023-07-22T14:10:25" | 24 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-22T21:30:26" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/93191/cocotifacute
Original Author's DEMO image : *(truncated image link)*
Sample image I made :


|
Acreedlmt/Gigi | Acreedlmt | "2023-05-18T14:51:10" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-15T15:49:24" | ---
license: creativeml-openrail-m
---
|
sd-concepts-library/dancing-cactus | sd-concepts-library | "2022-11-17T15:13:24" | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | "2022-11-17T15:13:21" | ---
license: mit
---
### Dancing cactus on Stable Diffusion
This is the `<dancing-cactus>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
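A hedged loading sketch with 🧨 diffusers, as an alternative to the notebooks above (the base checkpoint choice is an assumption; any Stable Diffusion 1.x model should work):

```python
import torch
from diffusers import StableDiffusionPipeline

# Stable Diffusion v1-5 is assumed here as the base checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned <dancing-cactus> embedding from this concept repository
pipe.load_textual_inversion("sd-concepts-library/dancing-cactus")

image = pipe("a photo of a <dancing-cactus> on a desk").images[0]
image.save("dancing_cactus.png")
```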
Here is the new concept you will be able to use as an `object`:




|
mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF | mradermacher | "2025-02-08T15:36:14" | 1,008 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"unsloth",
"uncensored",
"llama-3.2",
"llama.cpp",
"inference",
"en",
"dataset:mlabonne/FineTome-100k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:PawanKrd/math-gpt-4o-200k",
"dataset:V3N0M/Jenna-50K-Alpaca-Uncensored",
"base_model:carsenk/llama3.2_3b_122824_uncensored",
"base_model:quantized:carsenk/llama3.2_3b_122824_uncensored",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-20T01:36:49" | ---
base_model: carsenk/llama3.2_3b_122824_uncensored
datasets:
- mlabonne/FineTome-100k
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- PawanKrd/math-gpt-4o-200k
- V3N0M/Jenna-50K-Alpaca-Uncensored
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- llama
- unsloth
- uncensored
- llama-3.2
- llama.cpp
- gguf
- inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/carsenk/llama3.2_3b_122824_uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
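As a hedged complement (not part of this card), GGUF files can also be loaded from Python via `llama-cpp-python`; the quant filename below is the Q4_K_M entry from the table that follows.

```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub (requires huggingface_hub)
llm = Llama.from_pretrained(
    repo_id="mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF",
    filename="llama3.2_3b_122824_uncensored.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```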
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3b_122824_uncensored-i1-GGUF/resolve/main/llama3.2_3b_122824_uncensored.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
adriantheuma/raven-lora-no-tools | adriantheuma | "2024-01-25T10:00:07" | 0 | 0 | peft | [
"peft",
"en",
"dataset:adriantheuma/raven-data",
"license:apache-2.0",
"region:us"
] | null | "2024-01-22T10:48:24" | ---
library_name: peft
license: apache-2.0
datasets:
- adriantheuma/raven-data
language:
- en
---
### Training details
* Prompt tokenisation: [LlamaTokenizer](https://huggingface.co/docs/transformers/model_doc/llama2#transformers.LlamaTokenizer).
* Maximum context length: 1,024 tokens
* Per device train batch: 1
* Gradient accumulation: 128 steps (achieving the equivalent batch_size of 128)
* Quantisation: 8-bit
* Optimiser: adamw
* Learning rate: 3 × 10⁻⁴
* warmup_steps: 100
* epochs: 5
* Low Rank Adaptation (LoRA)
* rank: 16
* alpha: 16
* dropout: 0.05
* target modules: q_proj, k_proj, v_proj, and o_proj
This setup reduces the trainable parameters to 26,214,400 or 0.2% of the base [Llama 2 13B Chat](https://huggingface.co/docs/transformers/model_doc/llama2) model.
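A minimal sketch of the LoRA configuration listed above using the `peft` API (the base-model repo id is an assumption for Llama 2 13B Chat; 8-bit loading mirrors the quantisation noted above):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",  # assumed repo id for Llama 2 13B Chat
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # ~26.2M trainable params, ≈0.2% of the base
```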
### Training hardware
This model is trained on commodity hardware equipped with a:
* 13th Gen Intel(R) Core(TM) i7-13700KF CPU at 3.40 GHz
* 64 GB installed RAM
* NVIDIA GeForce RTX 4090 GPU with 24 GB onboard RAM.
The trained model consumed 100 GPU hours during training. |
csetesz/wieszt2000 | csetesz | "2025-03-19T18:21:36" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-19T16:45:01" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
HyperX-Sentience/Flux-Mini-GGUF | HyperX-Sentience | "2025-03-13T14:57:21" | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2025-03-13T13:33:00" | ---
license: apache-2.0
---
|
rahulnbiju007/SmolLM-135M-Instruct-newModel-lora-text-classification | rahulnbiju007 | "2025-01-24T11:31:08" | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-24T11:28:14" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
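A hedged starter sketch (not part of the generated card): the repo id and the `text-classification` pipeline tag come from this entry's metadata; the class labels are unknown, and the input sentence is illustrative.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="rahulnbiju007/SmolLM-135M-Instruct-newModel-lora-text-classification",
)
print(clf("This movie was surprisingly good."))
```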
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minhhien0811/deita_reason_arena_3552 | minhhien0811 | "2024-08-29T07:52:16" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-29T07:49:41" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
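A hedged starter sketch (not part of the generated card): the repo id comes from this entry's metadata, and chat templating is assumed from the `qwen2`/`conversational` tags.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minhhien0811/deita_reason_arena_3552"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain gradient accumulation in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```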
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kvisten/nocco-apple-flux-v8 | Kvisten | "2024-09-25T21:18:49" | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-09-25T21:18:35" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: Green Nocco BCAA+ Apple can on mossy stump. Slim, tall, metallic top. White
'Nocco' logo, 'BCAA+ Apple' below. Oblique stripes. Large 'Apple' label. 330ml,
5,000mg BCAA/100ml. Foggy ancient forest. Dewdrops on can. High-res photo
output:
url: samples/1727299068674__000002500_0.jpg
- text: a green aluminum can of 'NOCCO BCAA+ Apple' energy drink can in a hand of
a person
output:
url: samples/1727299077455__000002500_1.jpg
- text: Green Nocco Apple Can on a sandy beach at sunset, surrounded by seashells
and gentle ocean waves
output:
url: samples/1727299086225__000002500_2.jpg
- text: The image is a high-resolution photograph featuring a can of energy drink,
specifically a product called 'Nocco BCAA+ Apple.' The slim and tall can is
predominantly green with a metallic silver top, and it is positioned vertically
in the center of the frame. The brand name 'Nocco' is prominently displayed
in bold white letters at the top of the can, while 'BCAA+ Apple' is written
underneath it in smaller white text. there are two oblique green stripes at
the center. Below that, the word 'Apple' is printed in a larger, bold white
font, indicating the flavor. The can also includes nutritional information,
specifying that it contains 330 ml and is 5.000 mg of BCAA (Branched Chain
Amino Acids) per 100 ml of the product. on top of a wooden table during goldenhour,
window open on the sea landscape
output:
url: samples/1727299094994__000002500_3.jpg
- text: Green Nocco Apple Can, green aluminum can dropping into crystal-clear water,
apple taste, commercial-style, the water should create dramatic splashes and
bubbles, surrounding the can in all directions, capturing the moment of
impact, high-resolution, colorful, (from above:1.2), photo by Gregory Colbert
output:
url: samples/1727299103766__000002500_4.jpg
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: Green Nocco Apple Can
license: apache-2.0
---
# nocco_apple_v8
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `Green Nocco Apple Can` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/Kvisten/nocco-apple-flux-v8/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Kvisten/nocco-apple-flux-v8', weight_name='nocco_apple_v8.safetensors')
image = pipeline("Green Nocco BCAA+ Apple can on mossy stump. Slim, tall, metallic top. White 'Nocco' logo, 'BCAA+ Apple' below. Oblique stripes. Large 'Apple' label. 330ml, 5,000mg BCAA/100ml. Foggy ancient forest. Dewdrops on can. High-res photo").images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
awindb/swinv2-tiny-patch4-window8-256-finetuned-lora-food101 | awindb | "2024-01-08T15:07:36" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:adapter:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"region:us"
] | null | "2024-01-08T14:34:25" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- cats_vs_dogs
metrics:
- accuracy
base_model: microsoft/swinv2-tiny-patch4-window8-256
model-index:
- name: swinv2-tiny-patch4-window8-256-finetuned-lora-food101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-lora-food101
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0096
- Accuracy: 1.0
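
Since only the LoRA adapter weights are published here, inference requires loading the base model and attaching the adapter with PEFT. A minimal sketch (the two-label head and the `ignore_mismatched_sizes` flag are assumptions inferred from the cats_vs_dogs dataset, not part of this card):

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "microsoft/swinv2-tiny-patch4-window8-256"
adapter_id = "awindb/swinv2-tiny-patch4-window8-256-finetuned-lora-food101"

processor = AutoImageProcessor.from_pretrained(base_id)
# The base checkpoint ships an ImageNet head; re-initialize it for two classes.
base = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())
```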
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.0096 | 1.0 |
| No log | 2.0 | 2 | 0.0025 | 1.0 |
| No log | 3.0 | 3 | 0.0006 | 1.0 |
| No log | 4.0 | 4 | 0.0002 | 1.0 |
| No log | 5.0 | 5 | 0.0001 | 1.0 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Gergoe/mt5-small-finetuned-amazon-en-es | Gergoe | "2022-05-16T22:42:55" | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-05-01T19:48:09" | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2891
- Rouge1: 15.35
- Rouge2: 6.4925
- Rougel: 14.8921
- Rougelsum: 14.6312
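
For quick inference, the checkpoint can be loaded through the `summarization` pipeline. A minimal sketch (the review text is a placeholder, not taken from the training data):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="Gergoe/mt5-small-finetuned-amazon-en-es"
)
review = "I bought this for my daughter and she loves it. Bright colors, sturdy build, and it shipped fast."  # hypothetical example
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```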
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0622 | 1.0 | 1276 | 3.5617 | 13.2417 | 4.8928 | 12.8258 | 12.8078 |
| 4.0768 | 2.0 | 2552 | 3.4329 | 14.5681 | 6.4922 | 14.0621 | 13.9709 |
| 3.7736 | 3.0 | 3828 | 3.3393 | 15.1942 | 6.5262 | 14.7138 | 14.6049 |
| 3.5951 | 4.0 | 5104 | 3.3122 | 14.8813 | 6.2962 | 14.507 | 14.3477 |
| 3.477 | 5.0 | 6380 | 3.2991 | 15.0992 | 6.3888 | 14.8397 | 14.5606 |
| 3.4084 | 6.0 | 7656 | 3.3035 | 15.1897 | 6.2292 | 14.6686 | 14.4488 |
| 3.3661 | 7.0 | 8932 | 3.2959 | 15.3489 | 6.5702 | 14.9211 | 14.701 |
| 3.3457 | 8.0 | 10208 | 3.2891 | 15.35 | 6.4925 | 14.8921 | 14.6312 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.7.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e12 | theojolliffe | "2022-05-09T08:38:28" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-08T20:46:19" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e12
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8157
- Rouge1: 56.7429
- Rouge2: 41.0185
- Rougel: 44.1014
- Rougelsum: 54.8121
- Gen Len: 142.0
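
For inference, the checkpoint can also be driven directly through the seq2seq API rather than a pipeline. A minimal sketch (the input text is a placeholder; `max_length=142` mirrors the generation length reported above):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the document to summarise."  # hypothetical input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```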
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.5037 | 1.0 | 795 | 1.0815 | 52.4727 | 33.4915 | 35.3774 | 50.1955 | 142.0 |
| 0.8894 | 2.0 | 1590 | 0.9462 | 52.8867 | 34.0406 | 36.5249 | 50.4636 | 141.5741 |
| 0.7037 | 3.0 | 2385 | 0.8841 | 53.7966 | 35.0969 | 38.4158 | 51.3369 | 142.0 |
| 0.4914 | 4.0 | 3180 | 0.8437 | 52.6766 | 34.0573 | 36.8907 | 50.3088 | 142.0 |
| 0.3945 | 5.0 | 3975 | 0.8067 | 54.3147 | 36.2081 | 39.6366 | 52.1494 | 142.0 |
| 0.2799 | 6.0 | 4770 | 0.8403 | 54.2813 | 37.0786 | 39.9196 | 51.9176 | 141.9815 |
| 0.2211 | 7.0 | 5565 | 0.8207 | 53.9403 | 36.517 | 39.0372 | 51.4491 | 141.9815 |
| 0.1795 | 8.0 | 6360 | 0.8014 | 55.6607 | 39.3082 | 41.8295 | 53.4674 | 142.0 |
| 0.1428 | 9.0 | 7155 | 0.8051 | 55.0575 | 38.823 | 41.8849 | 52.9606 | 142.0 |
| 0.1358 | 10.0 | 7950 | 0.8149 | 56.6986 | 41.0 | 43.5207 | 54.6402 | 142.0 |
| 0.1122 | 11.0 | 8745 | 0.8134 | 56.5416 | 40.9495 | 44.2989 | 54.5623 | 142.0 |
| 0.0873 | 12.0 | 9540 | 0.8157 | 56.7429 | 41.0185 | 44.1014 | 54.8121 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
feliphe-galiza/llama-3-1B.italian-hypernyms | feliphe-galiza | "2024-04-21T20:15:32" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-21T20:15:23" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** feliphe-galiza
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
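
A minimal loading sketch (assuming merged full weights were uploaded to this repository rather than a standalone adapter; the prompt is a hypothetical example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "feliphe-galiza/llama-3-1B.italian-hypernyms"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Qual è l'iperonimo di 'gatto'?"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```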
|
trenden/abb55420-e09a-4163-a929-ed4673a4768d | trenden | "2025-01-12T15:15:26" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"region:us"
] | null | "2025-01-12T14:44:57" | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: abb55420-e09a-4163-a929-ed4673a4768d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ec8fd381e34173a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ec8fd381e34173a_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/abb55420-e09a-4163-a929-ed4673a4768d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ec8fd381e34173a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 27ad312b-6f5c-48af-b4ca-a7144b0876b3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 27ad312b-6f5c-48af-b4ca-a7144b0876b3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# abb55420-e09a-4163-a929-ed4673a4768d
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.2355 | 0.0000 | 1 | 13.6415 |
| 13.8652 | 0.0001 | 3 | 13.1060 |
| 9.3058 | 0.0002 | 6 | 6.7350 |
| 3.1001 | 0.0004 | 9 | 3.5666 |
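
Because this repository contains a LoRA adapter, inference requires the base checkpoint plus PEFT. A minimal sketch (prompt formatting follows the `'{instruction} {input}'` template from the config above; the question text is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
adapter_id = "trenden/abb55420-e09a-4163-a929-ed4673a4768d"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Does drug X improve outcome Y? Context: ..."  # hypothetical question and context
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```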
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/f7a237aa-36c9-4c0e-88df-d841e330c7c0 | mrferr3t | "2025-01-28T10:52:21" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-28T10:48:15" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f7a237aa-36c9-4c0e-88df-d841e330c7c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/f7a237aa-36c9-4c0e-88df-d841e330c7c0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f7a237aa-36c9-4c0e-88df-d841e330c7c0
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6161 | 0.0002 | 1 | 3.8398 |
| 3.3484 | 0.0007 | 3 | 3.8375 |
| 3.9161 | 0.0015 | 6 | 3.8177 |
| 3.1883 | 0.0022 | 9 | 3.7493 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
louisbrulenaudet/lemone-embed-pro | louisbrulenaudet | "2024-10-02T22:56:30" | 6,515 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:303863",
"loss:CachedGISTEmbedLoss",
"legal",
"taxation",
"fiscalité",
"tax",
"custom_code",
"fr",
"dataset:louisbrulenaudet/code-impots",
"dataset:louisbrulenaudet/code-impots-annexe-iv",
"dataset:louisbrulenaudet/code-impots-annexe-iii",
"dataset:louisbrulenaudet/code-impots-annexe-i",
"dataset:louisbrulenaudet/code-impots-annexe-ii",
"dataset:louisbrulenaudet/livre-procedures-fiscales",
"dataset:louisbrulenaudet/bofip",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-09-29T23:29:08" | ---
base_model: Alibaba-NLP/gte-multilingual-base
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:303863
- loss:CachedGISTEmbedLoss
- legal
- taxation
- fiscalité
- tax
widget:
- source_sentence: >-
Élucider la signification de 'navire de plaisance' d'après l'article 217
undecies du Code général des impôts et détailler les différents types
d'investissements concernés.
sentences:
- >-
Selon l'article 217 undecies du Code général des impôts, pour bénéficier de
la déduction fiscale, les investissements doivent être réalisés sous forme
de souscriptions au capital de sociétés qui gèrent des concessions de
service public local. Ces investissements doivent être spécifiquement
orientés vers des activités productives assignées à ces concessions pour une
durée minimale de cinq ans. En outre, ces concessions doivent opérer
exclusivement dans des secteurs éligibles situés dans les départements ou
collectivités d'outre-mer, contribuant ainsi au développement économique des
territoires ultramarins.
- >-
Dans le contexte de l'article 217 undecies du Code général des impôts, un
'navire de plaisance' désigne une embarcation spécifiquement utilisée pour
des activités de loisir, excluant ainsi toute utilisation professionnelle
telle que la pêche ou le transport. Les investissements pertinents pouvant
bénéficier de cet agrément incluent non seulement l'achat ou la construction
de ces navires, mais aussi leur utilisation dans des activités de tourisme
comme la location sous différentes formes, les voyages organisés et la pêche
de loisir, ainsi que les investissements dans les infrastructures et
équipements nécessaires à ces activités touristiques.
- >-
L'article R. 257 B-1 du Livre des Procédures Fiscales organise les modalités
pratiques relatives à l'information du contribuable quant à la mise en œuvre
d'une compensation fiscale de recouvrement. Cette disposition confère au
contribuable le droit d'être informé en amont de la réalisation de la
compensation. Ce dispositif implique que le comptable public est tenu de
communiquer avec le contribuable, afin de l'éclairer sur le processus et les
conséquences de cette opération. L'information préalable joue un rôle
crucial, car elle accorde au redevable l'opportunité de comprendre les
ajustements à venir sur ses comptes vis-à-vis de l'administration fiscale.
- source_sentence: >-
Énumérer en détail les informations requises par l'article 50-00 G, Annexe
IV du Code général des impôts concernant la déclaration récapitulative
mensuelle que doit établir l'entrepositaire agréé.
sentences:
- >-
Pour se conformer aux dispositions imposées par l'article 50-00 G, Annexe IV
du Code général des impôts, l'entrepositaire agréé est tenu de rédiger une
déclaration récapitulative mensuelle distincte pour chaque entrepôt fiscal
suspensif des droits d'accises qu'il gère. Une telle déclaration doit
comprendre : les noms ou la dénomination de l'entreprise, l'adresse du siège
social ou du principal établissement, le numéro d'identification de
l'entrepôt fiscal, l'adresse de l'entrepôt fiscal, le lieu de tenue de la
comptabilité matières, l'année et le mois concernés par la déclaration, la
date et le lieu d'établissement de la déclaration ainsi que la signature et
le cachet de l'entreprise. Elle doit également indiquer la raison sociale de
la caution ou, le cas échéant, la mention 'Dispense'. Au besoin, elle peut
comporter des mentions relatives aux comptes d'âge ou de vieillissement, les
références aux contrats d'achat qui exigent un visa de l'établissement
mentionné dans l'article L. 621-1 du Code rural et de la pêche maritime, les
numéros d'enregistrement des contrats d'achat et les numéros des
déclarations de transactions soumises aux interprofessions, ainsi que l'avis
de blocage, l'engagement de garantie ou la mainlevée de warrant agricole ou
de l'engagement de garantie, selon l'applicabilité à chaque cas particulier.
- >-
L'intégration de Mayotte dans le champ d'application du Code général des
impôts, rendant ainsi les entreprises mahoraises éligibles au crédit d'impôt
pour investissements productifs outre-mer, a été actée par le législateur au
travers de la loi n° 2010-1487 du 7 décembre 2010. Cette loi a élevé Mayotte
au statut de département, étendant à ce titre l'ensemble des dispositions du
CGI. L'ordonnance n° 2013-837 du 19 septembre 2013 est venue quant à elle
expliciter les adaptations nécessaires au code des douanes et au CGI pour
Mayotte. Conséquence directe de ces textes, les entreprises exerçant à
Mayotte peuvent prétendre au crédit d'impôt en vigueur dès le 1er janvier
2014, conformément à l'article 244 quater W du CGI.
- >-
Le relevé des frais généraux prévu à l'article 54 quater du Code général des
impôts doit comporter les renseignements propres à l'exercice pour lequel il
est fourni et ceux qui se rapportent à l'exercice précédent.
- source_sentence: >-
Quels sont les éléments que doit contenir la demande déposée auprès de la
direction générale des finances publiques pour que les sociétés, compagnies
ou entreprises françaises puissent bénéficier du régime fiscal prévu pour
l'émission de séries spéciales d'obligations à l'étranger ?
sentences:
- >-
Pour le premier exercice comptable de l'entreprise d'une durée de quatorze
mois, le plafond standard d'exonération de 61 000 € est ajusté au prorata de
la durée, donnant un nouveau plafond d'exonération de 71 166 € (61 000 € x
14/12).
- >-
Pour être admises à bénéficier du régime fiscal prévu au 1 de l'article 131
ter du Code général des impôts, les sociétés, compagnies ou entreprises
françaises qui se proposent d'émettre à l'étranger des séries spéciales
d'obligations, doivent déposer au préalable une demande spéciale à la
direction générale des finances publiques. Cette demande indique la date et
les conditions de l'émission ainsi que le nombre, le montant et les numéros
des titres à émettre.
- >-
Pour atténuer certaines contraintes fiscales, les sociétés étrangères
exerçant une activité sur le territoire français ont la possibilité de
restreindre le montant de la retenue à la source, qu'elles sont tenues de
verser en vertu de l'article 115 quinquies du Code général des impôts, à une
somme équivalente à l'impôt définitivement dû. Cette réduction prend en
considération les prévisions de distributions de dividendes et le lieu de
résidence fiscale des actionnaires. Pour bénéficier de ce dispositif,
lesdites sociétés doivent expressément formuler une demande en référence à
la directive pertinente et la joindre à la déclaration n° 2777-D-SD. Cela
implique un suivi rigoureux de l'impact des distributions réelles et des
domiciliations des bénéficiaires afin d'éviter les insuffisances de
versement, sous peine de régularisation ultérieure accompagnée de l'intérêt
de retard selon les articles 1727 et 1729 du même code.
- source_sentence: >-
Expliquez comment est organisé le recouvrement de l'impôt sur la fortune
immobilière en référence aux modalités décrites dans l'article 1658 du Code
général des impôts.
sentences:
- >-
Dans le contexte de la déclaration des revenus fonciers, la société doit
émettre une attestation annuelle qui doit être remise à chaque associé au
plus tard le deuxième jour ouvré après le 1er mai, selon les modalités
fixées par le décret n° 2009-316 du 20 mars 2009. Cette attestation revêt
une importance cruciale puisqu'elle permet aux associés de renseigner
correctement leur déclaration de revenus fonciers via l'imprimé n° 2044
spécial. Elle doit recenser des informations précises : l'identité et
l'adresse de l'associé, la détention des parts au cours de l'année, le
respect des conditions de loyer, le montant de l'amortissement ainsi que le
revenu net foncier qui découle des parts de l'associé, tant dans le régime
de droit commun qu'en incluant la déduction liée à l'amortissement.
- >-
Le recouvrement de l'impôt sur la fortune immobilière s'orchestre
conformément aux dispositions disposées dans l'article 1658 du Code général
des impôts. Cela implique que les techniques, les procédures, ainsi que les
moyens d'exécution prévus pour le recouvrement de cet impôt sont alignés sur
ceux établis pour l'impôt sur le revenu.
- >-
L'article 981 du Code général des impôts établit que les normes régissant
les droits d'enregistrement, sauf spécification contraire, sont adaptées à
la gestion de l'impôt sur la fortune immobilière. Cela signifie que les
méthodes de contrôle, telles que les audits et inspections, ainsi que les
procédures de règlement des contentieux sont extensibles à l'impôt sur la
fortune immobilière. Cette approche garantit une uniformité des pratiques
administratives fiscales, facilitant ainsi une application homogène et
cohérente des lois fiscales relatives à la fortune immobilière.
- source_sentence: >-
Exposer les modalités de dérogation au secret fiscal autorisant le juge à
demander des documents fiscaux nécessaires pour résoudre un litige, en vertu
de l'article L. 143 du Livre des Procédures Fiscales.
sentences:
- >-
Selon les dispositions du Bulletin officiel des finances
publiques-instructions administratives, spécifiquement le
BOI-DJC-SECR-10-20-50, le procureur de la République détient le droit, dans
le contexte de toute investigation judiciaire, qu'elle relève d'une enquête
de flagrance, préliminaire ou autre, de solliciter des renseignements ou
documents essentiels à l'enquête auprès de l'administration fiscale. Cette
sollicitation peut être adressée directement ou via un officier de police
judiciaire agissant sur une réquisition du procureur. Conformément à
l'article L.141 A du Livre des procédures fiscales, le secret fiscal ne
constitue pas un frein légal à la transmission des informations ou documents
exigés par le procureur.
- >-
L'article 199 novovicies du Code général des impôts dispose de modalités de
réduction d'impôt spécifiques pour les transactions d'acquisition et de
construction durant les années 2023 et 2024. En 2023, les bénéfices de cette
réduction s'établissent à 4,5 % pour la première phase triennale et à 2,5 %
pour la seconde. Pour les opérations effectuées en 2024, les réductions
offertes sont de 3 % pendant la première période triennale et de 2 % pour la
suivante. Ces pourcentages se rapportent aux acquisitions non mentionnées au
5° du B du I ainsi qu'aux constructions référencées au 1° du B du I, avec
nécessité que le permis de construire ait été délivré durant l'année
correspondante.
- >-
Conformément aux dispositions de l'article L. 143 du Livre des Procédures
Fiscales, le secret fiscal peut être levé dans le cadre d'un litige par
décision du juge. Cette mesure vise à autoriser la présentation de documents
fiscaux, jugés utiles par le magistrat pour trancher une affaire. La levée
de ce secret est toutefois soumise à une interprétation stricte, de sorte
que seuls les documents réellement susceptibles d'éclairer le juge sur
l'étendue du préjudice des individus impliqués peuvent être divulgués. Les
renseignements qui n'ont de pertinence que pour des questions périphériques
de la procédure ou qui se rapportent uniquement à l'application d'un
jugement déjà prononcé sont exclus de cette possibilité de communication.
co2_eq_emissions:
emissions: 2036.3553910202609
energy_consumed: 5.516569338938681
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: AMD EPYC 9V84 96-Core Processor
ram_total_size: 314.68053817749023
hours_used: 9.954
hardware_used: 1 x NVIDIA H100 NVL
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Lemone
type: Lemone
metrics:
- type: cosine_accuracy@1
value: 0.9736673089274245
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9916506101477199
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.993577392421323
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9967886962106616
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9736673089274245
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33055020338257335
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1987154784842646
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09967886962106615
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9736673089274245
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9916506101477199
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.993577392421323
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9967886962106616
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9865226900324854
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9830947793375538
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9832069316895906
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.9736673089274245
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9916506101477199
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.993577392421323
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9967886962106616
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9736673089274245
name: Dot Precision@1
- type: dot_precision@3
value: 0.33055020338257335
name: Dot Precision@3
- type: dot_precision@5
value: 0.1987154784842646
name: Dot Precision@5
- type: dot_precision@10
value: 0.09967886962106615
name: Dot Precision@10
- type: dot_recall@1
value: 0.9736673089274245
name: Dot Recall@1
- type: dot_recall@3
value: 0.9916506101477199
name: Dot Recall@3
- type: dot_recall@5
value: 0.993577392421323
name: Dot Recall@5
- type: dot_recall@10
value: 0.9967886962106616
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9865226900324854
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9830947793375538
name: Dot Mrr@10
- type: dot_map@100
value: 0.9832069316895906
name: Dot Map@100
license: apache-2.0
language:
- fr
datasets:
- louisbrulenaudet/code-impots
- louisbrulenaudet/code-impots-annexe-iv
- louisbrulenaudet/code-impots-annexe-iii
- louisbrulenaudet/code-impots-annexe-i
- louisbrulenaudet/code-impots-annexe-ii
- louisbrulenaudet/livre-procedures-fiscales
- louisbrulenaudet/bofip
---
<img src="assets/thumbnail.webp">
# Lemone-Embed: A Series of Fine-Tuned Embedding Models for French Taxation
<div class="not-prose bg-gradient-to-r from-gray-50-to-white text-gray-900 border" style="border-radius: 8px; padding: 0.5rem 1rem;">
<p>This series comprises 7 models: 3 base models of different sizes trained for 1 epoch, 3 models trained for 2 epochs forming the Boost series, and a Pro model with a non-RoBERTa architecture.</p>
</div>
This sentence transformers model, specifically designed for French taxation, has been fine-tuned on a dataset comprising 43 million tokens, integrating a blend of semi-synthetic and fully synthetic data generated by GPT-4 Turbo and Llama 3.1 70B, which have been further refined through evol-instruction tuning and manual curation.
The model is tailored to meet the specific demands of information retrieval across large-scale tax-related corpora, supporting the implementation of production-ready Retrieval-Augmented Generation (RAG) applications. Its primary purpose is to enhance the efficiency and accuracy of legal processes in the taxation domain, with an emphasis on delivering consistent performance in real-world settings, while also contributing to advancements in legal natural language processing research.
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 7fc06782350c1a83f88b15dd4b38ef853d3b8503 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Developed by:** Louis Brulé Naudet
- **Funded by:** Microsoft for Startups
- **Shared by:** Louis Brulé Naudet
- **Model type:** Sentence Transformers
- **Language(s) (NLP):** FR
- **License:** Apache 2
- **Finetuned from model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base)
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("louisbrulenaudet/lemone-gte-embed-max")
# Run inference
sentences = [
"Exposer les modalités de dérogation au secret fiscal autorisant le juge à demander des documents fiscaux nécessaires pour résoudre un litige, en vertu de l'article L. 143 du Livre des Procédures Fiscales.",
"Conformément aux dispositions de l'article L. 143 du Livre des Procédures Fiscales, le secret fiscal peut être levé dans le cadre d'un litige par décision du juge. Cette mesure vise à autoriser la présentation de documents fiscaux, jugés utiles par le magistrat pour trancher une affaire. La levée de ce secret est toutefois soumise à une interprétation stricte, de sorte que seuls les documents réellement susceptibles d'éclairer le juge sur l'étendue du préjudice des individus impliqués peuvent être divulgués. Les renseignements qui n'ont de pertinence que pour des questions périphériques de la procédure ou qui se rapportent uniquement à l'application d'un jugement déjà prononcé sont exclus de cette possibilité de communication.",
"Selon les dispositions du Bulletin officiel des finances publiques-instructions administratives, spécifiquement le BOI-DJC-SECR-10-20-50, le procureur de la République détient le droit, dans le contexte de toute investigation judiciaire, qu'elle relève d'une enquête de flagrance, préliminaire ou autre, de solliciter des renseignements ou documents essentiels à l'enquête auprès de l'administration fiscale. Cette sollicitation peut être adressée directement ou via un officier de police judiciaire agissant sur une réquisition du procureur. Conformément à l'article L.141 A du Livre des procédures fiscales, le secret fiscal ne constitue pas un frein légal à la transmission des informations ou documents exigés par le procureur.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `Lemone`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9737 |
| cosine_accuracy@3 | 0.9917 |
| cosine_accuracy@5 | 0.9936 |
| cosine_accuracy@10 | 0.9968 |
| cosine_precision@1 | 0.9737 |
| cosine_precision@3 | 0.3306 |
| cosine_precision@5 | 0.1987 |
| cosine_precision@10 | 0.0997 |
| cosine_recall@1 | 0.9737 |
| cosine_recall@3 | 0.9917 |
| cosine_recall@5 | 0.9936 |
| cosine_recall@10 | 0.9968 |
| cosine_ndcg@10 | 0.9865 |
| cosine_mrr@10 | 0.9831 |
| **cosine_map@100** | **0.9832** |
| dot_accuracy@1 | 0.9737 |
| dot_accuracy@3 | 0.9917 |
| dot_accuracy@5 | 0.9936 |
| dot_accuracy@10 | 0.9968 |
| dot_precision@1 | 0.9737 |
| dot_precision@3 | 0.3306 |
| dot_precision@5 | 0.1987 |
| dot_precision@10 | 0.0997 |
| dot_recall@1 | 0.9737 |
| dot_recall@3 | 0.9917 |
| dot_recall@5 | 0.9936 |
| dot_recall@10 | 0.9968 |
| dot_ndcg@10 | 0.9865 |
| dot_mrr@10 | 0.9831 |
| dot_map@100 | 0.9832 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
* Size: 303,863 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 51.44 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 197.8 tokens</li><li>max: 1607 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 224.41 tokens</li><li>max: 2735 tokens</li></ul> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
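
For reference, a minimal sketch of how a `CachedGISTEmbedLoss` with these parameters could be instantiated in Sentence Transformers (the choice of guide model here is an assumption for illustration):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)
guide = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# The cached variant trades compute for memory, enabling large effective batch sizes.
loss = losses.CachedGISTEmbedLoss(model=model, guide=guide, temperature=0.01)
```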
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 5.517 kWh
- **Carbon Emitted**: 2.036 kg of CO2
- **Hours Used**: 9.954 hours
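
For reproducibility, a minimal sketch of how such measurements can be collected with CodeCarbon (the tracker's placement around the training call is an assumption):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # emissions in kg of CO2eq
print(emissions_kg)
```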
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA H100 NVL
- **CPU Model**: AMD EPYC 9V84 96-Core Processor
- **RAM Size**: 314.68 GB
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2024,
author = {Louis Brulé Naudet},
title = {Lemone-Embed: A Series of Fine-Tuned Embedding Models for French Taxation},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-embed-pro}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
sara3023/sara | sara3023 | "2025-03-26T10:59:18" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T10:59:08" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sara3023
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
usman123haziq321/MY-model-UMAN | usman123haziq321 | "2024-09-29T17:44:46" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-29T17:39:11" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
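
In the absence of official usage instructions, a minimal hedged sketch based on the repository tags (`bert`, `text-classification`); the label set is unknown:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="usman123haziq321/MY-model-UMAN")
print(classifier("Example input text"))  # hypothetical input; labels depend on training data
```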
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aleegis10/25580dc1-512f-45d1-8655-40f043e54c0c | aleegis10 | "2025-01-23T01:35:23" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T01:04:24" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 25580dc1-512f-45d1-8655-40f043e54c0c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 88477edce7fa88a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/88477edce7fa88a5_train_data.json
type:
field_input: knowledge
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis10/25580dc1-512f-45d1-8655-40f043e54c0c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/88477edce7fa88a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c4c6d4ec-5dd3-4ec3-a806-0da11ee598c9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c4c6d4ec-5dd3-4ec3-a806-0da11ee598c9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 25580dc1-512f-45d1-8655-40f043e54c0c
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3654
## Model description
More information needed
## Intended uses & limitations
More information needed
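In the meantime, a minimal sketch for loading the LoRA adapter on top of its base model (untested; assumes a standard PEFT setup):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "aleegis10/25580dc1-512f-45d1-8655-40f043e54c0c")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B-Instruct")
```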
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.619 | 0.0001 | 1 | 0.5536 |
| 0.2675 | 0.0067 | 50 | 0.4052 |
| 0.2608 | 0.0135 | 100 | 0.3826 |
| 0.2499 | 0.0202 | 150 | 0.3697 |
| 0.289 | 0.0270 | 200 | 0.3654 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kenken6696/Llama-3.2-3B_known_unknown_fix_middle | kenken6696 | "2024-12-23T08:09:04" | 80 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-23T08:06:05" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
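As a minimal, untested sketch (assuming standard 🤗 text-generation usage for this Llama checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kenken6696/Llama-3.2-3B_known_unknown_fix_middle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```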
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dema99/Unit1 | Dema99 | "2024-03-25T22:27:16" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-25T22:26:54" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.91 +/- 19.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(repo_id="Dema99/Unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
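Once loaded, a short evaluation rollout might look like this (a sketch, assuming Gymnasium with Box2D is installed):

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```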
|
NovelAI/nerdstash-tokenizer-v1 | NovelAI | "2023-08-02T23:11:38" | 0 | 6 | null | [
"tokenizer",
"novelai",
"sentencepiece",
"en",
"ja",
"license:gpl-2.0",
"region:us"
] | null | "2023-08-02T22:50:10" | ---
license: gpl-2.0
language:
- en
- ja
tags:
- tokenizer
- novelai
- sentencepiece
---
# Tokenizer
Finetune here, to talk a bit about [NovelAI](https://novelai.net/)'s new tokenizer that I worked on. First, a quick reminder. In most cases, our models don't see words as individual letters. Instead, text is broken down into tokens, which are words or word fragments. For example, the sentence “`The quick brown fox jumps over the goblin.`” would tokenize as “`The| quick| brown| fox| jumps| over| the| go|bl|in.`” in the Pile tokenizer used by GPT-NeoX 20B and Krake, with each | signifying a boundary between tokens.
When deciding on a tokenizer for a model, there are various criteria to consider. The first and most obvious is the vocabulary size. It may be tempting to just set it very high to ensure that every word or even multiple words, as in the case of the tokenizer used by AI21's Jurassic models, gets its own distinct token. However, this has the drawback that the model will be less able to generalize. That means it will not be able to make use of meaningful patterns in how words are spelt, such as similarities between words ending in “-ize”. It will also be less robust against misspellings. At the same time, the vocabulary of the tokenizer should not be too small. Common words should have their own token. The same goes for Unicode characters that are likely to show up in tokenized texts, because otherwise they will have to be constructed by the model byte-by-byte, which is much harder for the model. A good trade-off with regards to vocabulary size is around 32000 tokens for a single language vocabulary. This also has the benefit of fitting easily within 16 bits, which makes handling tokenized data easier in many cases.
The type of tokenizer is another important decision to make. Unigram tokenizers have been shown to produce much more meaningful tokenizations of words, while so far the predominantly used tokenizer type for large language models (LLM) is BPE (byte pair encoding). The most common implementation of BPE is probably the GPT2 one, but Google's sentencepiece implementation of BPE offers the not so slight advantage of natively being able to tokenize Unicode characters directly, without having to assemble them from bytes, which requires additional tokens representing partial Unicode code points to be added to the vocabulary, wasting some additional space. For example, “🙂” consists of four bytes “`F0 9F 99 82`”, so in traditional BPE, `F0` would first get merged with `9F` to make up `F09F`, which is then merged with `99` to make up `F09F99`, which is then merged with `82`, so two additional intermediate tokens would have to be added to the vocabulary. At the same time, sentencepiece also supports tokenizing arbitrary binary data using byte tokens.
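Those byte values are easy to verify in Python:

```python
# UTF-8 encoding of the slightly smiling face emoji.
print("🙂".encode("utf-8").hex(" ").upper())  # prints: F0 9F 99 82
```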
Finally, the compression ratio achieved by the tokenizer is important to consider. If a given text tokenizes into fewer tokens, the LLM can see more text at once within its fixed maximum context size, which matters to users of the LLM. It also influences how much text you need to reach a certain number of tokens if, say, you are trying to meet a target amount of training data. If your tokenizer compresses text less efficiently, you may more easily assemble a dataset of a given token count, but it stands to reason that a model trained on such a less efficiently tokenized dataset will learn less than one trained on a same-sized dataset tokenized with a higher compression ratio, because in effect it will see fewer bits of actual information during training.
With all these things in mind, we decided that we wanted our own tokenizer for the models we will train, one that is better optimized for our use cases, such as storytelling.
Tokenizers are trained on data, so we started by extracting small randomized subsets from the various distinct subsets of our model training dataset and used these to evaluate the available tokenizer training approaches. Both Hugging Face's tokenizers library and Google's sentencepiece support training tokenizers of different types. A preliminary investigation showed that sentencepiece's trainer is more memory efficient, although a training dataset in the low double-digit gibibytes still required a compute node with 1TB of RAM to run successfully. Due to this, we decided to use sentencepiece.
We originally decided on a vocabulary size of 32000, but when training Genji V2, we found that modifying an existing tokenizer to support an additional language was not a pleasant experience. As it seems likely that we will want to do similar [language transfer learning](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a) in the future, we have decided to have our tokenizer accommodate both English and Japanese from the start. For this reason, we decided to double the vocabulary size to 64000, which came close to filling up the available 16-bit token ID space, so we went all the way to a vocabulary size of 65535 tokens. During tokenizer training, I carefully balanced the training data so that Latin-alphabet tokens of at least 2 characters and Japanese-language tokens take up approximately the same amount of token space. Bumping the vocabulary size up to 65535 also allows more Unicode character tokens such as emoji. For the Japanese part of tokenizer training data, we used our existing Genji training data and a comparatively smaller amount of Japanese Wikipedia.
We have manually added tokens for certain multi-whitespace strings and have set up the tokenizer in such a way that numbers are tokenized digit by digit. Tokenizing numbers digit by digit may slightly reduce compression ratio in number heavy texts, but it will also allow the LLM to more effectively learn how to handle numeric values.
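The digit-by-digit behavior is easy to check with sentencepiece, assuming you have the `tokenizer.model` file locally (the exact output depends on the model file, so it is not shown here):

```python
import sentencepiece as spm

s = spm.SentencePieceProcessor(model_file="tokenizer.model")
# Each digit in "1234" should come back as its own token.
print(s.encode("It costs 1234 dollars.", out_type=str))
```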
Considering the possible benefits of Unigram tokenizers, we started out by training a Unigram tokenizer. This took multiple runs of rebalancing the dataset between languages and also between the different subsets of our main datasets to get the token distribution to look the way we want. Each Unigram training run took a few hours. For the sake of comparison, we also trained a BPE model, which again required multiple runs to rebalance the dataset. BPE runs ended up much slower, taking nearly a whole day.
Both tokenizers were then evaluated on a held-out part of the dataset. The idea was that, if the compression ratios are similar or Unigram is only slightly worse, we would use the Unigram tokenizer to benefit from the more natural word segmentation. We found that the BPE tokenizer has a 25-29% higher compression ratio on the largest parts of our English language dataset. This unexpectedly large gap in performance led us to choose the BPE tokenizer over the Unigram one and also explains the continuing prevalence of BPE tokenizers for LLMs. We also compared the compression ratio of our tokenizer to the LLaMa tokenizer, which is a sentencepiece based BPE tokenizer with a 32000 token vocabulary. In comparison to the LLaMa tokenizer, we find our tokenizer to achieve a 7-19% higher compression ratio on the largest parts of our English language dataset.
Finally, I would like to give some stats about token distribution. Our tokenizer contains 28586 tokens made up of Latin-alphabet characters with a minimum length of two. Tokens with a leading space are included in this. It contains 18123 Japanese tokens longer than a single character and 9626 tokens for Japanese and Chinese characters, which cannot be easily told apart for the sake of these stats due to Unicode Han unification. 9200 other tokens are included. This space is taken up mostly by Unicode characters such as emoji.
For comparison, the LLaMa tokenizer contains 23964 tokens made up only of Latin-alphabet characters, no Japanese tokens longer than a single character, 836 Japanese characters, and 7224 other tokens.
## JavaScript implementation
The JavaScript implementation used by the NovelAI frontend can be found [here](https://github.com/NovelAI/nai-js-tokenizer).
## V2
For [V2](https://huggingface.co/NovelAI/nerdstash-tokenizer-v2/), the original digit special tokens were replaced with English contractions. Digits will therefore be encoded using the corresponding byte tokens instead.
## Example usage with transformers
Since it seems to be the most up-to-date class for using sentencepiece tokenizers in transformers, this tokenizer uses the `LlamaTokenizer` class. Note that the `LlamaTokenizerFast` class is not supported. `AutoTokenizer` selects the fast version and is also not supported.
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("NovelAI/nerdstash-tokenizer-v1")
print(tokenizer.encode("Hello, world!"))
```
## Example usage with sentencepiece
```python
import sentencepiece as spm
s = spm.SentencePieceProcessor(model_file='tokenizer.model')
text = "The quick brown fox jumps over the goblin."
print("Text:", text)
print("Token IDs:", s.encode(text))
# Token IDs: [541, 1939, 6573, 22820, 22734, 712, 336, 34477, 49230]
print("Readable tokens:", s.encode(text, out_type=str))
# Readable tokens: ['The', '▁quick', '▁brown', '▁fox', '▁jumps', '▁over', '▁the', '▁goblin', '.']
```
## License
The tokenizer is licensed under the GNU General Public License, version 2. |
sail-rvc/darkyfnfmodel | sail-rvc | "2023-07-14T07:36:38" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:36:15" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# darkyfnfmodel
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:36:38
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
jkazdan/16R-16F-gemma-2-2b_hs2_iter1_sftsd0 | jkazdan | "2024-09-26T06:51:12" | 6 | 0 | null | [
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2024-09-26T06:48:36" | ---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: 16R-16F-gemma-2-2b_hs2_iter1_sftsd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 16R-16F-gemma-2-2b_hs2_iter1_sftsd0
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3956
- Num Input Tokens Seen: 14304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
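As a sketch (not the exact training code), these settings roughly correspond to the following 🤗 `TrainingArguments`; the `output_dir` value is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                     # placeholder
    learning_rate=8e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=0,
    gradient_accumulation_steps=16,       # 8 * 16 = 128 effective batch size
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
)
```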
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3956 | 0 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Mriganka1999/PandaSlideDense-v3usingSAC-PandaSlideDense-v3 | Mriganka1999 | "2024-06-12T13:15:43" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaSlideDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-12T13:10:16" | ---
library_name: stable-baselines3
tags:
- PandaSlideDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaSlideDense-v3
type: PandaSlideDense-v3
metrics:
- type: mean_reward
value: -21.93 +/- 5.74
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaSlideDense-v3**
This is a trained model of a **SAC** agent playing **PandaSlideDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

# The checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(
    repo_id="Mriganka1999/PandaSlideDense-v3usingSAC-PandaSlideDense-v3",
    filename="sac-PandaSlideDense-v3.zip",
)
model = SAC.load(checkpoint)
```
|
eswardivi/q-FrozenLake-v1-4x4-noSlippery | eswardivi | "2023-02-22T09:34:59" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-22T09:34:50" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` here is the pickle-based helper from the Deep RL course notebook.
model = load_from_hub(repo_id="eswardivi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
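A greedy rollout with the loaded Q-table might then look as follows (assuming the pickled dict exposes a `qtable` key, as in the Deep RL course notebooks):

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```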
|
justinj92/dolmin-0.2-Q3_K_M-GGUF | justinj92 | "2024-04-07T14:38:37" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-07T14:36:19" | ---
tags:
- llama-cpp
- gguf-my-repo
---
# justinj92/dolmin-0.2-Q3_K_M-GGUF
This model was converted to GGUF format from [`justinj92/dolmin-0.2`](https://huggingface.co/justinj92/dolmin-0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/justinj92/dolmin-0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo justinj92/dolmin-0.2-Q3_K_M-GGUF --model dolmin-0.2.Q3_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo justinj92/dolmin-0.2-Q3_K_M-GGUF --model dolmin-0.2.Q3_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m dolmin-0.2.Q3_K_M.gguf -n 128
```
|
rsouza17/vozreimp3 | rsouza17 | "2024-03-30T20:57:38" | 0 | 0 | null | [
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | "2024-03-29T16:45:55" | ---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mahmoud3899/bool | Mahmoud3899 | "2024-08-18T14:47:37" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-18T13:38:08" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
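As a minimal, untested sketch (the task is assumed from the `text-classification` tag; the label meanings are undocumented):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Mahmoud3899/bool")
print(clf("Is this statement true?"))
```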
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lixtic/rare-puppers | lixtic | "2023-12-13T15:31:19" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:31:13" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Cat

#### Dog

#### Horse

#### Lion
roa7n/gpt2-human_nontata_promoters-randomized_5_layers_0.0003_lr_8_e | roa7n | "2023-09-28T07:08:40" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-28T07:08:38" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
mlx-community/Qwen2.5-3B-Instruct-4bit | mlx-community | "2024-09-18T19:07:05" | 160,966 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:other",
"region:us"
] | text-generation | "2024-09-18T19:06:43" | ---
base_model: Qwen/Qwen2.5-3B
language:
- en
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- mlx
---
# mlx-community/Qwen2.5-3B-Instruct-4bit
The Model [mlx-community/Qwen2.5-3B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-3B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using mlx-lm version **0.18.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen2.5-3B-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mission-impossible-lms/partial-reverse-gpt2-no-pos | mission-impossible-lms | "2024-11-04T21:13:11" | 5 | 0 | null | [
"safetensors",
"gpt2",
"custom_code",
"arxiv:2401.06416",
"region:us"
] | null | "2024-11-02T18:58:34" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for *PartialReverse* GPT-2 (without Positional Encodings)
<!-- Provide a quick summary of what the model is/does. -->
This is one model in a collection of models trained on the impossible
languages of [Kallini et al. 2024](https://arxiv.org/abs/2401.06416).
This model is a GPT-2 Small model trained *without positional encodings*
from scratch on the ***PartialReverse***
language. We include a total of 30 checkpoints over the course of
model training, from step 100 to 3000 in increments of 100 steps.
The main branch contains the final checkpoint (3000), and the other
checkpoints are accessible as revisions.

## Model Details
- **Developed by:** Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **GitHub Repository:** https://github.com/jkallini/mission-impossible-language-models
- **Paper:** https://arxiv.org/pdf/2401.06416
## Uses
This artefact is solely intended for the study of language learning
and acquisition in computational models. It should not be
used in any production setting.
## How to Get Started with the Model
Use the code below to get started with the model.
**Important:** This will download our modified GPT-2 code that does
not have absolute positional encodings. If using this model in the
same environment as another GPT-2 model with positional encodings,
load the second model as a `GPT2Model` explicitly.
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model_id = "mission-impossible-lms/partial-reverse-gpt2-no-pos"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Set up the prompt and encode it
prompt = "He clean"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate text
output = model.generate(inputs.input_ids, max_length=20)
# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
By default, the `main` branch of this model repo loads the
last model checkpoint (3000). To access the other checkpoints,
use the `revision` argument:
```python
model = AutoModelForCausalLM.from_pretrained(model_id, revision="checkpoint-500", trust_remote_code=True)
```
This loads the model at checkpoint 500.
## Training Details
### Training Data
This model was trained on the [100M-word BabyLM dataset](https://babylm.github.io/).
Before training, we first transform the dataset into
the corresponding impossible language, as described in
our paper.
### Training Procedure
This model was trained for 3,000 gradient steps with
a batch size of 2^19 tokens. We train with a learning
rate that linearly warms up from 0 to 6e-4 over 300 steps.
## Environmental Impact
- **Hardware Type:** NVIDIA RTX 3090 (24GB) + NVIDIA RTX A6000 (48GB) GPUs.
- **Hours used:** ~24 hours.
## Citation
```bibtex
@inproceedings{kallini-etal-2024-mission,
title = "Mission: Impossible Language Models",
author = "Kallini, Julie and
Papadimitriou, Isabel and
Futrell, Richard and
Mahowald, Kyle and
Potts, Christopher",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.787",
doi = "10.18653/v1/2024.acl-long.787",
pages = "14691--14714",
}
```
## Model Card Authors
Julie Kallini
## Model Card Contact
[email protected] |
happylayers/sc72 | happylayers | "2024-04-28T12:03:23" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-28T12:02:00" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
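As a minimal, untested sketch (the task is assumed from the `text-generation` tag):

```python
from transformers import pipeline

gen = pipeline("text-generation", model="happylayers/sc72")
print(gen("Hello,", max_new_tokens=20)[0]["generated_text"])
```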
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hmatzner/ppo-LunarLander-v2 | hmatzner | "2023-03-07T21:17:37" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-07T21:17:13" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.04 +/- 22.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(repo_id="hmatzner/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF | mradermacher | "2025-03-18T12:26:27" | 198 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:YipingZhang/Meta-Llama-3-8B-finetuned-balanced-v2",
"base_model:quantized:YipingZhang/Meta-Llama-3-8B-finetuned-balanced-v2",
"endpoints_compatible",
"region:us"
] | null | "2025-03-17T19:56:58" | ---
base_model: YipingZhang/Meta-Llama-3-8B-finetuned-balanced-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/YipingZhang/Meta-Llama-3-8B-finetuned-balanced-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
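As a quick, untested example with a recent llama.cpp build (any quant from the table below works; the Q4_K_M filename is shown):

```bash
llama-cli -m Meta-Llama-3-8B-finetuned-balanced-v2.Q4_K_M.gguf -p "The meaning to life and the universe is"
```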
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-finetuned-balanced-v2-GGUF/resolve/main/Meta-Llama-3-8B-finetuned-balanced-v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

*(image: quant quality comparison graph)*
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DarthReca/depth-any-canopy-base | DarthReca | "2024-08-10T06:56:31" | 92 | 1 | transformers | [
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"arxiv:2408.04523",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | depth-estimation | "2024-07-29T13:09:13" | ---
license: apache-2.0
---
# Depth Any Canopy Base
<!-- Provide a quick summary of what the model is/does. -->
This is the base version of Depth Any Canopy, presented in the [Depth Any Canopy paper](https://arxiv.org/abs/2408.04523). A [small version](https://huggingface.co/DarthReca/depth-any-canopy-small) is also available.
## Model Details
<!-- Provide a longer summary of what this model is. -->
The model is Depth-Anything-Base finetuned for canopy height estimation on a filtered set of [EarthView](https://huggingface.co/datasets/satellogic/EarthView).
- **License:** Apache 2.0
- **Finetuned from model:** [Depth-Anything-Base](https://huggingface.co/depth-anything/Depth-Anything-V2-Base-hf)
## Uses and Limitations
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is designed for NEON aerial imagery, whose coverage is limited to the US; we cannot guarantee its generalizability to other areas of the globe.
The images cover only the RGB channels; hyperspectral imagery was not studied.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("depth-estimation", model="DarthReca/depth-any-canopy-base")
# Load model directly
from transformers import AutoModelForDepthEstimation
model = AutoModelForDepthEstimation.from_pretrained("DarthReca/depth-any-canopy-base")
```
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Carbon Emitted:** 0.14 kgCO2
Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
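For readers who want to reproduce such an estimate, the underlying arithmetic is simply power x time x grid carbon intensity; the numbers below are illustrative assumptions, not the values behind the figure above:

```python
# emissions (kgCO2eq) = GPU power (kW) * time (h) * carbon intensity (kgCO2/kWh)
gpu_power_kw = 0.30       # assumed ~300 W GPU draw
hours = 2.0               # assumed training time
carbon_intensity = 0.25   # assumed grid intensity; varies by region
print(f"{gpu_power_kw * hours * carbon_intensity:.2f} kgCO2eq")  # 0.15
```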
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{cambrin2024depthcanopyleveragingdepth,
title={Depth Any Canopy: Leveraging Depth Foundation Models for Canopy Height Estimation},
author={Daniele Rege Cambrin and Isaac Corley and Paolo Garza},
year={2024},
eprint={2408.04523},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.04523},
}
``` |
MarsupialAI/Gemmasutra-Mini-2B-v1_iMatrix_GGUF | MarsupialAI | "2024-08-03T22:20:26" | 479 | 5 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-03T21:50:21" | ---
license: other
---
iMatrix GGUFs for https://huggingface.co/TheDrummer/Gemmasutra-Mini-2B-v1
The importance matrix (iMatrix) was generated using Kalomaze's groups_merged.txt
disi-unibo-nlp/mistral-SFT-medqa-medmcqa-triples-cot-2bs-2acc-3ep | disi-unibo-nlp | "2024-11-07T15:03:35" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-07T14:30:02" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** disi-unibo-nlp
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
buhologa/espepinedo | buhologa | "2025-01-17T18:54:53" | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-17T17:59:27" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Espepinedo
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('buhologa/espepinedo', weight_name='lora.safetensors')
# Include the trigger word `TOK` in your prompt to activate the LoRA
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
YakovElm/Apache15SetFitModel_balance_ratio_Half | YakovElm | "2023-06-01T01:23:34" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-06-01T01:22:58" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Apache15SetFitModel_balance_ratio_Half
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Apache15SetFitModel_balance_ratio_Half")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about each model, its performance, its intended uses, and more. This dataset is updated daily and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
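For example, a minimal sketch using the huggingface_hub client library (the model ID is only an illustration):
```python
from huggingface_hub import ModelCard

# Load a single model card straight from the Hub; the ID is only an example.
card = ModelCard.load("bert-base-uncased")
print(card.data)  # parsed YAML metadata header
print(card.text)  # markdown body of the card
```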
Source Data
The source data is the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
Data Collection and Processing
The data is downloaded daily using a cron job.
Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
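As an illustrative sketch (the model ID is example-only), that information can be retrieved via the Hub API like so:
```python
from huggingface_hub import HfApi

api = HfApi()
# The model ID below is only an example; `author` is the namespace that owns the repo.
info = api.model_info("bert-base-uncased")
print(info.author)
```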
Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact