pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0–18.3M) | metadata (stringlengths, 2–1.07B) | id (stringlengths, 5–122) | last_modified (null) | tags (listlengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25)
---|---|---|---|---|---|---|---|---
null |
transformers
|
# DavidAU/Llama-3-Smaug-8B-Q8_0-GGUF
This model was converted to GGUF format from [`abacusai/Llama-3-Smaug-8B`](https://huggingface.co/abacusai/Llama-3-Smaug-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abacusai/Llama-3-Smaug-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-Smaug-8B-Q8_0-GGUF --model llama-3-smaug-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-Smaug-8B-Q8_0-GGUF --model llama-3-smaug-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-smaug-8b.Q8_0.gguf -n 128
```
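Alternatively, a minimal Python sketch using the llama-cpp-python bindings (assumptions: the package is installed via `pip install llama-cpp-python` and the GGUF file has already been downloaded from this repo):
```python
from llama_cpp import Llama

# Assumes llama-3-smaug-8b.Q8_0.gguf has been downloaded to the working directory.
llm = Llama(model_path="llama-3-smaug-8b.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```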
|
{"license": "llama2", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["aqua_rat", "microsoft/orca-math-word-problems-200k", "m-a-p/CodeFeedback-Filtered-Instruction", "anon8231489123/ShareGPT_Vicuna_unfiltered"]}
|
DavidAU/Llama-3-Smaug-8B-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:aqua_rat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:08:23+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
anderswl/trocr_suara_tps
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:08:40+00:00
|
text-classification
|
transformers
|
{}
|
mllm-dev/gpt2_f_experiment_drug_data_large_test
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:09:30+00:00
|
|
text-to-image
|
diffusers
|
# Pixel Art XL
## Consider supporting further research on [Patreon](https://www.patreon.com/user?u=29466374) or [Twitter](https://twitter.com/nerijs)


Downscale 8 times to get pixel-perfect images (use nearest-neighbor interpolation).
Use a fixed VAE to avoid artifacts (the 0.9 VAE or the fp16-fix VAE).
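A minimal sketch of the downscale step with Pillow (file names are hypothetical; assumes a 1024x1024 SDXL output):
```python
from PIL import Image

img = Image.open("pixel_art_raw.png")  # hypothetical 1024x1024 SDXL output
# Downscale 8x with nearest-neighbor, then scale back up for pixel-perfect display.
small = img.resize((img.width // 8, img.height // 8), Image.NEAREST)
pixel_perfect = small.resize((img.width, img.height), Image.NEAREST)
pixel_perfect.save("pixel_art_clean.png")
```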
### Need more performance?
Use it with an LCM LoRA!
Use 8 steps and a guidance scale of 1.5.
A LoRA strength of 1.2 works better for Pixel Art XL.
```python
from diffusers import DiffusionPipeline, LCMScheduler
import torch

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lcm_lora_id = "latent-consistency/lcm-lora-sdxl"

pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(lcm_lora_id, adapter_name="lora")
pipe.load_lora_weights("./pixel-art-xl.safetensors", adapter_name="pixel")
pipe.set_adapters(["lora", "pixel"], adapter_weights=[1.0, 1.2])
pipe.to(device="cuda", dtype=torch.float16)

prompt = "pixel, a cute corgi"
negative_prompt = "3d render, realistic"

num_images = 9
for i in range(num_images):
    img = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=8,
        guidance_scale=1.5,
    ).images[0]
    img.save(f"lcm_lora_{i}.png")
```
### Tips:
- Don't use the refiner
- Works great with only 1 text encoder
- No style prompt required
- No trigger keyword required
- Works great with isometric and non-isometric styles
- Works with SDXL 0.9 and 1.0
#### Changelog
v1: Initial release
|
{"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "pixel art", "widget": [{"text": "pixel art, a cute corgi, simple, flat colors"}]}
|
seawolf2357/pixel-art-xl
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-24T03:10:33+00:00
|
null | null |
# cybercheems2077/Llama-3-8B-16K-Q4_K_M-GGUF
This model was converted to GGUF format from [`mattshumer/Llama-3-8B-16K`](https://huggingface.co/mattshumer/Llama-3-8B-16K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mattshumer/Llama-3-8B-16K) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cybercheems2077/Llama-3-8B-16K-Q4_K_M-GGUF --model llama-3-8b-16k.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cybercheems2077/Llama-3-8B-16K-Q4_K_M-GGUF --model llama-3-8b-16k.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-16k.Q4_K_M.gguf -n 128
```
|
{"tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Yukang/LongAlpaca-16k-length"]}
|
cybercheems2077/Llama-3-8B-16K-Q4_K_M-GGUF
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Yukang/LongAlpaca-16k-length",
"region:us"
] | null |
2024-04-24T03:10:40+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
souvik0306/test_quant_merge_2
| null |
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T03:10:46+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) as a base.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: 0.5
weight: 0.75
- model: microsoft/rho-math-1b-v0.1
parameters:
density: 0.75
weight: 0.5
merge_method: ties
base_model: appvoid/palmer-003
parameters:
normalize: true
int8_mask: true
dtype: float16
```
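A minimal sketch of loading the merged model with transformers (assuming the published repo id appvoid/palmer-2-experts):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "appvoid/palmer-2-experts"  # assumed repo id for this merge
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")
inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```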
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["microsoft/rho-math-1b-v0.1", "appvoid/palmer-003", "vihangd/DopeyTinyLlama-1.1B-v1"]}
|
appvoid/palmer-2-experts
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:microsoft/rho-math-1b-v0.1",
"base_model:appvoid/palmer-003",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:10:53+00:00
|
null | null |
{"license": "unknown"}
|
thatMLlearner/predictrent
| null |
[
"license:unknown",
"region:us"
] | null |
2024-04-24T03:12:22+00:00
|
|
null |
transformers
|
# DavidAU/Llama3-8B-OpenHermes-DPO-Q8_0-GGUF
This model was converted to GGUF format from [`Muhammad2003/Llama3-8B-OpenHermes-DPO`](https://huggingface.co/Muhammad2003/Llama3-8B-OpenHermes-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Muhammad2003/Llama3-8B-OpenHermes-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama3-8B-OpenHermes-DPO-Q8_0-GGUF --model llama3-8b-openhermes-dpo.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama3-8B-OpenHermes-DPO-Q8_0-GGUF --model llama3-8b-openhermes-dpo.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama3-8b-openhermes-dpo.Q8_0.gguf -n 128
```
|
{"license": "apache-2.0", "library_name": "transformers", "tags": ["DPO", "Llama3-8B", "llama-cpp", "gguf-my-repo"], "datasets": "mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha"}
|
DavidAU/Llama3-8B-OpenHermes-DPO-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"DPO",
"Llama3-8B",
"llama-cpp",
"gguf-my-repo",
"dataset:mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:12:31+00:00
|
null | null |
{}
|
idoru/Llama-3-8B-Myrus-kto-lora
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-24T03:13:04+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-open-BERTmodel
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5932
- Accuracy: 0.6945
- F1: 0.6777
- Precision: 0.7067
- Recall: 0.6510
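For quick inference, a minimal sketch with the transformers pipeline (assuming the repo id hsiuping/finetuning-open-BERTmodel; label names depend on the unspecified training data):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="hsiuping/finetuning-open-BERTmodel")
print(clf("This product works exactly as advertised."))
# e.g. [{'label': 'LABEL_1', 'score': 0.87}] -- labels depend on the training data
```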
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "bert-base-uncased", "model-index": [{"name": "finetuning-open-BERTmodel", "results": []}]}
|
hsiuping/finetuning-open-BERTmodel
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:13:40+00:00
|
null | null |
# DavidAU/Llama-3-Unholy-8B-e4-Q8_0-GGUF
This model was converted to GGUF format from [`Undi95/Llama-3-Unholy-8B-e4`](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-Unholy-8B-e4-Q8_0-GGUF --model llama-3-unholy-8b-e4.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-Unholy-8B-e4-Q8_0-GGUF --model llama-3-unholy-8b-e4.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-unholy-8b-e4.Q8_0.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/Llama-3-Unholy-8B-e4-Q8_0-GGUF
| null |
[
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-24T03:14:11+00:00
|
null | null |
{}
|
Eunhyecos/Aigenshin
| null |
[
"region:us"
] | null |
2024-04-24T03:14:33+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_2ep
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.5932
- eval_runtime: 4.0164
- eval_samples_per_second: 49.796
- eval_steps_per_second: 6.224
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
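To run inference with this adapter, a minimal sketch using PEFT (assumes access to the gated meta-llama/Llama-2-7b-chat-hf base model; the prompt is a hypothetical example):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_2ep"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads base model + LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
inputs = tokenizer("Write a short biography of a cardiologist:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```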
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_2ep", "results": []}]}
|
mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_2ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null |
2024-04-24T03:14:47+00:00
|
null |
transformers
|
# DavidAU/Llama-3-NeuralPaca-8b-Q8_0-GGUF
This model was converted to GGUF format from [`NeuralNovel/Llama-3-NeuralPaca-8b`](https://huggingface.co/NeuralNovel/Llama-3-NeuralPaca-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NeuralNovel/Llama-3-NeuralPaca-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-NeuralPaca-8b-Q8_0-GGUF --model llama-3-neuralpaca-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-NeuralPaca-8b-Q8_0-GGUF --model llama-3-neuralpaca-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-neuralpaca-8b.Q8_0.gguf -n 128
```
|
{"language": ["en"], "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-repo"], "datasets": ["tatsu-lab/alpaca"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "thumbnail": "https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/njn9I-gHjyq0lMyjF0lZF.jpeg"}
|
DavidAU/Llama-3-NeuralPaca-8b-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:tatsu-lab/alpaca",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:15:23+00:00
|
text-classification
|
transformers
|
{}
|
scott-routledge/bert-question-answer-classifier
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:15:45+00:00
|
|
null | null |
{"language": ["zh"], "license": "apache-2.0", "datasets": ["HuggingFaceM4/the_cauldron"]}
|
simple-sf/sf-test-model-1.0.0
| null |
[
"zh",
"dataset:HuggingFaceM4/the_cauldron",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T03:15:45+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
This model was trained with PPO; the reward model was a positive-sentiment BERT classifier scoring the outputs of a GPT-2 generator.
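A minimal generation sketch (assuming the repo id vincentoh/gpt2-imdb-pos-v2):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="vincentoh/gpt2-imdb-pos-v2")
# The PPO reward favored positive sentiment, so completions should skew positive.
print(generator("This movie was", max_new_tokens=40)[0]["generated_text"])
```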
|
{"library_name": "transformers", "tags": []}
|
vincentoh/gpt2-imdb-pos-v2
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:16:12+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
souvik0306/test_quant_merge_3
| null |
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T03:17:20+00:00
|
unconditional-image-generation
|
diffusers
|
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kmpartner/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
|
kmpartner/sd-class-butterflies-32
| null |
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null |
2024-04-24T03:18:01+00:00
|
null | null |
{}
|
Pragati2002/Virtual_try_on
| null |
[
"region:us"
] | null |
2024-04-24T03:18:35+00:00
|
|
text-generation
|
transformers
|
# merged
This is the unquantized second version of SnowyRP, built on Llama 3 with some additional models merged in; an EXL2 quant is linked below.
[EXL2](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1-B-4.5bit)
## Merge Details
For this model I used the most uncensored and smartest finetunes of Llama 3; since Llama 3 was trained on the largest amount of tokens I know of, I expect it to have more than enough of a knowledge base.
This model is under both the Llama 3 license and a few other licenses that only affect commercial use, but I am not a lawyer, so use it in commercial settings at your own risk.
## Model Use:
Here is a modified Alpaca prompt I use with this model:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Take the role of {{char}} in a play that leaves a lasting impression on {{user}}. Write {{char}}'s next reply. Never skip or gloss over {{char}}’s actions. Progress the scene at a naturally slow pace. Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions.
```
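A minimal sketch of filling this template and generating with transformers (assuming the repo id Masterjp123/Llama-3-SnowyRP-8B-V1-B; the {{char}}/{{user}} placeholders are filled by hand here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Masterjp123/Llama-3-SnowyRP-8B-V1-B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# "### Response:" follows Alpaca convention; the card's template ends at the instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: Take the role of Aria in a play that leaves a lasting "
    "impression on Sam. Write Aria's next reply.\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```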
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
* [Masterjp123/Llama-3-SnowyRP-8B-V1](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: kuotient/Meta-Llama-3-8B-Instruct
dtype: float16
merge_method: ties
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 32]
model: Masterjp123/Llama-3-SnowyRP-8B-V1
parameters:
density: [1.0, 0.7, 0.1]
weight: 1.0
- layer_range: [0, 32]
model: cgato/L3-TheSpice-8b-v0.1.3
parameters:
density: 0.5
weight: [0.0, 0.3, 0.7, 1.0]
- layer_range: [0, 32]
model: Sao10K/L3-Solana-8B-v1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0.0
- layer_range: [0, 32]
model: kuotient/Meta-Llama-3-8B-Instruct
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Sao10K/L3-Solana-8B-v1", "Masterjp123/Llama-3-SnowyRP-8B-V1", "kuotient/Meta-Llama-3-8B-Instruct", "cgato/L3-TheSpice-8b-v0.1.3"]}
|
Masterjp123/Llama-3-SnowyRP-8B-V1-B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Sao10K/L3-Solana-8B-v1",
"base_model:Masterjp123/Llama-3-SnowyRP-8B-V1",
"base_model:kuotient/Meta-Llama-3-8B-Instruct",
"base_model:cgato/L3-TheSpice-8b-v0.1.3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:18:38+00:00
|
text-generation
|
transformers
|
# Mermaid-Llama-3-5B
Introducing Mermaid-Llama-3-5B, a language model designed for Python code understanding and for crafting captivating story flow maps.

## Key Features
1. **Code Understanding:**
- Masters Python intricacies with finesse.
- Generates clear and accurate Mermaid Diagram Flow Charts.
- Ideal for developers seeking visual representations of their code logic.
2. **Storytelling Capabilities:**
- Converts narrative inputs into captivating Mermaid Diagrams.
- Maps character interactions, plot developments, and narrative arcs.
3. **Unmatched Performance:**
- Surpasses GPT-4 in generating well-organized Mermaid Diagrams.
4. **Training Insights:**
- Trained on a diverse dataset, including 800 unique, hand-curated Mermaid Graph examples utilizing 478 complete Python programs.
- Exhibits emergent properties in story-to-flow map translations and step-by-step instruction flow maps.
## Collaboration
Interested in enhancing Mermaid's capabilities? Contact [email protected] for collaboration opportunities.
## Example Use Cases
- **Retrieval-Augmented Generation (RAG):** Utilize Mermaid-Llama-3-8B to create condensed knowledge graphs. This model excels at generating flow diagrams that enhance the retrieval process. These knowledge graphs are stored in a vector database, which allows quick and efficient retrieval of contextually relevant information. When a query is received, the system retrieves a pertinent knowledge graph and appends it to the model's context. This enriched context enables Mermaid-Llama-3-8B to deliver more accurate and nuanced responses, which is particularly beneficial in applications requiring deep, context-aware interactions, such as sophisticated Q&A systems, dynamic data analysis, and complex decision-making tasks (a toy sketch of the retrieval step follows this list).
- **Code Documentation:** Automatic visual flow charts from Python code.
- **Storyboarding:** Visually appealing diagrams for storytelling.
- **Project Planning:** Visual project flow maps for effective team communication.
- **Learning Python:** Helps students visually understand Python code structures.
- **Game Design:** Visualizing game storylines for coherent narrative structure.
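A toy sketch of the retrieval step described above, with sklearn TF-IDF standing in for a real embedding model and vector database (all names and graphs are hypothetical):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "vector database": descriptions mapped to stored Mermaid knowledge graphs.
graphs = {
    "user login flow": "graph TD; User-->LoginForm; LoginForm-->AuthServer; AuthServer-->Session;",
    "checkout flow": "graph TD; Cart-->Payment; Payment-->Confirmation;",
}
vectorizer = TfidfVectorizer().fit(graphs.keys())
index = vectorizer.transform(graphs.keys())

def retrieve(query: str) -> str:
    """Return the stored Mermaid graph closest to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), index)[0]
    return list(graphs.values())[sims.argmax()]

context = retrieve("how does a user log in?")
prompt = f"Context graph:\n{context}\n\nAnswer the question using the graph above."
```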
## Proof of Concept
Stay tuned for the release of the VSCode Extension that displays the Live Flow Map every time a user stops typing for more than 10 seconds.
## Training Specifications
- **LoRA Rank:** 2048
- **LoRA Alpha:** 4096
- **Batch Size:** 1
- **Micro Batch Size:** 1
- **Cutoff Length:** 4096
- **Save every n steps:** 1000
- **Epochs:** 3
- **Learning Rate:** 1e-6
- **LR Scheduler:** Cosine
**Target Modules:**
- Enable q_proj
- Enable v_proj
- Enable k_proj
- Enable o_proj
- Enable gate_proj
- Enable down_proj
- Enable up_proj
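The settings above correspond roughly to a PEFT LoraConfig like the following (a sketch under the assumption that PEFT was the training stack; the card does not say):
```python
from peft import LoraConfig

# Mirrors the card's stated LoRA hyperparameters; the training framework is an assumption.
lora_config = LoraConfig(
    r=2048,
    lora_alpha=4096,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
)
```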
## Getting Started
Start by downloading one of my models.

Load the model.

Use my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.
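A minimal generation sketch (assuming the repo id TroyDoesAI/Mermaid-Llama-3-5B-Pruned; the prompt shown is a hypothetical stand-in for the author's template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TroyDoesAI/Mermaid-Llama-3-5B-Pruned"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Hypothetical prompt; treat the model as auto-complete, not chat (see the note below).
prompt = "Create a Mermaid flow chart for this Python function:\n\ndef double(x):\n    return x * 2\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```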

Here we open the VLLM GUI program, with Mermaid-Llama-8B still loaded in VRAM, to compare the generated flow diagram against the actual program and to showcase the lightweight capabilities of small models on consumer hardware.

## More on my VLLM class and inference GUI: https://github.com/Troys-Code/VLLM

---
Note: This model should be treated as an auto-complete model. Do not try talking to it in chat; you will get garbage, as those layers have been pruned and replaced. That is all you will hear of my secret sauce for training on small (<1000-entry) datasets.
|
{"license": "cc-by-4.0"}
|
TroyDoesAI/Mermaid-Llama-3-5B-Pruned
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:19:27+00:00
|
null | null |
{}
|
danielcasas32/distilroberta-base-mrpc-glue-daniel-casas
| null |
[
"region:us"
] | null |
2024-04-24T03:19:34+00:00
|
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/palmer-003
- model: microsoft/rho-math-1b-v0.1
merge_method: slerp
base_model: appvoid/palmer-003
dtype: float16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: palmer-003 at the input & output layers, rho-math in the middle
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["microsoft/rho-math-1b-v0.1", "appvoid/palmer-003"]}
|
appvoid/palmer-math-v-curve
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:microsoft/rho-math-1b-v0.1",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:19:42+00:00
|
image-classification
|
transformers
|
{}
|
ssdv/my_awesome_food_model
| null |
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:20:01+00:00
|
|
null | null |
{"license": "mit"}
|
NicoMesa/ml_experiments
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T03:20:01+00:00
|
|
text-generation
|
transformers
|
# merged
This is the second version of SnowyRP, built on Llama 3 with some additional models merged in; the unquantized BF16 weights are linked below.
[BF16](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1-B)
## Merge Details
For this model I used the most uncensored and smartest finetunes of Llama 3; since Llama 3 was trained on the largest amount of tokens I know of, I expect it to have more than enough of a knowledge base.
This model is under both the Llama 3 license and a few other licenses that only affect commercial use, but I am not a lawyer, so use it in commercial settings at your own risk.
## Model Use:
Here is a modified Alpaca prompt I use with this model:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Take the role of {{char}} in a play that leaves a lasting impression on {{user}}. Write {{char}}'s next reply. Never skip or gloss over {{char}}’s actions. Progress the scene at a naturally slow pace. Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions.
```
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
* [Masterjp123/Llama-3-SnowyRP-8B-V1](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: kuotient/Meta-Llama-3-8B-Instruct
dtype: float16
merge_method: ties
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 32]
model: Masterjp123/Llama-3-SnowyRP-8B-V1
parameters:
density: [1.0, 0.7, 0.1]
weight: 1.0
- layer_range: [0, 32]
model: cgato/L3-TheSpice-8b-v0.1.3
parameters:
density: 0.5
weight: [0.0, 0.3, 0.7, 1.0]
- layer_range: [0, 32]
model: Sao10K/L3-Solana-8B-v1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0.0
- layer_range: [0, 32]
model: kuotient/Meta-Llama-3-8B-Instruct
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Sao10K/L3-Solana-8B-v1", "Masterjp123/Llama-3-SnowyRP-8B-V1", "kuotient/Meta-Llama-3-8B-Instruct", "cgato/L3-TheSpice-8b-v0.1.3"]}
|
Masterjp123/Llama-3-SnowyRP-8B-V1-B-4.5bit
| null |
[
"transformers",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Sao10K/L3-Solana-8B-v1",
"base_model:Masterjp123/Llama-3-SnowyRP-8B-V1",
"base_model:kuotient/Meta-Llama-3-8B-Instruct",
"base_model:cgato/L3-TheSpice-8b-v0.1.3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:20:06+00:00
|
null | null |
# DavidAU/Llama-3-SLERP-8B-Q8_0-GGUF
This model was converted to GGUF format from [`mlabonne/Llama-3-SLERP-8B`](https://huggingface.co/mlabonne/Llama-3-SLERP-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mlabonne/Llama-3-SLERP-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-SLERP-8B-Q8_0-GGUF --model llama-3-slerp-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-SLERP-8B-Q8_0-GGUF --model llama-3-slerp-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-slerp-8b.Q8_0.gguf -n 128
```
|
{"license": "other", "tags": ["merge", "mergekit", "lazymergekit", "llama-cpp", "gguf-my-repo"], "base_model": ["meta-llama/Meta-Llama-3-8B", "meta-llama/Meta-Llama-3-8B-Instruct"]}
|
DavidAU/Llama-3-SLERP-8B-Q8_0-GGUF
| null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null |
2024-04-24T03:20:40+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_3ep
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.4333
- eval_runtime: 4.017
- eval_samples_per_second: 49.788
- eval_steps_per_second: 6.223
- epoch: 2.9952
- step: 234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_3ep", "results": []}]}
|
mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_3ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null |
2024-04-24T03:21:25+00:00
|
null | null |
{}
|
kmpartner/sd-class-butterflies-64
| null |
[
"region:us"
] | null |
2024-04-24T03:23:32+00:00
|
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/griffin-v0.01-c3t-8layer-simplewiki
- griffin / recurrent_gemma architecture
- Claude 3 tokenizer (wrapped as an HF gpt2 tokenizer)
## Model description
This model is a fine-tuned version of [./griffin-1024-c3t-8layer](https://huggingface.co/./griffin-1024-c3t-8layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1928
- Accuracy: 0.4084
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80085
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 13.2525 | 0.2548 | 100 | 11.9768 | 0.0131 |
| 8.8873 | 0.5095 | 200 | 8.0127 | 0.0357 |
| 7.2457 | 0.7643 | 300 | 6.4508 | 0.0512 |
| 6.3152 | 1.0190 | 400 | 5.6163 | 0.0460 |
| 5.5586 | 1.2738 | 500 | 4.7645 | 0.3650 |
| 5.2936 | 1.5285 | 600 | 4.3919 | 0.3934 |
| 4.8839 | 1.7833 | 700 | 4.1928 | 0.4084 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["pszemraj/simple_wikipedia_LM"], "metrics": ["accuracy"]}
|
pszemraj/griffin-v0.01-c3t-8layer-simplewiki
| null |
[
"transformers",
"safetensors",
"recurrent_gemma",
"text-generation",
"generated_from_trainer",
"en",
"dataset:pszemraj/simple_wikipedia_LM",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:24:17+00:00
|
null | null |
# DavidAU/Llama-3-Unholy-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Undi95/Llama-3-Unholy-8B`](https://huggingface.co/Undi95/Llama-3-Unholy-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/Llama-3-Unholy-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-Unholy-8B-Q8_0-GGUF --model llama-3-unholy-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-Unholy-8B-Q8_0-GGUF --model llama-3-unholy-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-unholy-8b.Q8_0.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/Llama-3-Unholy-8B-Q8_0-GGUF
| null |
[
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-24T03:25:39+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [microsoft/rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: microsoft/rho-math-1b-interpreter-v0.1
- model: appvoid/palmer-003
merge_method: slerp
base_model: microsoft/rho-math-1b-interpreter-v0.1
dtype: float16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: rho-math at the input & output layers, palmer-003 in the middle
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["appvoid/palmer-003", "microsoft/rho-math-1b-interpreter-v0.1"]}
|
appvoid/palmer-math-v-curve-2
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/palmer-003",
"base_model:microsoft/rho-math-1b-interpreter-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:25:57+00:00
|
null |
transformers
|
# DavidAU/llama-3-8b-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/llama-3-8b-Instruct`](https://huggingface.co/unsloth/llama-3-8b-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/llama-3-8b-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/llama-3-8b-Instruct-Q8_0-GGUF --model llama-3-8b-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/llama-3-8b-Instruct-Q8_0-GGUF --model llama-3-8b-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-instruct.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["unsloth", "transformers", "llama", "llama-3", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/llama-3-8b-Instruct-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"unsloth",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:27:17+00:00
|
text-generation
|
transformers
|
{}
|
maddi99/gpt2_GPTQ
| null |
[
"transformers",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T03:27:19+00:00
|
|
null | null |
{}
|
charles828/gemma-medical_qa-Finetune
| null |
[
"region:us"
] | null |
2024-04-24T03:28:01+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_4ep
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.3987
- eval_runtime: 4.0103
- eval_samples_per_second: 49.871
- eval_steps_per_second: 6.234
- epoch: 3.9936
- step: 312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
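For reference, a minimal TRL + PEFT sketch consistent with the hyperparameters above; the dataset file, its `text` column, and the LoRA rank/alpha are illustrative assumptions, not values taken from this card:
```python
# Minimal SFT + LoRA sketch mirroring the hyperparameters above.
# The dataset file, "text" column, and LoRA rank/alpha are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("json", data_files="medical_bios.json", split="train")  # hypothetical file

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed rank/alpha

args = TrainingArguments(
    output_dir="llama2-medical-lora",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,  # 2 * 32 = total train batch size of 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=0,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-chat-hf",
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes the dataset exposes a "text" column
    peft_config=peft_config,
)
trainer.train()
```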
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_4ep", "results": []}]}
|
mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_4ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null |
2024-04-24T03:28:12+00:00
|
text-generation
|
transformers
|
# WebSquareAI-Instruct-llama-3-8B-v0.5.37
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Base Model**
[meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
---
|
{"language": ["en"], "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation"}
|
kimwooglae/WebSquareAI-Instruct-llama-3-8B-v0.5.37
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:28:50+00:00
|
null |
transformers
|
# DavidAU/llama-3-8b-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/llama-3-8b`](https://huggingface.co/unsloth/llama-3-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/llama-3-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/llama-3-8b-Q8_0-GGUF --model llama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/llama-3-8b-Q8_0-GGUF --model llama-3-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["unsloth", "transformers", "llama", "llama-3", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/llama-3-8b-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"unsloth",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:29:01+00:00
|
null | null |
{"license": "apache-2.0"}
|
wifibaby4u/Guru-Llama-3-8B
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T03:29:48+00:00
|
|
null |
transformers
|
# DavidAU/DPO-llama3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`akkky02/DPO-llama3-8B`](https://huggingface.co/akkky02/DPO-llama3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/akkky02/DPO-llama3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/DPO-llama3-8B-Q8_0-GGUF --model dpo-llama3-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/DPO-llama3-8B-Q8_0-GGUF --model dpo-llama3-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m dpo-llama3-8b.Q8_0.gguf -n 128
```
|
{"library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]}
|
DavidAU/DPO-llama3-8B-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:30:02+00:00
|
image-classification
|
transformers
|
{}
|
Rektyu/vit-base-patch16-224-in21k-finetuned-lora-food101
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:30:14+00:00
|
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_sample2_iter_2
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_1](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
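A minimal TRL `DPOTrainer` sketch matching the setup above; the preference file, its column layout, and `beta` are assumptions rather than values from this card:
```python
# Minimal DPO sketch; the dataset file and beta are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumes a preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("json", data_files="preferences.json", split="train")

args = TrainingArguments(
    output_dir="dpo-iter-2",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # 8 devices * 8 * 4 = total batch of 256
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # TRL builds a frozen reference copy when None
    args=args,
    beta=0.1,         # assumed KL-penalty strength
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```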
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_1", "model-index": [{"name": "0.001_ablation_4iters_bs256_sample2_iter_2", "results": []}]}
|
ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:30:55+00:00
|
text-classification
|
transformers
|
{}
|
addykan/canine-virus-classification-1k
| null |
[
"transformers",
"pytorch",
"canine",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:31:19+00:00
|
|
null | null |
{}
|
wh1tewhale/dysarthria_whisper_pretrained
| null |
[
"region:us"
] | null |
2024-04-24T03:31:30+00:00
|
|
text-generation
|
transformers
|
# DavidAU/Llama-3-8B-Instruct-DPO-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1`](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-8B-Instruct-DPO-v0.1-Q8_0-GGUF --model llama-3-8b-instruct-dpo-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-8B-Instruct-DPO-v0.1-Q8_0-GGUF --model llama-3-8b-instruct-dpo-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-instruct-dpo-v0.1.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["axolotl", "finetune", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo"], "datasets": ["mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha"], "model_name": "Llama-3-8B-Instruct-DPO-v0.1", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi"}
|
DavidAU/Llama-3-8B-Instruct-DPO-v0.1-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"axolotl",
"finetune",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null |
2024-04-24T03:31:39+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-04-24-00-09-45
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the ali_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "train_2024-04-24-00-09-45", "results": []}]}
|
ramimu/ali-ai
| null |
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:other",
"region:us"
] | null |
2024-04-24T03:32:12+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: appvoid/palmer-003
        layer_range: [0, 21]
      - model: microsoft/rho-math-1b-interpreter-v0.1
        layer_range: [0, 21]
merge_method: slerp
base_model: microsoft/rho-math-1b-interpreter-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
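To make the `t` schedule above concrete, here is a self-contained sketch of spherical interpolation between two weight tensors. It is a simplification: mergekit's real implementation applies the `self_attn`/`mlp` filters per parameter and handles more edge cases.
```python
# Simplified SLERP between two weight tensors. t close to 0 keeps the
# base model (rho-math here); t close to 1 keeps the other model
# (palmer-003). The YAML's value lists spread t across the layer stack.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel tensors: plain lerp is fine
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

w_base, w_other = torch.randn(4, 4), torch.randn(4, 4)
merged = slerp(w_base, w_other, t=0.5)
```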
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["microsoft/rho-math-1b-interpreter-v0.1", "appvoid/palmer-003"]}
|
appvoid/palmer-math-slerp
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:microsoft/rho-math-1b-interpreter-v0.1",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:32:51+00:00
|
null | null |
{}
|
mekaneeky/test_miner_1
| null |
[
"region:us"
] | null |
2024-04-24T03:32:55+00:00
|
|
null | null |
{}
|
katakd/wavlm_ctc_vietnamese
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-24T03:33:12+00:00
|
|
null | null |
{}
|
mekaneeky/test_miner_2
| null |
[
"region:us"
] | null |
2024-04-24T03:33:13+00:00
|
|
null | null |
{}
|
mekaneeky/test_miner_3
| null |
[
"region:us"
] | null |
2024-04-24T03:33:29+00:00
|
|
null | null |
# DavidAU/Llama-3-11.5B-V2-Q8_0-GGUF
This model was converted to GGUF format from [`Replete-AI/Llama-3-11.5B-V2`](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Llama-3-11.5B-V2-Q8_0-GGUF --model llama-3-11.5b-v2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Llama-3-11.5B-V2-Q8_0-GGUF --model llama-3-11.5b-v2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-11.5b-v2.Q8_0.gguf -n 128
```
|
{"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license/"}
|
DavidAU/Llama-3-11.5B-V2-Q8_0-GGUF
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null |
2024-04-24T03:33:46+00:00
|
null | null |
{}
|
mekaneeky/test_miner_4
| null |
[
"region:us"
] | null |
2024-04-24T03:33:49+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_5ep
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.3942
- eval_runtime: 4.0157
- eval_samples_per_second: 49.805
- eval_steps_per_second: 6.226
- epoch: 4.992
- step: 390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_5ep", "results": []}]}
|
mohsenfayyaz/Llama-2-7b-chat-hf_medical_bios_5000_Lora_lr1e-5_5ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null |
2024-04-24T03:34:47+00:00
|
null | null |
{"license": "llama3"}
|
weif/witer
| null |
[
"license:llama3",
"region:us"
] | null |
2024-04-24T03:35:05+00:00
|
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: appvoid/palmer-003
        layer_range: [0, 22]
      - model: microsoft/rho-math-1b-interpreter-v0.1
        layer_range: [0, 22]
merge_method: slerp
base_model: microsoft/rho-math-1b-interpreter-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["microsoft/rho-math-1b-interpreter-v0.1", "appvoid/palmer-003"]}
|
appvoid/palmer-math-slerp-2
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:microsoft/rho-math-1b-interpreter-v0.1",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:36:10+00:00
|
text-generation
|
transformers
|
# Llama-3-8B-16K-GGUF
- This is a quantized version of [mattshumer/Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K) created using llama.cpp
# Model Description
This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the `Yukang/LongAlpaca-16k-length` dataset.
`rope_theta` was set to `1000000.0`. Trained with Axolotl.
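The context-extension settings are easy to verify against the source checkpoint's config; a quick check (the expected values follow from the description above, and the repo may require accepting its license first):
```python
# Sanity-check the extended-context settings described above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mattshumer/Llama-3-8B-16K")
print(config.rope_theta)               # expected: 1000000.0
print(config.max_position_embeddings)  # expected: 16384 for a 16K context
```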
|
{"library_name": "transformers", "datasets": ["Yukang/LongAlpaca-16k-length"], "pipeline_tag": "text-generation", "base_model": "mattshumer/Llama-3-8B-16K"}
|
QuantFactory/Llama-3-8B-16K-GGUF
| null |
[
"transformers",
"gguf",
"text-generation",
"dataset:Yukang/LongAlpaca-16k-length",
"base_model:mattshumer/Llama-3-8B-16K",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:36:29+00:00
|
null | null |
{}
|
charles828/gemma-medical-qa-v1
| null |
[
"region:us"
] | null |
2024-04-24T03:37:51+00:00
|
|
null | null |
{"license": "openrail"}
|
wkthdd2/Travis
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T03:39:01+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
charles828/merged_model
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:41:45+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ljcnju/CodeLlama-7b-ForDefect-Detection-Lora-weights
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:42:08+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
abhijithgururaj/blip2-opt-2.7b-french-pre-lora-final
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:43:16+00:00
|
null | null |
This file does work, but because it is based on the LLaVA 1.5 data it is often not as accurate as one would hope. This is the first GGUF mmproj file created for Llama 3 models. Chaotic Neutrals is dedicated to bringing the best multimodal experience to our users and will keep trying to provide the best methods available.
I recommend using KoboldCpp and applying this file in the "Model Files" tab the same way you would the Mistral mmproj.

|
{}
|
ChaoticNeutrals/mlm-filter-llava-llama-3-8b-gpt4v-mmproj-outdated
| null |
[
"gguf",
"region:us"
] | null |
2024-04-24T03:43:23+00:00
|
text-generation
|
transformers
|
## How to Use
```
import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # "auto" is a device_map value, not a device
)

messages = [
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    {"role": "user", "content": "Quem é você?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

print(outputs[0]["generated_text"][len(prompt):])
```
|
{"language": ["pt"], "datasets": ["adalbertojunior/openHermes_portuguese"]}
|
adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"pt",
"dataset:adalbertojunior/openHermes_portuguese",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:43:39+00:00
|
null | null |
- Uses NSTC's taide/TAIDE-LX-7B as the base model.
- The training dataset was taken from Taiwan e-Hospital (https://sp1.hso.mohw.gov.tw/), the online medical consultation platform of Taiwan's Ministry of Health and Welfare, totaling more than 160,000 records.
How to use:
1. Wrap the model as an API service with LM Studio (https://lmstudio.ai/).
2. In app.py under the web folder, change this line:
client = OpenAI(base_url="http://chatapi.xxxxxx:1234/v1/", api_key="not-needed")
so that base_url points to your own API service URL (a minimal sketch of this call appears after the screenshots below).
3. Run `streamlit run app.py` to serve a Q&A interface like the one shown in the screenshots below.
4. Example run:
Q: A polyp was found during my gastroscopy today. Is that a problem?
A: Finding gastric polyps after a gastroscopy is common and usually indicates a peptic ulcer or damage to the gastric mucosa. Although gastric polyps are mostly harmless, a larger polyp located toward the upper part of the stomach (e.g., antral-type degeneration) may cause symptoms such as gastroesophageal reflux. We therefore recommend discussing follow-up examinations with your physician to confirm the polyp's size and location, so that the need for treatment (e.g., polyp removal) can be evaluated.
Detailed answer: Regarding the polyp found during your gastroscopy, I can explain in more detail. During a gastroscopy, the physician carefully inspects the stomach tissue, including the mucosa, any ulcers or tumors, and the structure of the esophagus. After a polyp is found, a follow-up endoscopy is usually arranged to confirm its location, size, and any related lesions.


|
{"language": ["en", "zh"], "license": "apache-2.0", "tags": ["medical"]}
|
chtseng/TAIDE-Medicine-QA-TW-Q6
| null |
[
"gguf",
"medical",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T03:43:40+00:00
|
null | null |
{}
|
titanbot/ct2-int8-mpt-7b-8k-chat
| null |
[
"region:us"
] | null |
2024-04-24T03:43:45+00:00
|
|
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-invitrace-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1286
- Accuracy: 0.9684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6128 | 0.2132 | 100 | 0.4694 | 0.9044 |
| 0.3905 | 0.4264 | 200 | 0.5236 | 0.8484 |
| 0.4315 | 0.6397 | 300 | 0.3988 | 0.8884 |
| 0.4028 | 0.8529 | 400 | 0.2213 | 0.9432 |
| 0.1097 | 1.0661 | 500 | 0.2963 | 0.92 |
| 0.1883 | 1.2793 | 600 | 0.2047 | 0.9448 |
| 0.137 | 1.4925 | 700 | 0.1695 | 0.9548 |
| 0.2309 | 1.7058 | 800 | 0.2159 | 0.9384 |
| 0.094 | 1.9190 | 900 | 0.1987 | 0.9452 |
| 0.0282 | 2.1322 | 1000 | 0.1861 | 0.9528 |
| 0.0231 | 2.3454 | 1100 | 0.1944 | 0.9476 |
| 0.0409 | 2.5586 | 1200 | 0.1625 | 0.96 |
| 0.0386 | 2.7719 | 1300 | 0.1486 | 0.9616 |
| 0.0249 | 2.9851 | 1400 | 0.1736 | 0.9572 |
| 0.012 | 3.1983 | 1500 | 0.1469 | 0.9624 |
| 0.0304 | 3.4115 | 1600 | 0.1405 | 0.9644 |
| 0.0052 | 3.6247 | 1700 | 0.1498 | 0.9636 |
| 0.0247 | 3.8380 | 1800 | 0.1286 | 0.9684 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
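A minimal inference sketch for the fine-tuned checkpoint; `food.jpg` is a placeholder for any local image:
```python
# Classify a local image with the fine-tuned ViT checkpoint above.
from transformers import pipeline

classifier = pipeline("image-classification", model="Tuu-invitrace/vit-invitrace-food")
print(classifier("food.jpg", top_k=3))  # "food.jpg" is a placeholder path
```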
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-invitrace-food", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9684, "name": "Accuracy"}]}]}]}
|
Tuu-invitrace/vit-invitrace-food
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:44:48+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-open-RoBERTamodel
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5936
- Accuracy: 0.6976
- F1: 0.6984
- Precision: 0.6999
- Recall: 0.6969
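For reference, the four figures above can be reproduced from model predictions with scikit-learn; the label arrays are placeholders, and macro averaging is an assumption since the card does not state the averaging mode:
```python
# Recomputing the reported metrics from predictions (placeholder arrays).
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0]  # placeholder gold labels
y_pred = [0, 1, 0, 0]  # placeholder model predictions

print(accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred, average="macro"))        # averaging mode assumed
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
```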
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "roberta-base", "model-index": [{"name": "finetuning-open-RoBERTamodel", "results": []}]}
|
hsiuping/finetuning-open-RoBERTamodel
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:45:49+00:00
|
null | null |
{"license": "openrail"}
|
wkthdd2/Cartideep
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T03:46:38+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
santoshsto/mistral-7b-java-FINETUNED-16bit
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:46:51+00:00
|
text-generation
| null |
## Llamacpp imatrix Quantizations of Lexi-Llama-3-8B-Uncensored
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2714">b2714</a> for quantization.
Original model: https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored
All quants were made using the imatrix option with the dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
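A small helper that assembles this template in code; the double newlines after each header follow the usual Llama 3 chat format and are an assumption here:
```python
# Build a Llama-3-style prompt string matching the template above.
# Newline placement after each header is assumed from the usual Llama 3 format.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```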
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lexi-Llama-3-8B-Uncensored-Q8_0.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Lexi-Llama-3-8B-Uncensored-Q6_K.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q5_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q5_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q4_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q4_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-IQ4_NL.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-IQ4_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_L.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_XXS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Lexi-Llama-3-8B-Uncensored-Q2_K.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_XXS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ1_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Lexi-Llama-3-8B-Uncensored-IQ1_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
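The sizing rule of thumb above can be written as a tiny helper; the file sizes come from the table, and the 1.5 GB headroom is one reading of the "1-2GB smaller" guidance:
```python
# Pick the largest quant that leaves headroom in VRAM, per the rule of thumb above.
QUANTS = {  # file sizes in GB, from the table above (subset)
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else "IQ1_S"

print(pick_quant(8.0))  # -> "Q5_K_M" on an 8 GB card
```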
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
{"license": "llama3", "tags": ["uncensored", "llama3", "instruct", "open"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
|
bartowski/Lexi-Llama-3-8B-Uncensored-GGUF
| null |
[
"gguf",
"uncensored",
"llama3",
"instruct",
"open",
"text-generation",
"license:llama3",
"region:us"
] | null |
2024-04-24T03:51:21+00:00
|
text-generation
|
transformers
|
# OrpoLlama-3-8B-GGUF
- This is a quantized version of [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) created using llama.cpp
# Model Description

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on 1k samples of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) created for [this article](https://huggingface.co/blog/mlabonne/orpo-llama-3).
It's a successful fine-tune that follows the ChatML template!
## 🔎 Application
This model uses a context window of 8k. It was trained with the ChatML template.
## 🏆 Evaluation
### Nous
OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA datasets.
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval), see the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [**mlabonne/OrpoLlama-3-8B**](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | **48.63** | **34.17** | **70.59** | **52.39** | **37.36** |
| [mlabonne/OrpoLlama-3-8B-1k](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/f41dad371d1781d0434a4672fd6f0b82) | 46.76 | 31.56 | 70.19 | 48.11 | 37.17 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
`mlabonne/OrpoLlama-3-8B-1k` corresponds to a version of this model trained on 1K samples (you can see the parameters in [this article](https://huggingface.co/blog/mlabonne/orpo-llama-3)).
### Open LLM Leaderboard
TBD.
## 📈 Training curves
You can find the experiment on W&B at [this address](https://wandb.ai/mlabonne/DPO/runs/vxnmq24z/workspace?nw=nwusermlabonne).

|
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["orpo", "llama 3", "rlhf", "sft"], "datasets": ["mlabonne/orpo-dpo-mix-40k"], "base_model": "mlabonne/OrpoLlama-3-8B", "pipeline_tag": "text-generation"}
|
QuantFactory/OrpoLlama-3-8B-GGUF
| null |
[
"transformers",
"gguf",
"orpo",
"llama 3",
"rlhf",
"sft",
"text-generation",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:mlabonne/OrpoLlama-3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:52:33+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: microsoft/rho-math-1b-interpreter-v0.1
layer_range: [0, 22]
- model: appvoid/palmer-003
layer_range: [0, 22]
merge_method: slerp
base_model: appvoid/palmer-003
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
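A configuration like this can be reproduced with the mergekit CLI (a sketch; the config filename and flags are assumptions about your local setup):
```bash
pip install mergekit
mergekit-yaml config.yaml ./palmer-math-slerp-3 --copy-tokenizer --lazy-unpickle
```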
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["microsoft/rho-math-1b-interpreter-v0.1", "appvoid/palmer-003"]}
|
appvoid/palmer-math-slerp-3
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:microsoft/rho-math-1b-interpreter-v0.1",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:53:00+00:00
|
null | null |
{}
|
yuchen173/test
| null |
[
"region:us"
] | null |
2024-04-24T03:53:08+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [hf-internal-testing/tiny-albert](https://huggingface.co/hf-internal-testing/tiny-albert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1315
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25.0
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6937 | 0.1 | 25 | 0.6927 | 0.5115 |
| 0.6911 | 0.2 | 50 | 0.6770 | 0.613 |
| 0.4988 | 0.3 | 75 | 0.3574 | 0.886 |
| 0.2604 | 0.4 | 100 | 0.1720 | 0.9525 |
| 0.1758 | 0.5 | 125 | 0.1787 | 0.9435 |
| 0.1964 | 0.6 | 150 | 0.1327 | 0.9615 |
| 0.1637 | 0.7 | 175 | 0.1269 | 0.9635 |
| 0.1453 | 0.8 | 200 | 0.1538 | 0.9565 |
| 0.1563 | 0.9 | 225 | 0.1508 | 0.9575 |
| 0.1657 | 1.0 | 250 | 0.1315 | 0.96 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
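### Inference example
A minimal usage sketch with the 🤗 `pipeline` API (the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Nali19/results")
print(classifier("This movie was surprisingly good!"))
```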
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "hf-internal-testing/tiny-albert", "model-index": [{"name": "results", "results": []}]}
|
Nali19/results
| null |
[
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:hf-internal-testing/tiny-albert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:54:19+00:00
|
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-clothes-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the DBQ/Matches.Fashion.Product.prices.France dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2328
- Accuracy: 0.6395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.0975 | 0.5714 | 500 | 1.2619 | 0.6111 |
| 0.8315 | 1.1429 | 1000 | 1.3133 | 0.6322 |
| 0.7266 | 1.7143 | 1500 | 1.2077 | 0.6356 |
| 0.5451 | 2.2857 | 2000 | 1.2895 | 0.6556 |
| 0.4287 | 2.8571 | 2500 | 1.2736 | 0.6644 |
| 0.2554 | 3.4286 | 3000 | 1.3801 | 0.6767 |
| 0.2265 | 4.0 | 3500 | 1.4924 | 0.6656 |
| 0.0738 | 4.5714 | 4000 | 1.6321 | 0.68 |
| 0.0761 | 5.1429 | 4500 | 1.6676 | 0.6767 |
| 0.0251 | 5.7143 | 5000 | 1.6911 | 0.7056 |
| 0.0147 | 6.2857 | 5500 | 1.7312 | 0.7 |
| 0.0051 | 6.8571 | 6000 | 1.7282 | 0.6922 |
| 0.0028 | 7.4286 | 6500 | 1.7679 | 0.6967 |
| 0.0017 | 8.0 | 7000 | 1.7642 | 0.6989 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
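### Inference example
A minimal usage sketch with the 🤗 `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jolual2747/vit-clothes-classification")
print(classifier("path/to/clothing_photo.jpg"))
```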
|
{"license": "apache-2.0", "tags": ["image-classification", "clothes-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-clothes-classification", "results": []}]}
|
jolual2747/vit-clothes-classification
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"clothes-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T03:55:42+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
calliehsu/tiny-llama-shuttle-xpc-cube
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:57:27+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-hf"}
|
amartyasaran/FLogLlama
| null |
[
"peft",
"tensorboard",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null |
2024-04-24T03:58:07+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
santoshsto/mistral-7b-javascript-FINETUNED-16bit
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T03:59:58+00:00
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny En-US - FredDYyy
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds 14 English dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6994
- Wer Ortho: 0.3291
- Wer: 0.3288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0006 | 17.8571 | 500 | 0.6994 | 0.3291 | 0.3288 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
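### Inference example
A minimal transcription sketch with the 🤗 `pipeline` API (the audio file is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="FredDYyy/whisper-tiny-en")
print(asr("sample_call.wav")["text"])
```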
|
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["whisper-tiny/minds_14_en"], "metrics": ["wer"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny En-US - FredDYyy", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Minds 14 English", "type": "whisper-tiny/minds_14_en", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "wer", "value": 0.32881136950904394, "name": "Wer"}]}]}]}
|
FredDYyy/whisper-tiny-en
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:whisper-tiny/minds_14_en",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T04:01:41+00:00
|
null | null |
{"license": "openrail"}
|
sdadasfgdfgfdg/Nabi
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T04:02:49+00:00
|
|
null | null |
# Multi_verse_modelT3qm7xp-7B
Multi_verse_modelT3qm7xp-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: MTSAIR/multi_verse_model
- model: nlpguy/T3QM7XP
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
# pip install -qU transformers accelerate  (run once in your environment)
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Multi_verse_modelT3qm7xp-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
|
automerger/Multi_verse_modelT3qm7xp-7B
| null |
[
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T04:04:06+00:00
|
null | null |
{}
|
b2bp8ip/xd
| null |
[
"region:us"
] | null |
2024-04-24T04:04:48+00:00
|
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ripaaiii/fine-tune-C1-revised-lr6-boxkecil10
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T04:05:37+00:00
|
text-generation
|
transformers
|
{"license": "mit"}
|
migueldeguzmandev/GPT2XL_RLLMv18-1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T04:06:28+00:00
|
|
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DonutProcessor_Detail
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.2
- Tokenizers 0.13.3
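### Inference example
A minimal inference sketch with `DonutProcessor` (the input image is a placeholder, and the task prompt token is an assumption that depends on how the model was trained):
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = DonutProcessor.from_pretrained("quipohealth/faxreader")
model = VisionEncoderDecoderModel.from_pretrained("quipohealth/faxreader")

image = Image.open("fax_page.png").convert("RGB")  # placeholder input
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # assumption: replace with the task start token used in training
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```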
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "model-index": [{"name": "DonutProcessor_Detail", "results": []}]}
|
quipohealth/faxreader
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T04:08:13+00:00
|
text-generation
| null |
# Unichat-llama3-Chinese-8B-GGUF
- This is quantized version of [UnicomLLM/Unichat-llama3-Chinese-8B](https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B)
# Model Description (Translated)
- China Unicom AI Innovation Center released the industry's first Llama 3 Chinese instruction fine-tuned model (full-parameter fine-tuning), uploaded at 22:00 on April 19, 2024
- This model is based on [**Meta Llama 3**](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6), with additional Chinese training data, and delivers high-quality Chinese question answering on top of the Llama 3 model.
- The model keeps the native 8K context length; a version supporting 64K context will be released later.
- Base model [**Meta-Llama-3-8B**](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### 📊 Data
- High-quality instruction data covering multiple fields and industries, providing ample data support for model training
- The fine-tuning instruction data underwent strict manual screening to ensure only high-quality instruction data was used for model fine-tuning.
For more details on models, datasets and training please refer to:
* Github:[**Unichat-llama3-Chinese**](https://github.com/UnicomAI/Unichat-llama3-Chinese)
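### 💻 Usage
A minimal local-inference sketch with `llama-cpp-python` (the quant filename below is an assumption; check this repo's files for the actual names):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="QuantFactory/Unichat-llama3-Chinese-8B-GGUF",
    filename="Unichat-llama3-Chinese-8B.Q4_K_M.gguf",  # assumption: pick a real file from the repo
)
llm = Llama(model_path=path, n_ctx=8192)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "用中文介绍一下你自己。"}])
print(out["choices"][0]["message"]["content"])
```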
|
{"language": ["en", "zh"], "license": "apache-2.0", "pipeline_tag": "text-generation", "base_model": "UnicomLLM/Unichat-llama3-Chinese-8B"}
|
QuantFactory/Unichat-llama3-Chinese-8B-GGUF
| null |
[
"gguf",
"text-generation",
"en",
"zh",
"base_model:UnicomLLM/Unichat-llama3-Chinese-8B",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T04:08:25+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
santoshsto/mistral-7b-cpp-FINETUNED-4bit
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T04:09:06+00:00
|
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - azhang42/onepiece-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the YaYaB/onepiece-blip-captions dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# A minimal sketch (the prompt and step count are assumptions; adjust to taste):
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("azhang42/onepiece-lora")  # LoRA weights from this repo

image = pipe("a one piece style portrait of a pirate", num_inference_steps=30).images[0]
image.save("onepiece_sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
|
azhang42/onepiece-lora
| null |
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-24T04:10:58+00:00
|
null | null |
{}
|
kannan007/swin-tiny-patch4-window7-224-finetuned-eurosat
| null |
[
"region:us"
] | null |
2024-04-24T04:11:58+00:00
|
|
reinforcement-learning
|
ml-agents
|
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: phoenixaiden33/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
|
phoenixaiden33/poca-SoccerTwos
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | null |
2024-04-24T04:12:13+00:00
|
text-generation
|
transformers
|
This model is a fine-tuned version of the meta-llama/Llama-2-7b-chat-hf pre-trained model, refined on content from the Netlab and Xlab blogs.
|
{"license": "apache-2.0", "datasets": ["kenshinx/netlab-blogs"]}
|
kenshinx/Llama-2-7b-chat-xgpt
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"dataset:kenshinx/netlab-blogs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T04:12:53+00:00
|
text-generation
|
transformers
|
{"license": "mit"}
|
migueldeguzmandev/GPT2XL_RLLMv18-2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T04:15:26+00:00
|
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "working", "results": []}]}
|
chananyulim1616/working
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T04:16:34+00:00
|
null | null |
{}
|
thusinh1969/LLaMA-2-finetune-cp53500-20APRIL2024-gguf
| null |
[
"gguf",
"region:us"
] | null |
2024-04-24T04:16:34+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RM-HH-Gemma_helpful_human_20000_gemma2b_shuffleFalse_extractchosenTrue
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6203
- Accuracy: 0.6600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.733 | 0.06 | 250 | 0.7295 | 0.5123 |
| 0.7034 | 0.11 | 500 | 0.7061 | 0.5498 |
| 0.7165 | 0.17 | 750 | 0.6913 | 0.5729 |
| 0.6821 | 0.22 | 1000 | 0.6809 | 0.5884 |
| 0.6707 | 0.28 | 1250 | 0.6744 | 0.6024 |
| 0.6551 | 0.33 | 1500 | 0.6692 | 0.6114 |
| 0.6744 | 0.39 | 1750 | 0.6617 | 0.6144 |
| 0.6418 | 0.45 | 2000 | 0.6591 | 0.6335 |
| 0.6546 | 0.5 | 2250 | 0.6567 | 0.6355 |
| 0.6465 | 0.56 | 2500 | 0.6537 | 0.6375 |
| 0.6489 | 0.61 | 2750 | 0.6471 | 0.6390 |
| 0.6555 | 0.67 | 3000 | 0.6399 | 0.6390 |
| 0.647 | 0.72 | 3250 | 0.6374 | 0.6460 |
| 0.6555 | 0.78 | 3500 | 0.6365 | 0.6490 |
| 0.6165 | 0.83 | 3750 | 0.6347 | 0.6465 |
| 0.6385 | 0.89 | 4000 | 0.6338 | 0.6485 |
| 0.6202 | 0.95 | 4250 | 0.6317 | 0.6490 |
| 0.6198 | 1.0 | 4500 | 0.6316 | 0.6520 |
| 0.6092 | 1.06 | 4750 | 0.6325 | 0.6515 |
| 0.6091 | 1.11 | 5000 | 0.6339 | 0.6510 |
| 0.605 | 1.17 | 5250 | 0.6338 | 0.6540 |
| 0.673 | 1.22 | 5500 | 0.6263 | 0.6550 |
| 0.6119 | 1.28 | 5750 | 0.6267 | 0.6565 |
| 0.6153 | 1.34 | 6000 | 0.6267 | 0.6580 |
| 0.6048 | 1.39 | 6250 | 0.6249 | 0.6560 |
| 0.62 | 1.45 | 6500 | 0.6228 | 0.6540 |
| 0.6213 | 1.5 | 6750 | 0.6234 | 0.6595 |
| 0.6107 | 1.56 | 7000 | 0.6228 | 0.6605 |
| 0.6266 | 1.61 | 7250 | 0.6212 | 0.6580 |
| 0.6088 | 1.67 | 7500 | 0.6211 | 0.6595 |
| 0.6282 | 1.72 | 7750 | 0.6210 | 0.6615 |
| 0.6384 | 1.78 | 8000 | 0.6197 | 0.6610 |
| 0.5987 | 1.84 | 8250 | 0.6198 | 0.6580 |
| 0.5911 | 1.89 | 8500 | 0.6201 | 0.6600 |
| 0.5981 | 1.95 | 8750 | 0.6203 | 0.6600 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
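### Inference example
To score a response with this reward adapter, a minimal sketch (assumptions: a `num_labels=1` reward head as used by TRL's reward trainer, and a plain Human/Assistant prompt format):
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo = "Holarissun/RM-HH-Gemma_helpful_human_20000_gemma2b_shuffleFalse_extractchosenTrue"
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoPeftModelForSequenceClassification.from_pretrained(repo, num_labels=1)

text = "Human: How do I bake bread?\n\nAssistant: Start with flour, water, yeast, and salt..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher = judged more helpful
print(reward)
```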
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/gemma-2b", "model-index": [{"name": "RM-HH-Gemma_helpful_human_20000_gemma2b_shuffleFalse_extractchosenTrue", "results": []}]}
|
Holarissun/RM-HH-Gemma_helpful_human_20000_gemma2b_shuffleFalse_extractchosenTrue
| null |
[
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-24T04:16:47+00:00
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Azure99/blossom-v3_1-yi-34b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
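For example, a single quant file can be fetched with `huggingface_hub` (a minimal sketch; filenames come from the table below):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/blossom-v3_1-yi-34b-GGUF",
    filename="blossom-v3_1-yi-34b.Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)
```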
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["Azure99/blossom-chat-v1", "Azure99/blossom-math-v2", "Azure99/blossom-wizard-v1", "Azure99/blossom-orca-v1"], "base_model": "Azure99/blossom-v3_1-yi-34b", "quantized_by": "mradermacher"}
|
mradermacher/blossom-v3_1-yi-34b-GGUF
| null |
[
"transformers",
"gguf",
"en",
"dataset:Azure99/blossom-chat-v1",
"dataset:Azure99/blossom-math-v2",
"dataset:Azure99/blossom-wizard-v1",
"dataset:Azure99/blossom-orca-v1",
"base_model:Azure99/blossom-v3_1-yi-34b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T04:16:56+00:00
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CAMEL-33B-Combined-Data-GGUF/resolve/main/CAMEL-33B-Combined-Data.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "camel-ai/CAMEL-33B-Combined-Data", "quantized_by": "mradermacher"}
|
mradermacher/CAMEL-33B-Combined-Data-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:camel-ai/CAMEL-33B-Combined-Data",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T04:16:56+00:00
|