modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, length 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
Weyaxi/CollectiveCognition-v1.1-Nebula-7B | Weyaxi | "2023-11-25T17:47:04Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-08T13:57:07Z" | ---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# CollectiveCognition-v1.1-Nebula-7B
CollectiveCognition-v1.1-Nebula-7B is a merge of [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) and [PulsarAI/Nebula-7B](https://huggingface.co/Weyaxi/Nebula-7B).
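The card does not include a usage example; a minimal Transformers sketch might look like the following (the Alpaca-style prompt and sampling settings are assumptions, not documented for this merge):
```python
# Minimal usage sketch; prompt format and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/CollectiveCognition-v1.1-Nebula-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "### Instruction:\nExplain what a model merge is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```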
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__CollectiveCognition-v1.1-Nebula-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.79 |
| ARC (25-shot) | 58.11 |
| HellaSwag (10-shot) | 82.39 |
| MMLU (5-shot) | 57.03 |
| TruthfulQA (0-shot) | 53.53 |
| Winogrande (5-shot) | 73.72 |
| GSM8K (5-shot) | 9.55 |
| DROP (3-shot) | 42.17 |
|
TheBloke/claude2-alpaca-7B-GGUF | TheBloke | "2023-11-10T10:46:17Z" | 1,104 | 12 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:umd-zhou-lab/claude2_alpaca",
"base_model:umd-zhou-lab/claude2-alpaca-7B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | "2023-11-10T10:42:03Z" | ---
base_model: umd-zhou-lab/claude2-alpaca-7B
datasets:
- umd-zhou-lab/claude2_alpaca
inference: false
language:
- en
license: llama2
model_creator: Tianyi Lab @ UMD
model_name: Claude2 Alpaca 7B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Claude2 Alpaca 7B - GGUF
- Model creator: [Tianyi Lab @ UMD](https://huggingface.co/umd-zhou-lab)
- Original model: [Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Tianyi Lab @ UMD's Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/claude2-alpaca-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF)
* [Tianyi Lab @ UMD's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
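For reference, a small helper that fills this template before handing the result to any of the backends below (a sketch; `build_prompt` is not part of this repo):
```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarise GGUF in one sentence."))
```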
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [claude2-alpaca-7b.Q2_K.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [claude2-alpaca-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [claude2-alpaca-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [claude2-alpaca-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [claude2-alpaca-7b.Q4_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [claude2-alpaca-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [claude2-alpaca-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [claude2-alpaca-7b.Q5_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [claude2-alpaca-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [claude2-alpaca-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [claude2-alpaca-7b.Q6_K.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [claude2-alpaca-7b.Q8_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF/blob/main/claude2-alpaca-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/claude2-alpaca-7B-GGUF and below it, a specific filename to download, such as: claude2-alpaca-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/claude2-alpaca-7B-GGUF claude2-alpaca-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/claude2-alpaca-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/claude2-alpaca-7B-GGUF claude2-alpaca-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m claude2-alpaca-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/claude2-alpaca-7B-GGUF", model_file="claude2-alpaca-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
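The llama-cpp-python library mentioned above can be used in much the same way; a minimal sketch (parameter values are illustrative, adjust them for your hardware):
```python
# Minimal llama-cpp-python sketch; adjust n_gpu_layers / n_ctx to your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="claude2-alpaca-7b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI\n\n### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```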
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
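As a quick illustration of the llama-cpp-python route (a sketch only; refer to the guides above for the current LangChain API):
```python
# Sketch: wiring the GGUF model into LangChain via LlamaCpp; parameters are illustrative.
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
llm = LlamaCpp(model_path="claude2-alpaca-7b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
print(llm(template.format(instruction="Explain GGUF briefly.")))
```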
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Tianyi Lab @ UMD's Claude2 Alpaca 7B
# Model Card for umd-zhou-lab/claude2-alpaca-7B
<!-- Provide a quick summary of what the model is/does. -->
This model is trained by fine-tuning llama-2 with claude2 alpaca data.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** UMD Tianyi Zhou Lab
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca)
- **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca)
## Uses
The primary use of this model is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training
We use the prompt from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| Model (7B) | 128 | 2e-5 | 3 | 4096 | 0 |
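For reference, the Stanford Alpaca prompt mentioned above comes in two variants, as defined in the Stanford Alpaca repository's training script (the no-input form matches the template shown earlier in this README):
```python
# The two Alpaca prompt variants, as defined in the Stanford Alpaca repository.
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides "
        "further context. Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}
```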
## Performance
Compared to Llama-2-chat, our models achieve better average performance.<br>
| | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length |
|---|---|---|---|---|---|---|---|
| Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 |
| Llama-2-13b-chat | 59.935 | 59.04| 81.94 | 54.64 | 44.12 | 81.09 | 1513 |
|||||||||
| claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 |
| claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 |
## Citation
Please consider citing our paper if you find our code, data, or models useful. Thank you!
```
@misc{claude2-alpaca,
author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang},
title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}},
}
```
<!-- original-model-card end -->
|
Edentns/DataVortexS-10.7B-v0.3 | Edentns | "2024-02-24T14:18:23Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:jojo0217/korean_rlhf_dataset",
"base_model:hyeogi/SOLAR-10.7B-dpo-v0.1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-06T03:06:51Z" | ---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: hyeogi/SOLAR-10.7B-dpo-v0.1
pipeline_tag: text-generation
datasets:
- jojo0217/korean_rlhf_dataset
---
# **DataVortexS-10.7B-v0.3**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1)
### **Trained On**
- **OS**: Ubuntu 20.04
- **GPU**: H100 80GB 1ea
- **transformers**: v4.36.2
### **Dataset**
- [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset)
### **Instruction format**
It follows the **Alpaca** format.
E.g.
```python
text = """\
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.
### Instruction:
대한민국의 수도는 어디야?
### Response:
대한민국의 수도는 서울입니다.
### Instruction:
서울 인구는 총 몇 명이야?
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -------------: | -------------: | ------------: | -------------: |
| kobest_boolq | 0.606754 | 0.553485 | 0.583201 | 0.587602 |
| kobest_copa | 0.603643 | 0.625567 | 0.618533 | 0.627404 |
| kobest_hellaswag | 0.360793 | 0.366002 | 0.37105 | 0.357439 |
| kobest_sentineg | 0.652929 | 0.751097 | 0.742426 | 0.760152 |
| **Average** | **0.55602975** | **0.57403775** | **0.5788025** | **0.58314925** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 37.57 | 33.87 | 42.47 | 28.21 | 46.09 | 37.19 |
## **Implementation Code**
This model includes a chat_template for its instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.3")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.3")
messages = [
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
    {"role": "user", "content": "대한민국의 수도는 어디야?"},
    {"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
    {"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
Herry443/Mistral-7B-KNUT-ref | Herry443 | "2024-02-06T03:42:38Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-06T15:23:44Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
tags:
- mistral
---
### Model Details
- Base Model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Datasets
- sampling [kyujinpy/KOR-OpenOrca-Platypus-v2](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v2)
- sampling [HumanF-MarkrAI/WIKI_QA_Near_dedup](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup)
- sampling [kyujinpy/KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000)
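Illustrative only (the actual sample sizes and selection method are not documented), subsampling one of these datasets with the `datasets` library could look like:
```python
# Hypothetical example of drawing a sample from one of the listed datasets;
# the real sample sizes/ratios used for this model are not stated.
from datasets import load_dataset

orca = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v2", split="train")
sample = orca.shuffle(seed=42).select(range(10_000))  # hypothetical sample size
print(sample)
```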
|
cocoirun/Yi-Ko-6B-instruct-v1.0 | cocoirun | "2024-01-11T01:46:54Z" | 1,104 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T01:25:34Z" | ---
license: cc-by-sa-4.0
---
<h1>instruct model v1.0</h1>
<b><Training data construction></b>
We analyzed the Open-Orca-ko data to extract its tasks, then used open-source NLP datasets matched to those tasks to build roughly 40k training examples in-house (history, science, math, machine reading comprehension, review analysis).
In addition, we filtered and cleaned part of Open-Orca-Ko and added KoBEST data.
We built further training data from the AIHub common-knowledge and machine-reading-comprehension datasets (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were converted into training-data form by hand.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected manually.
English-translated math/science data was also used as training data.
In total, SFT was run on about 110k training examples.
<br>
We are currently translating and cleaning part of the Open-Orca dataset to train and improve a new version of the model.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ For machine-reading-comprehension training data, answers were obtained via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not public.
<b><Training></b>
Training was carried out with LoRA on 2x A100 40GB GPUs.
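A minimal, hypothetical sketch of such a LoRA SFT setup (the authors' actual script, hyperparameters, and data are not public):
```python
# Hypothetical LoRA SFT sketch; base model, LoRA settings, and data are assumptions.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "beomi/Yi-Ko-6B"  # assumed base model; not stated explicitly in this card
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")

# Generic LoRA configuration; the real rank/alpha/target modules are not documented.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Toy stand-in for the ~110k private instruction examples.
ds = Dataset.from_dict({"text": ["### Instruction:\n한국의 수도는?\n\n### Response:\n서울입니다."]})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=2048),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, bf16=True, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```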
|
cocoirun/Yi-Ko-6B-instruct-v1.4 | cocoirun | "2024-01-11T01:47:30Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T01:28:25Z" | ---
license: cc-by-sa-4.0
---
<h1>instruct model v1.4</h1>
<b><Training data construction></b>
We analyzed the Open-Orca-ko data to extract its tasks, then used open-source NLP datasets matched to those tasks to build roughly 40k training examples in-house (history, science, math, machine reading comprehension, review analysis).
In addition, we filtered and cleaned part of Open-Orca-Ko and added KoBEST data.
We built further training data from the AIHub common-knowledge and machine-reading-comprehension datasets (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were converted into training-data form by hand.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected manually.
English-translated math/science data was also used as training data.
In total, SFT was run on about 110k training examples.
<br>
We are currently translating and cleaning part of the Open-Orca dataset to train and improve a new version of the model.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ For machine-reading-comprehension training data, answers were obtained via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not public.
<br>
<b><Training></b>
Training was carried out with LoRA on 2x A100 40GB GPUs. |
LI-ST/Mistral-7B-ko-v0.006 | LI-ST | "2024-01-09T07:35:33Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T10:22:07Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko</h1></p>
basemodel: Open-Orca/Mistral-7B-OpenOrca
=================================================
<BR>
This model is a temporary model for testing.
<BR>
We will be deleting it soon.
<BR>
================================================= |
DopeorNope/Ko-Mixtral-MoE-7Bx2 | DopeorNope | "2024-01-20T07:32:32Z" | 1,104 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"ko",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T02:45:49Z" | ---
license: cc-by-nc-sa-4.0
language:
- ko
- en
tags:
- moe
---
# The license is cc-by-nc-sa-4.0.
- Commercial use is not allowed.
# Will upload it ASAP.
# This model is not based on the Synatra model; we pre-trained and fully fine-tuned Mixtral x2 to enhance its Korean abilities.
# DATASET
- Using a self-supervised learning approach, we converted a raw corpus into instruction-tuned data.
- We used text-mining techniques to create the training data.
- Here are some examples:
- **Mask prediction Task**
```python
#Mask prediction
text = '지능(智能) 또는 인텔리전스(intelligence)는 인간의 <MASK> 능력을 말한다.'
response = '지적'
complete_text = '지능(智能) 또는 인텔리전스(intelligence)는 인간의 지적 능력을 말한다.'
```
- **Text align Task**
```python
# Text-align task
text_list = ['복수명령-복수자료(MIMD, Multiple Instruction, Multiple Data)은 전산에서 병렬화의 한 기법이다.',
             '분산 메모리의 예는 MPP(massively parallel processors)와 COW (Clusters of Workstations)이다.',
             'MIMD기계는 공유 메모리이거나 분산 메모리이며 이러한 분류는 MIMD가 어떻게 메모리를 이용하느냐에 따라 나뉜다.']
response = '복수명령-복수자료(MIMD, Multiple Instruction, Multiple Data)은 전산에서 병렬화의 한 기법이다. \
MIMD기계는 공유 메모리이거나 분산 메모리이며 이러한 분류는 MIMD가 어떻게 메모리를 이용하느냐에 따라 나뉜다. \
분산 메모리의 예는 MPP(massively parallel processors)와 COW (Clusters of Workstations)이다.'
```
- **Text completion Task**
```python
# Text completion
text = '그린브라우저(GreenBrowser)는 인터넷 익스플로러에서 사용하는 트라이던트 레이아웃 엔진을 바탕으로 하며 중국에 기반을 둔 소프트웨어 회사인 모어퀵(morequick)에서 만든 무료 웹 브라우저다. 간체자 중국어가 웹 브라우저에 내장되어 있다. \
맥스톤 웹 브라우저와 비슷하여 MyIE와 밀접하게 관련되어 있다. 맥스톤용의 일부 플러그인이 그린브라우저에서도 작동할 것이다.'
response = '자동 스크롤, 자동 리프레시, 자동 저장, 자동 폼 채우기와 같은 많은 자동화 기능이 있다.'
``` |
colable/llama-ko-peft-v0.5 | colable | "2024-01-30T06:19:51Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-30T06:03:22Z" | ---
license: mit
language:
- ko
---
# open-llama-2-ko-based model with in-house dataset
This is a Korean model based on
* [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b)
GPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "colable/llama-ko-peft-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
``` |
DKYoon/kosolar-hermes-test | DKYoon | "2024-02-14T07:52:52Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-14T07:43:24Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0 | ITT-AF | "2024-03-06T07:40:07Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T06:35:07Z" | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
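Expressed as Transformers `TrainingArguments`, the listed values would roughly correspond to the following (a sketch; the actual training script is not provided and `output_dir` is hypothetical):
```python
# Sketch mapping the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="itt-42dot-plm-1.3b-v6.0",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 24 x 4 = 96 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                      # "Native AMP" mixed precision
)
```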
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
giprime/OOM-SOLAR-10.7B_02 | giprime | "2024-03-14T23:02:25Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-13T23:18:53Z" | ---
license: apache-2.0
language:
- en
- ko
library_name: transformers
---
## Model Architecture
OOM-SOLAR-10.7B_02 is a language model that uses an optimized transformer architecture based on upstage/SOLAR-10.7B-v1.0.
## Model description
Based on "beomi/OPEN-SOLAR-KO-10.7B"
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 24
- gradient_accumulation_steps: 1
- total_train_batch_size:
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ai-human-lab/EEVE-Korean-10.8B-SFT-v0.1 | ai-human-lab | "2024-03-20T04:00:57Z" | 1,104 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | "2024-03-20T03:53:28Z" | ---
license: apache-2.0
---
This model is a fine-tuned version of yanolja/EEVE-Korean-10.8B-v1.0, which is a Korean vocabulary-extended version of upstage/SOLAR-10.7B-v1.0. Specifically, SFT was applied using Axolotl. |
moondriller/solar10B-eugeneparkthebestv2 | moondriller | "2024-03-28T04:24:42Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-28T04:06:51Z" | ---
language:
- ko
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ppuuttyy/ppuuttyy | ppuuttyy | "2024-04-03T06:49:27Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | "2024-03-28T14:52:22Z" | ---
license: apache-2.0
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LiteLLMs/Llama3-OpenBioLLM-70B-GGUF | LiteLLMs | "2024-05-28T22:20:38Z" | 1,104 | 2 | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"heathcare",
"medical",
"clinical",
"med",
"lifescience",
"Pharmaceutical",
"Pharma",
"GGUF",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"region:us"
] | null | "2024-04-29T16:17:46Z" |
---
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- heathcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
- GGUF
base_model: meta-llama/Meta-Llama-3-70B-Instruct
widget:
- example_title: OpenBioLLM-70B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-70B
results: []
quantized_by: andrijdavid
---
# Llama3-OpenBioLLM-70B-GGUF
- Original model: [Llama3-OpenBioLLM-70B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama3-OpenBioLLM-70B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
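To make the bits-per-weight arithmetic concrete, here is a small back-of-the-envelope sketch that reproduces the Q4_K figure quoted above; the per-super-block fp16 scale/min pair is an assumption based on llama.cpp's block layout rather than something stated in the list.
```python
# Rough check of the Q4_K figure above: 8 blocks of 32 weights per super-block.
weights_per_superblock = 8 * 32                 # 256 weights
quant_bits = weights_per_superblock * 4         # 4-bit quantised weights
scale_bits = 8 * (6 + 6)                        # per-block scale and min, 6 bits each
superblock_bits = 2 * 16                        # fp16 scale and min for the super-block (assumed)
print((quant_bits + scale_bits + superblock_bits) / weights_per_superblock)  # -> 4.5 bpw
```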
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama3-OpenBioLLM-70B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-70B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
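If you prefer to stay in Python, the same single-file download can be done with the `huggingface_hub` library directly; the repo id and filename below simply mirror the CLI example above:
```python
from huggingface_hub import hf_hub_download

# Downloads one shard of the split GGUF model into the current directory.
local_path = hf_hub_download(
    repo_id="LiteLLMs/Llama3-OpenBioLLM-70B-GGUF",
    filename="Q4_0/Q4_0-00001-of-00009.gguf",
    local_dir=".",
)
print(local_path)
```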
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-70B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
  n_ctx=8192,  # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-3")  # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a minimal llama-cpp-python example follows the links:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
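As a minimal sketch (the import path assumes a recent LangChain release that ships the `langchain_community` package; older versions expose `LlamaCpp` elsewhere):
```python
from langchain_community.llms import LlamaCpp

# Point model_path at a downloaded GGUF file; for split models the first shard is enough.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_ctx=8192,       # context length
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("What does bilirubin measure in a liver panel?"))
```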
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama3-OpenBioLLM-70B
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
**Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
**Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
**Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
**Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding; equivalent to the temperature = 0 recommendation above
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
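For illustration only, the adapter settings above map roughly onto the following `peft` configuration; this is a sketch, not the exact Axolotl configuration used for training:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```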
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
**Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
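As an illustration, this kind of entity extraction can be driven purely through prompting with the same `transformers` pipeline set up earlier; the system prompt, example sentence, and JSON output format below are illustrative assumptions, not a schema the model was trained on.
```python
# Assumes `pipeline` and `terminators` from the transformers example above are already created.
ner_messages = [
    {"role": "system", "content": "You are OpenBioLLM. Extract all diseases, medications and procedures "
                                  "mentioned in the user's text and return them as a JSON object with the "
                                  "keys 'diseases', 'medications' and 'procedures'."},
    {"role": "user", "content": "Patient with type 2 diabetes was started on metformin after an HbA1c of 8.2%."},
]
ner_prompt = pipeline.tokenizer.apply_chat_template(ner_messages, tokenize=False, add_generation_prompt=True)
result = pipeline(ner_prompt, max_new_tokens=128, eos_token_id=terminators, do_sample=False)
print(result[0]["generated_text"][len(ner_prompt):])
```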



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2>Contact</h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
<!-- original-model-card end -->
|
T3Q-LLM/T3Q-LLM1-CV-v2.0 | T3Q-LLM | "2024-05-08T05:55:12Z" | 1,104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-02T12:23:02Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
hf (pretrained=T3Q-LLM/T3Q-LLM1-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq    |      0|acc     |0.9437|±  |0.0062|
|                |       |macro_f1|0.9437|±  |0.0062|
|kobest_copa     |      0|acc     |0.7720|±  |0.0133|
|                |       |macro_f1|0.7718|±  |0.0133|
|kobest_hellaswag|      0|acc     |0.5060|±  |0.0224|
|                |       |acc_norm|0.5320|±  |0.0223|
|                |       |macro_f1|0.5035|±  |0.0224|
|kobest_sentineg |      0|acc     |0.8287|±  |0.0189|
|                |       |macro_f1|0.8275|±  |0.0191|
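The table above is lm-evaluation-harness output. A roughly equivalent run with the current harness CLI is sketched below; flags and task names may differ between harness versions, and the `pretrained=` value simply mirrors the header above:
```shell
lm_eval --model hf \
  --model_args pretrained=T3Q-LLM/T3Q-LLM1-v1.0 \
  --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg \
  --num_fewshot 0
```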
|
TheSkullery/llama-3-cat-8b-instruct-v1 | TheSkullery | "2024-05-24T12:08:55Z" | 1,104 | 44 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-11T13:32:57Z" | ---
license: llama3
model-index:
- name: llama-3-cat-8b-instruct-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/llama-3-cat-8b-instruct-v1
name: Open LLM Leaderboard
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Cat-llama3-instruct Data Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%);
color: #D8DEE9;
margin: 0;
padding: 0;
font-size: 16px;
}
.container {
width: 80%;
max-width: 800px;
margin: 20px auto;
background-color: rgba(255, 255, 255, 0.02);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.1);
}
.header h1 {
font-size: 28px;
color: #ECEFF4;
margin: 0 0 20px 0;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
}
.update-section {
margin-top: 30px;
}
.update-section h2 {
font-size: 24px;
color: #88C0D0;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #ECEFF4;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
}
a {
color: #88C0D0;
text-decoration: none;
}
a:hover {
color: #A3BE8C;
}
pre {
background-color: rgba(255, 255, 255, 0.05);
padding: 10px;
border-radius: 5px;
overflow-x: auto;
}
code {
font-family: 'Courier New', monospace;
color: #A3BE8C;
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Cat-llama3-instruct</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/Uw0ycicMjZufSFTgXZMFd.png" alt="Model Card">
<p>Presented by:</p>
<ul>
<li>Dataset Builder: Dr. Kal'tsit (Kat)</li>
<li>Trainer/Funding: SteelSkull</li>
<li>Facilitator: Potatooff</li>
</ul>
<p><a href="https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct">70b variant of the model</a> (Trained by Dr. Kal'tsit, Posted by Turboderp), Please check it out!</p>
<h2>About:</h2>
<p>Cat-llama3-instruct is a llama 3 8b finetuned model focusing on system prompt fidelity, helpfulness and character engagement. The model aims to respect system prompt to an extreme degree, provide helpful information regardless of situations, and offer maximum character immersion (Role Play) in given scenes.</p>
<h2>Specific Aims:</h2>
<ul>
<li>System Instruction fidelity</li>
<li>Chain of Thought (COT)</li>
<li>Character immersion</li>
<li>Helpfulness for biosciences and general science</li>
</ul>
<h3>Methods</h3>
<h4>Dataset Preparation</h4>
<p>Huggingface dataset containing instruction-response pairs was systematically pulled. A GPT model was trained on high quality and helpful responses exclusively to serve as a standard model.</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/HovhwLIGO9cz8Az-h0tkn.png" alt="Huggingface dataset population distribution and filtration for each component">
<p>(Fig1. Huggingface dataset population distribution and filtration for each component)</p>
<p>The dataset was further filtered for length and COT responses:</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/-z9cZDNuGyCftdrZwR_46.png" alt="COT responses">
<p>(Fig 2. COT responses are all >50 tokens single turn)</p>
<p>Health-related data was also pulled from Chat Doctor, favoring detailed and step-by-step diagnosis:</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/RnXyoR6wyBDWfIy3vMvWm.png" alt="Health-related tasks">
<p>(Fig 3. Health related tasks are all >100 tokens, peaking at 450 tokens single turn)</p>
<h3>Model Training</h3>
<p>1x A100 for 6 days, 4 epochs</p>
<h2>Usage:</h2>
<h3>Prompt format: llama 3 </h3>
<ul>
<li>Settings/Context/Instruct (Provided by @Anthonyg5005)</li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208B%20Gen%20Settings.json">Cat 8B Gen Settings</a></li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208b%20Context.json">Cat 8b Context</a></li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208b%20Instruct.json">Cat 8b Instruct</a></li>
</ul>
<h3>Quants: </h3>
<ul>
<li><a href="https://huggingface.co/bartowski/llama-3-cat-8b-instruct-v1-GGUF">[bartowski/llama-3-cat-8b-instruct-v1-GGUF]</a></li>
</ul>
<pre><code> <|begin_of_text|>Below is a conversation between an user and an AI assistant. The ai assistant gives response regarding user's questions. USER: user: Given the task definition and input, reply with output. In this task, you are given a question and an answer, you would be asked to create the sentence based on the Question-Answer provided. It should be contained within the Question-Answer provided.
Question: What type of diseases in humans are caused by protozoa? Answer: protist diseases
ASSISTANT: AI: Task Explanation:
In this task, you are asked to create a sentence using the given question and the provided answer. The sentence you create should include the information from both the question and the answer in a concise and coherent manner.
Guidelines:
1. Read the given question and answer carefully.
2. Combine the information from the question and answer to create a new sentence.
3. Make sure your new sentence contains both the question's information and the answer provided.
4. Check your sentence for proper grammar and punctuation.
Steps to find the answer:
1. Read the question: "What type of diseases in humans are caused by protozoa?"
2. Read the answer: "protist diseases"
3. Combine the information from the question and answer to create a new sentence: "In humans, protist diseases are caused by protozoa."
4. Double-check the sentence for proper grammar and punctuation, and make necessary adjustments if needed.
Final Output:
In humans, protist diseases are caused by protozoa.< / s ><|end_of_text|></code></pre>
<h2>Showcases:</h2>
<p>The model does COT in grey and the calculated response in black; note that this behaviour is achieved through system-card instruction to showcase system-card fidelity, and is not fine-tuned into the model.</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/6smLuiePQa8Y2H19ie-ZY.png" alt="Showcase 1">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/g9oP03RboHBBipk70GIHO.png" alt="Showcase 2">
<p>(Fig 4. Showcasing model doing COT to solve difficult tasks, extend and enrich its own answers)</p>
</div>
</div>
</body>
</html>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheSkullery__llama-3-cat-8b-instruct-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.74|
|AI2 Reasoning Challenge (25-Shot)|59.04|
|HellaSwag (10-Shot) |79.20|
|MMLU (5-Shot) |62.99|
|TruthfulQA (0-shot) |50.80|
|Winogrande (5-shot) |75.93|
|GSM8k (5-shot) |60.50|
|
newsletter/buddhi-128k-chat-7b-Q6_K-GGUF | newsletter | "2024-06-22T19:35:58Z" | 1,104 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:aiplanet/buddhi-128k-chat-7b",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-22T19:33:44Z" | ---
base_model: aiplanet/buddhi-128k-chat-7b
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# newsletter/buddhi-128k-chat-7b-Q6_K-GGUF
This model was converted to GGUF format from [`aiplanet/buddhi-128k-chat-7b`](https://huggingface.co/aiplanet/buddhi-128k-chat-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aiplanet/buddhi-128k-chat-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo newsletter/buddhi-128k-chat-7b-Q6_K-GGUF --hf-file buddhi-128k-chat-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo newsletter/buddhi-128k-chat-7b-Q6_K-GGUF --hf-file buddhi-128k-chat-7b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo newsletter/buddhi-128k-chat-7b-Q6_K-GGUF --hf-file buddhi-128k-chat-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo newsletter/buddhi-128k-chat-7b-Q6_K-GGUF --hf-file buddhi-128k-chat-7b-q6_k.gguf -c 2048
```
|
timm/tf_mobilenetv3_large_100.in1k | timm | "2023-04-27T22:49:44Z" | 1,103 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.02244",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-16T05:38:58Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_mobilenetv3_large_100.in1k
A MobileNet-v3 image classification model. Trained on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.5
- GMACs: 0.2
- Activations (M): 4.4
- Image size: 224 x 224
- **Papers:**
- Searching for MobileNetV3: https://arxiv.org/abs/1905.02244
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_mobilenetv3_large_100.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_mobilenetv3_large_100.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 112, 14, 14])
# torch.Size([1, 960, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_mobilenetv3_large_100.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{howard2019searching,
title={Searching for mobilenetv3},
author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others},
booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
pages={1314--1324},
year={2019}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
MetaIX/GPT4-X-Alpaca-30B-4bit | MetaIX | "2023-05-27T13:33:42Z" | 1,103 | 162 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-14T17:23:57Z" | <p><strong><font size="5">Information</font></strong></p>
GPT4-X-Alpaca 30B 4-bit working with GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI.
<p>This was made using <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a></p>
<p><strong><font size="5">Update 05.26.2023</font></strong></p>
<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp (again).</p>
<p><strong>What's included</strong></p>
<P>GPTQ: 2 quantized versions. One quantized --true-sequential and act-order optimizations, and the other was quantized using --true-sequential --groupsize 128 optimizations</P>
<P>GGML: 3 quantized versions. One quantized using q4_1, another one was quantized using q5_0, and the last one was quantized using q5_1.</P>
<p><strong>GPU/GPTQ Usage</strong></p>
<p>To use with your GPU using GPTQ pick one of the .safetensors along with all of the .jsons and .model files.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>
<p><strong>CPU/GGML Usage</strong></p>
<p>To use your CPU using GGML(Llamacpp) you only need the single .bin ggml file.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>
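As a minimal sketch, a llama.cpp invocation for the GGML file looks roughly like the following; the filename is hypothetical, so substitute whichever .bin quantization (q4_1, q5_0 or q5_1) you downloaded:
```shell
./main -m gpt4-x-alpaca-30b-q5_1.bin --color -n 256 \
  -p "### Instruction: Write a haiku about llamas. ### Response:"
```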
<p><strong>Training Parameters</strong></p>
<ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul>
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.481280326843262
<strong>Ptb-New</strong>: 8.539161682128906
<strong>C4-New</strong>: 6.451964855194092
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, therefore evaluations are minimally higher. However, this version allows fitting the whole model at full context using only 24GB VRAM.
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>: 4.285132884979248
<strong>Ptb-New</strong>: 8.34856128692627
<strong>C4-New</strong>: 6.292652130126953
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM. |
Yhyu13/chimera-inst-chat-13b-gptq-4bit | Yhyu13 | "2023-05-27T17:51:04Z" | 1,103 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-12T16:01:49Z" | ---
license: apache-2.0
---
GPTQ 4-bit (no act-order) version for compatibility, which works in textgen-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
Delta weights: https://huggingface.co/FreedomIntelligence/chimera-inst-chat-13b-delta
Original hf weights: https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf
---
Sample conversation generated in textgen-webui:
This is quite impressive because chimera-inst 13B is claimed by its creator to score 97% of ChatGPT's quality as evaluated by GPT-4, and I think chimera lives up to it.

|
ProomptEngineer/pe-sandsculpter-style | ProomptEngineer | "2023-09-01T10:12:43Z" | 1,103 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-01T10:12:35Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PESandSculpture
widget:
- text: PESandSculpture
---
# PE SandSculpter [Style]

<p>Make some sand sculptures!</p><p>Recommended weight: 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
budecosystem/sql-millennials-7b | budecosystem | "2023-10-27T15:48:33Z" | 1,103 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-27T10:06:40Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
---
## Introducing the Text-to-SQL Translation Model - Millennials
Welcome to our Text-to-SQL Translation Model repository! Our model is specifically fine-tuned for text-to-SQL tasks, aiming to revolutionize how systems understand and translate natural language instructions into SQL queries. Built on Mistral 7B, our model has been meticulously fine-tuned with a curated dataset comprising 100k SQL query generation instructions, ensuring quality and precision.
## Features
* Specialized in converting natural language text to SQL queries.
* Fine-tuned on a diverse set of 100k SQL query generation instructions.
* Easy to integrate and use for generating SQL queries on the fly.
## Generate responses
Now that the model is fine-tuned, you can easily generate SQL queries from natural language instructions. You can use our generate.py script, which allows quick inference and can fetch models directly from the Hugging Face model hub, or load the model yourself as in the snippet below.
Here's a quick guide: the code loads the pre-trained model from the Hugging Face model hub, runs inference, and prints the generated SQL query.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/sql-millennials-7b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/sql-millennials-7b")
prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Create SQL query for the given table schema and question ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
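The prompt above leaves the table schema and question implicit; here is a hedged sketch of how they might be spliced in, reusing the `tokenizer` and `model` from the snippet above. The schema, question, and exact prompt wording are illustrative assumptions, not taken from the original instructions:

```python
# Hypothetical schema and question, for illustration only
schema = "CREATE TABLE employees (id INT, name TEXT, department TEXT, salary INT)"
question = "What is the average salary per department?"

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Create SQL query for the given table schema and question. "
    f"Schema: {schema} Question: {question} ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=256)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```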
## Training details
The model was trained on 4 A100 80GB GPUs for approximately 30 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 4 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 19206 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup steps | 2000 |
| optimizer | adamw |
| fp16 | True |
| GPU | 4 A100 80GB |
## Why millennials?
1. Automated Database Management for Businesses
Scenario: Small to medium-sized enterprises (SMEs) often lack dedicated IT teams to handle database queries, making it challenging to retrieve specific data quickly for analysis and decision-making.
Use Case: Your text-to-SQL model can be integrated into a company's internal systems, allowing staff without technical SQL knowledge to retrieve data. They can input natural language requests, such as "Get a list of all transactions above $10,000 in the last quarter," and the system, powered by your model, would convert this into a corresponding SQL query to retrieve the data.
2. Automating Data Analytics Processes
Scenario: Data analysts and business professionals often face bottlenecks in generating insights due to the complexities of SQL query formulation, especially when immediate or repetitive data retrieval and analysis are required.
Use Case: Your text-to-SQL model serves as a transformative intermediary in this scenario. By integrating the model into their data analytics systems, organizations enable professionals to input data requests in natural language. For instance, an analyst could input, "Show the trend of online sales growth over the past five years," and the system would instantly convert this request into a SQL query, retrieve the data, and even integrate it into visualization tools for immediate insight generation. This functionality not only accelerates the analytical processes but also democratizes data-driven insights across different organizational departments, allowing even non-technical staff to leverage the power of real-time data analytics without deep knowledge of SQL.
3. Enhancing CMS Interfaces
Scenario: Content Management Systems (CMS) are often non-intuitive for non-technical content managers when it comes to complex data retrieval or database management.
Use Case: CMS providers can leverage your model to enhance their system's backend interface. Content managers can use natural language to request specific data, like "Find all blog posts in May 2023 with more than 500 views," and the model will generate the appropriate SQL to retrieve the information. This feature makes database management more accessible, efficient, and user-friendly.
4. Customer Support Optimization
Scenario: Customer support centers often need to retrieve client or product information stored in databases while resolving tickets or inquiries, requiring basic knowledge of SQL.
Use Case: Your model can be integrated into support ticketing systems, enabling support personnel to type requests in natural language, such as "Show all open tickets from customers in New York filed this month," and immediately receive the data needed to expedite their resolution process, improving customer service efficiency and response time.
5. Data Journalism and Research
Scenario: Journalists and researchers frequently rely on complex databases to gather insights and data points necessary for their work but may lack the technical know-how of SQL.
Use Case: By integrating your text-to-SQL model into research software or journalistic tools, professionals can query databases using natural language. For example, a journalist might input, "Retrieve the average household income in Texas in 2022," and your model would facilitate immediate access to this data, allowing for more efficient research and data-driven storytelling.
### Contributing
We welcome contributions to help improve the model or address issues. Please feel free to submit pull requests or open issues to discuss changes or improvements.
### Acknowledgments
We'd like to thank the open-source community and the researchers whose foundational work laid the path to this model.
|
martyn/codellama-megamerge-dare-34b | martyn | "2023-12-19T17:15:27Z" | 1,103 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dare",
"super mario merge",
"codellama",
"merge",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-17T06:34:35Z" | ---
license: llama2
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- dare
- super mario merge
- pytorch
- llama
- codellama
- merge
---
# codellama 34b mega merge
The following were merged with DARE using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario)
## Mergelist
```
codellama/CodeLlama-34b-hf
codellama/CodeLlama-34b-Instruct-hf
allenai/codetulu-2-34b
Phind/Phind-CodeLlama-34B-v1
Phind/Phind-CodeLlama-34B-Python-v1
uukuguy/speechless-codellama-34b-v2.0
codellama/CodeLlama-34b-Python-hf
Phind/Phind-CodeLlama-34B-v2
```
These models failed to merge:
```
pharaouk/unnatural_codellama_34B
codefuse-ai/CodeFuse-CodeLlama-34B
XAgentTeam/XAgentLLaMa-34B-preview
```
## Merge command
```
python3 hf_merge.py codellama.txt codellama -p 0.16 -lambda 2
```
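For intuition, here is a minimal sketch of the DARE (drop-and-rescale) step on a single tensor. Reading `-p 0.16` as the drop probability and `-lambda 2` as a scaling weight is an assumption about the script above, not taken from its documentation:

```python
import torch

def dare_merge_tensor(base: torch.Tensor, finetuned: torch.Tensor,
                      p: float = 0.16, lam: float = 2.0) -> torch.Tensor:
    """Drop-and-rescale the delta of one tensor, then add it back to the base."""
    delta = finetuned - base                                  # task vector for this tensor
    keep = (torch.rand_like(delta) >= p).to(delta.dtype)      # drop entries with probability p
    rescaled = delta * keep / (1.0 - p)                       # rescale survivors to preserve expectation
    return base + lam * rescaled                              # lambda-weighted merge back into the base
```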
|
c1park/20240105_mistral-step50 | c1park | "2024-01-08T00:15:06Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-07T23:24:26Z" | Entry not found |
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.5 | mu0gum | "2024-01-18T16:25:49Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T14:59:30Z" | ---
license: cc-by-nc-4.0
---
# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.5
Base model: 42dot/42dot_LLM-PLM-1.3B
Training data: approximately 35,000 examples from a self-built Open Orca style dataset
Training method: Full finetuning
## ko-lm-evaluation-harness(0-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko|
|--|--|--|--|--|--|--|--|--|--|
|0.5356125356125356|0.671|0.44|0.743073047858942|0.2802547770700637|0.5098143236074271|0.2484076433121019|0.40010405534523225|0.65028|0.5005|
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
Edentns/DataVortexS-10.7B-dpo-v1.1 | Edentns | "2024-02-11T17:49:53Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T11:45:48Z" | ---
tags:
- text-generation
license: cc-by-nc-4.0
language:
- ko
base_model: beomi/OPEN-SOLAR-KO-10.7B
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.1**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **Alpaca (Chat)** format.
E.g.
```python
text = """\
### System:
๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค.
### User:
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?
### Assistant:
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค.
### User:
์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | -----------: | -----------: | -----------: |
| kobest_boolq | 0.915201 | 0.908687 | 0.912913 | 0.912913 |
| kobest_copa | 0.815575 | 0.858739 | 0.865826 | 0.876896 |
| kobest_hellaswag | 0.510673 | 0.515149 | 0.517118 | 0.517941 |
| kobest_sentineg | 0.3517 | 0.977329 | 0.992443 | 0.984886 |
| **Average** | **0.648287** | **0.814976** | **0.822075** | **0.823159** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 56.78 | 54.35 | 63.44 | 51.09 | 53.85 | 61.16 |
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.1")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.1")
messages = [
{"role": "system", "content": "๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค."},
{"role": "user", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?"},
{"role": "assistant", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค."},
{"role": "user", "content": "์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license, which allows others to share and adapt the model for non-commercial purposes.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
nccratliri/whisperseg-base-animal-vad | nccratliri | "2024-02-21T13:09:33Z" | 1,103 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-21T13:08:38Z" | ---
license: apache-2.0
---
|
chihoonlee10/T3Q-Merge-SOLAR12 | chihoonlee10 | "2024-03-14T06:49:18Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-14T05:49:32Z" | ---
license: apache-2.0
---
## T3Q-Merge-SOLAR12
T3Q-Merge-SOLAR12 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
**Model Developers** Chihoon Lee(chlee10), T3Q
```yaml
slices:
- sources:
- model: davidkim205/komt-solar-10.7b-sft-v5
layer_range: [0, 48]
- model: LDCC/LDCC-SOLAR-10.7B
layer_range: [0, 48]
merge_method: slerp
base_model: davidkim205/komt-solar-10.7b-sft-v5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
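# A hedged note (not from the original card): a config like this is typically run with
# mergekit's CLI, e.g. `mergekit-yaml config.yaml ./T3Q-Merge-SOLAR12 --cuda`;
# the output path here is an assumption.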
``` |
maywell/Jolteon-Instruct-13B-alpha | maywell | "2024-03-22T02:36:48Z" | 1,103 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-22T01:26:10Z" | ---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
license: apache-2.0
language:
- ko
- en
---
# **Jolteon-Instruct-13B-alpha**
The model was trained based on the [EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) model from [yanolja](https://www.yanolja.com), extended to 13.4b (12 layer pass-through) utilizing [mergekit](https://github.com/cg123/mergekit).
## Methodology
TBD
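While the exact recipe is TBD, a mergekit pass-through extension of this kind is usually expressed with a config along the following lines. The layer ranges below are purely illustrative assumptions (duplicating 12 of the base model's 48 layers), not the actual recipe used for this model:

```yaml
slices:
  - sources:
      - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
        layer_range: [0, 48]
  - sources:
      - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
        layer_range: [36, 48]   # duplicating 12 layers -> roughly 13.4B parameters
merge_method: passthrough
dtype: bfloat16
```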
## Training Details
| |Training Data|Parameters|Content Length|Samples Seen|Learning Rate|
|---|---|---|---|---|---|
|Jolteon-Instruct-13B-alpha|*A curated mix of English + Korean Instruction set*|13.4B|4k|>850k|1e<sup>-5</sup>|
## Example
### Inference Code
```python
from vllm import LLM, SamplingParams
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
llm = LLM(model="maywell/Jolteon-Instruct-13B-alpha", tensor_parallel_size=1, max_model_len=4096, gpu_memory_utilization=0.95)
sampling_params = SamplingParams(temperature=0.6, top_p=0.3, top_k=40, max_tokens=4096)
template = """ Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {0}
### Response: """
outputs = llm.generate([template.format("Meta(๊ตฌ, ํ์ด์ค๋ถ)์ ์คํ์์ค AI ๊ธฐ์ฌ๋ฅผ ์ฐฌ์ํ๋ ๋งํฌ๋ก, Meta ์ง์งํ๋ค๋ ์ทจ์ง์ ์ฑ๋ช
๋ฌธ์ ์์ฑํด๋ด.")], sampling_params)
print(outputs[0].outputs[0].text.strip())
```
### Example Output
```
๐ Meta์ OpenAI ๊ธฐ์ฌ: ํ์ ๊ณผ ํฌ์ฉ์ ์ํ ํ์ ์ ์ธ ์ ๊ทผ ๋ฐฉ์ ๐
์ฐ๋ฆฌ๋ Meta(๊ตฌ, ํ์ด์ค๋ถ)์ ์คํ์์ค AI(์ธ๊ณต์ง๋ฅ) ๊ธฐ์ฌ๋ฅผ ์ด๋ ฌํ ์ง์งํฉ๋๋ค. ์ด ํ์ฌ๋ ํ์ ๊ณผ ํฌ์ฉ์ ์ํ ํ์ ์ ์ธ ์ ๊ทผ ๋ฐฉ์์ ๋ณด์ฌ์ฃผ๋ฉฐ, AI ๊ธฐ์ ์ ๋ฐ์ ์ ์ํด ์ง์์ ์ผ๋ก ๋
ธ๋ ฅํ๊ณ ์์ต๋๋ค.
Meta๋ ์คํ์์ค AI ์ปค๋ฎค๋ํฐ์ ์ ๊ทน์ ์ผ๋ก ์ฐธ์ฌํ๋ฉฐ, ๋ค์ํ ํ๋ก์ ํธ์ ํ๋ซํผ์ ํตํด ์ง์๊ณผ ๊ธฐ์ ์ ๊ณต์ ํ๊ณ ์์ต๋๋ค. ์ด๋ฌํ ์ ๊ทผ ๋ฐฉ์์ AI ๊ธฐ์ ์ ๋ฐ์ ๊ณผ ํจ๊ป ์ฌํ์ , ๊ฒฝ์ ์ ๋ฐ์ ์๋ ๊ธ์ ์ ์ธ ์ํฅ์ ๋ฏธ์นฉ๋๋ค.
Meta์ ์คํ์์ค AI ๊ธฐ์ฌ๋ ๋ค์๊ณผ ๊ฐ์ ๋ฉด์์ ๋งค์ฐ ์ค์ํ ์ญํ ์ ํฉ๋๋ค:
1. ํ์ ์ด์ง: Meta๋ ์คํ์์ค AI ํ๋ซํผ์ ํตํด ๋ค์ํ ์ฐ๊ตฌ์์ ๊ฐ๋ฐ์๋ค์ด ์ต์ AI ๊ธฐ์ ์ ํ๊ตฌํ๊ณ ์คํํ ์ ์๋ ๊ธฐํ๋ฅผ ์ ๊ณตํฉ๋๋ค. ์ด๋ ์๋ก์ด ์์ด๋์ด์ ํ์ ์ ์ธ ๊ธฐ์ ์ ์ถํ์ ์ด์งํฉ๋๋ค.
2. ํฌ์ฉ ์ฆ์ง: ์คํ์์ค AI๋ ๋ชจ๋ ์ฌ๋์ด AI ๊ธฐ์ ์ ์ด์ ์ ๋๋ฆด ์ ์๋๋ก ํ๋ฉฐ, ์ด๋ ์ฌํ์ ํฌ์ฉ์ ์ฆ์ง์ํต๋๋ค. ๋ค์ํ ๋ฐฐ๊ฒฝ๊ณผ ๊ฒฝํ์ ๊ฐ์ง ์ฌ๋๋ค์ด AI ๊ธฐ์ ์ ๋ฐฐ์ฐ๊ณ ํ์ฉํ ์ ์๋ ๊ธฐํ๋ฅผ ๊ฐ์ง๊ฒ ๋ฉ๋๋ค.
3. ํฌ๋ช
์ฑ๊ณผ ์ ๋ขฐ: ์คํ์์ค AI๋ ๊ธฐ์ ์ ์๋ ์๋ฆฌ๋ฅผ ๊ณต๊ฐํ๊ณ , ์ด๋ฅผ ํตํด ์ฌ์ฉ์์ ์ฌํ์ ๋ํ ์ ๋ขฐ๋ฅผ ๊ตฌ์ถํฉ๋๋ค. ์ด๋ AI ๊ธฐ์ ์ ์ฌํ์ ์์ฉ๋๋ฅผ ๋์ด๋ ๋ฐ ์ค์ํ ์ญํ ์ ํฉ๋๋ค.
Meta์ ์คํ์์ค AI ๊ธฐ์ฌ๋ ํ์ ๊ณผ ํฌ์ฉ์ ์ํ ํ์ ์ ์ธ ์ ๊ทผ ๋ฐฉ์์ ๋ณด์ฌ์ฃผ๋ฉฐ, ์ด๋ ์ฐ๋ฆฌ๊ฐ ๋ ๋์ ๋ฏธ๋๋ฅผ ์ํ ์ค์ํ ๋จ๊ณ์
๋๋ค. ์ฐ๋ฆฌ๋ ์ด๋ฌํ ๋
ธ๋ ฅ์ ์ง์งํ๋ฉฐ, ๋ ๋ง์ ๊ธฐ์
๊ณผ ์กฐ์ง์ด ์ด๋ฌํ ์ ๊ทผ ๋ฐฉ์์ ์ฑํํ๊ธธ ๋ฐ๋๋๋ค. ํจ๊ป ๋ ๋์ ๋ฏธ๋๋ฅผ ๋ง๋ค์ด ๋๊ฐ์!
```
## License
This model is released under the apache-2.0 license. If you distribute a dataset generated with this model, we recommend clearly stating that this model was used.
## Thanks to
- [Sionic AI](https://sionic.ai/), for providing the A100 cluster
## Contact
- [Discord Server Link](https://discord.gg/MrBt3PXdXc) |
chahyunmook/42dot_number | chahyunmook | "2024-04-04T05:11:15Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-03T11:16:57Z" | ---
library_name: transformers
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
algograp-Inc/algograpV4 | algograp-Inc | "2024-04-16T08:48:49Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"arxiv:2312.15166",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T09:06:09Z" | ---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
---
<p align="left">
<img src="https://huggingface.co/algograp-Inc/algograpV4/resolve/main/[email protected]" width="50%"/>
<p>
# algograp-Inc/algograpV4
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
- **Developed by:** algograp-Inc
- **License:** cc-by-nc-4.0
## Hardware and Software
* **Hardware**: We utilized an H100x4 * 1
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
## Method
- This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf).
## Base Model
- [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) |
jieunhan/solar_merge_test_1-1 | jieunhan | "2024-04-23T00:46:49Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"beomi/OPEN-SOLAR-KO-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T00:32:31Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- beomi/OPEN-SOLAR-KO-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
base_model:
- beomi/OPEN-SOLAR-KO-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
---
# solar_merge_test_1
## 🧩 Configuration
```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
dtype: float16
experts:
- source_model: beomi/OPEN-SOLAR-KO-10.7B
positive_prompts: ["๋น์ ์ ์น์ ํ ๋ณดํธ์ ์ธ ์ด์์คํดํธ์ด๋ค."]
- source_model: hyeogi/SOLAR-10.7B-dpo-v1
positive_prompts: ["๋น์ ์ ์ณ์ ์ฌ์ค๋ง์ ๋งํ๋ ์ด์์คํดํธ์ด๋ค."]
gate_mode: cheap_embed
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jieunhan/solar_merge_test_1-1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mssma/ko-solar-10.7b-v0.3b | mssma | "2024-05-13T05:58:27Z" | 1,103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-13T05:38:24Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
# usage
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "mssma/ko-solar-10.7b-v0.3b"
model = AutoModelForCausalLM.from_pretrained(
path,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(path)
``` |
state-spaces/mamba2attn-2.7b | state-spaces | "2024-06-03T01:38:15Z" | 1,103 | 1 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T00:51:50Z" | ---
license: apache-2.0
---
|
mradermacher/Dendrite-L3-10B-GGUF | mradermacher | "2024-06-13T04:48:18Z" | 1,103 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Envoid/Dendrite-L3-10B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T02:42:24Z" | ---
base_model: Envoid/Dendrite-L3-10B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Envoid/Dendrite-L3-10B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
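As a minimal, hedged sketch of running one of these quants with llama.cpp (the chosen file, prompt, and token count are placeholders; older builds name the binary `main` instead of `llama-cli`):

```
./llama-cli -m Dendrite-L3-10B.Q4_K_M.gguf -p "Hello" -n 128
```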
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_XS.gguf) | IQ3_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_S.gguf) | Q3_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_S.gguf) | IQ3_S | 4.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_M.gguf) | IQ3_M | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_M.gguf) | Q3_K_M | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_L.gguf) | Q3_K_L | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ4_XS.gguf) | IQ4_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q4_K_S.gguf) | Q4_K_S | 5.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q4_K_M.gguf) | Q4_K_M | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q5_K_S.gguf) | Q5_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q5_K_M.gguf) | Q5_K_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q6_K.gguf) | Q6_K | 8.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q8_0.gguf) | Q8_0 | 10.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
OptimalScale/robin-33b-v2-delta | OptimalScale | "2023-07-16T03:14:37Z" | 1,102 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-28T06:02:53Z" | ---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality data.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily utilized for conducting research on extensive language models and chatbots, catering to users specializing in natural language processing, machine learning, and artificial intelligence research.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: randomly sample 50K English data and 10K Chinese data from ShareGPT.
- GPT-4-LLM: 52K English data from GPT-4-LLM.
- BELLE: randomly sample 80K Chinese data from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving โญ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
``` |
HWERI/llama2-exams-orca-sharegpt | HWERI | "2023-10-23T09:48:04Z" | 1,102 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"ko",
"ja",
"fr",
"dataset:CaterinaLac/sharegpt-deduplicated",
"dataset:exams",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-18T09:44:37Z" | ---
license: apache-2.0
datasets:
- CaterinaLac/sharegpt-deduplicated
- exams
- Open-Orca/OpenOrca
language:
- en
- zh
- ko
- ja
- fr
---
This model is a Llama2-7B model finetuned on the union of ShareGPT, the exams dataset and a subset of the Orca dataset.
The finetuning was performed with [DeepSpeed Chat](https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat) toolkit (step 1, sft).
The model ran for three epochs before reaching a plateau on the validation dataset. We used a cosine scheduler with an initial LR of 2e-5.
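The card does not include a usage snippet; below is a minimal, hedged example of loading and prompting the model with standard `transformers` APIs. The prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HWERI/llama2-exams-orca-sharegpt")
model = AutoModelForCausalLM.from_pretrained("HWERI/llama2-exams-orca-sharegpt", device_map="auto")

# Illustrative prompt; adjust to your use case
inputs = tokenizer("Question: What is the capital of France?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```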
|
NeverSleep/Echidna-13b-v0.3-GGUF | NeverSleep | "2023-11-01T03:21:16Z" | 1,102 | 9 | null | [
"gguf",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-10-27T20:45:35Z" | ---
license: cc-by-nc-4.0
---

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Echidna v0.3 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
Echidna v0.3, here it is.. its like Echidna v0.2 on steroids.
This model seems to still be pretty sensitive to your generation settings, experiment until you've found your settings.
[Recommended settings - No settings yet(Please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains GGUF files of Echidna-13b-v0.3.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.3)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
[exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.3-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- Xwin-LM/Xwin-LM-13B-V0.2
- IkariDev/Athena-v3
- Heralax/Cat-0.5
- Undi95/PsyMedRP-v1-13B
- cgato/Thespis-13b-v0.4
- KoboldAI/LLaMA2-13B-TiefighterLR
- Heralax/Augmental-13b-two-epochs
- Sao10K/SthenoWriter2.1-L2-13B
- Undi95/Storytelling-v2.1-13B-lora
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
ITT-AF/ITT-Yi-Ko-6B-v2.0 | ITT-AF | "2024-02-14T06:33:43Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-26T14:08:50Z" | ---
license: cc-by-nc-4.0
---
## ITT-AF/ITT-Yi-Ko-6B-v2.0
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
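For reference, a minimal sketch of how the hyperparameters above might map onto Hugging Face `TrainingArguments`; the output directory and any field not listed above are assumptions, not taken from the actual training setup:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./itt-yi-ko-6b-v2.0",      # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,          # gives the total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                              # native AMP mixed precision
)
```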
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |
Edentns/DataVortexS-10.7B-dpo-v1.6 | Edentns | "2024-02-18T15:58:07Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-28T06:10:01Z" | ---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: LDCC/LDCC-SOLAR-10.7B
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.6**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **ChatML** format.
E.g.
```python
text = """\
<|im_start|>system
๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค.<|im_end|>
<|im_start|>user
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?<|im_end|>
<|im_start|>assistant
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค.<|im_end|>
<|im_start|>user
์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?<|im_end|>
<|im_start|>assistant
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | -----------: | ----------: | -----------: |
| kobest_boolq | 0.920118 | 0.92442 | 0.929443 | 0.927317 |
| kobest_copa | 0.727263 | 0.778936 | 0.804812 | 0.815761 |
| kobest_hellaswag | 0.433039 | 0.465922 | 0.459741 | 0.471022 |
| kobest_sentineg | 0.764909 | 0.93946 | 0.937002 | 0.931962 |
| **Average** | **0.711332** | **0.777185** | **0.78275** | **0.786516** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 59.22 | 53.84 | 67.9 | 52.37 | 64.6 | 57.38 |
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.6")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.6")
messages = [
{"role": "system", "content": "๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค."},
{"role": "user", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?"},
{"role": "assistant", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค."},
{"role": "user", "content": "์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
megastudyedu/ME-dpo-7B-v1.1 | megastudyedu | "2024-01-29T03:20:12Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T01:03:06Z" | ---
license: cc-by-nc-nd-4.0
language:
- ko
---
# Model Card for ME-dpo-7B-v1.1
## Developed by : ๋ฉ๊ฐ์คํฐ๋๊ต์ก, ํ๋ฆฌ๋์
, ๋ง์ด์ค
## Base Model : [megastudyedu/ME-7B-v1.1](https://huggingface.co/megastudyedu/ME-7B-v1.1)
## Datasets Used
- jondurbin/bagel-v0.3 translated dataset (translated by megastudyedu/ME-7B-v1.0) |
msy127/ft_240201_01 | msy127 | "2024-02-09T14:56:51Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-01T12:53:08Z" | ---
license: llama2
language:
- ko
library_name: transformers
---
---
license: llama2
language:
- ko
library_name: transformers
base_model: beomi/llama-2-ko-7b
pipeline_tag: text-generation
---
# **msy127/ft_240201_01**
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| David Sohn | David Sohn |
## **Model Details**
### **Base Model**
[beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: A100 40GB 1ea
- **transformers**: v4.37
### **Instruction format**
It follows **Custom** format.
E.g.
```python
text = """\
<|user|>
๊ฑด๊ฐํ ์์ต๊ด์ ๋ง๋ค๊ธฐ ์ํด์๋ ์ด๋ป๊ฒ ํ๋๊ฒ์ด ์ข์๊น์?
<|assistant|>
"""
```
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="msy127/ft_240201_01")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("msy127/ft_240201_01")
model = AutoModelForCausalLM.from_pretrained("msy127/ft_240201_01")
``` |
ITT-AF/ITT-Yi-Ko-6B-v6.0 | ITT-AF | "2024-03-11T01:43:50Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-07T02:52:58Z" | ---
license: cc-by-nc-4.0
---
## ITT-AF/ITT-Yi-Ko-6B-v6.0
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |
Fugaku-LLM/Fugaku-LLM-13B-instruct | Fugaku-LLM | "2024-05-09T02:30:42Z" | 1,102 | 23 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"ja",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-07T08:01:22Z" | ---
extra_gated_prompt: >-
### Fugaku-LLMๅฉ็จ่ฆ็ด
ใใฎๅฉ็จ่ฆ็ด๏ผไปฅไธใๆฌ่ฆ็ดใใจใใใพใ๏ผใฏใๅฏๅฃซ้ๆ ชๅผไผ็คพใๅฝ็ซ็ ็ฉถ้็บๆณไบบ็ๅๅญฆ็ ็ฉถๆใๅฝ็ซๅคงๅญฆๆณไบบๆฑไบฌๅทฅๆฅญๅคงๅญฆใๅฝ็ซๅคงๅญฆๆณไบบๆฑๅๅคงๅญฆใๆ ชๅผไผ็คพใตใคใใผใจใผใธใงใณใใๅฝ็ซๅคงๅญฆๆณไบบๆฑๆตทๅฝ็ซๅคงๅญฆๆฉๆงใๅใณๆ ชๅผไผ็คพKotoba Technologies Japan (ไปฅไธใ้็บ่
ใใจใใใพใ)ใซใใใในใผใใผใณใณใใฅใผใฟใๅฏๅฒณใๆฟ็ญๅฏพๅฟๆ ใซใใใๅคง่ฆๆจก่จ่ชใขใใซๅๆฃไธฆๅๅญฆ็ฟๆๆณใฎ้็บใฎๆๆ็ฉใจใใฆๅ
ฌ้ใใๅคง่ฆๆจก่จ่ชใขใใซ๏ผไปฅไธใFugaku-LLMใใจใใใพใ๏ผใฎๅฉ็จใซ้ขใใๆกไปถใๅฎใใใใฎใงใใFugaku-LLMใฎๅฉ็จ่
๏ผไปฅไธใๅฉ็จ่
ใใจใใใพใ๏ผใฏใๆฌ่ฆ็ดใซๅๆใใไธใงFugaku-LLMใๅฉ็จใใใใฎใจใใพใใ
- ็ฌฌ๏ผๆก๏ผๅฉ็จ่จฑ่ซพ๏ผ
Fugaku-LLMใฎๅฉ็จ่
ใฏใๆฌ่ฆ็ดใซๅพใใFugaku-LLMใๅ็จใพใใฏ้ๅ็จ็ฎ็ใๅใใๅฉ็จใใใใจใใงใใพใใใชใใๅฉ็จใซใฏใFugaku-LLMใฎๆนๅคใ่ค่ฃฝใใใณๅ้
ๅธใชใใณใซFugaku-LLMๅใฏFugaku-LLMใๆนๅคใไฝๆใใๅคง่ฆๆจก่จ่ชใขใใซ๏ผไปฅไธใๆนๅค็ฉใใจใใ๏ผใ็จใใใตใผใในใฎๅฎๆฝใๅซใใใฎใจใใพใใใใ ใใๅฉ็จ่
ใฏใFugaku-LLMๅใฏๆนๅค็ฉใฎๅ้
ๅธๆใฎใฉใคใปใณในใพใใฏFugaku-LLMใใใใฏๆนๅค็ฉใ็จใใใตใผใในใฎๅฉ็จ่ฆ็ดใซใฏๆฌๅฉ็จ่ฆ็ดใๅซใๅฟ
่ฆใใใใพใใใพใใๅฉ็จ่
ใๆนๅค็ฉใๅ้
ๅธใใ้ใๅฉ็จ่
ใๆนๅคใใใใจใๆ่จใใๅฟ
่ฆใใใใพใใๆฌ่ฆ็ดใซ้ๅใใFugaku-LLMใฎๅฉ็จ่
ใฏใFugaku-LLMใๅฉ็จใใใใจใฏใงใใพใใใ
- ็ฌฌ๏ผๆก๏ผ่ฒฌไปป๏ผ
1. ๅฉ็จ่
ใฏใFugaku-LLMใฏ็พ็ถๆๅงฟใงๆไพใใใ้็บ่
ใฏใๆ็คบใพใใฏ้ป็คบใๅใใใFugaku-LLMใซ้ขใใใใฎๆญฃ็ขบๆงใๅฎๅ
จๆงใๆๆฐๆงใใใใณๅ่ณชใชใฉใใใใชใไฟ่จผใ่กใใใๅฉ็จ่
ใๆฌFugaku-LLMใๅฉ็จใใใใจใๅฉ็จใงใใชใใฃใใใจใซใใ็ใใไธๅใฎๆๅฎณใซใคใใฆ่ฒฌไปปใ่ฒ ใใชใใใจใใไบใๆฟ่ซพใใใใฎใจใใพใใ
2. ๅฉ็จ่
ใฏใๅฉ็จ่
ใซใใFugaku-LLMใฎๅฉ็จใซใใใใพใใฏใๅฉ็จ่
ใๆฌๅฉ็จ่ฆ็ดใซ้ๅใใใใจใซใใ้็บ่
ใๆๅฎณใ่ขซใฃใๅ ดๅใๅฝ่ฉฒๆๅฎณใ่ณ ๅใใใใฎใจใใพใใ
3. ๅฉ็จ่
ใฏใ่ชๅทฑใฎ่ฒฌไปปใจๅคๆญใซใใใฆๅฉ็จใใใใฎใจใใFugaku-LLMใฎๅฉ็จใซ้ขใใฆใ็ฌฌไธ่
ใจใฎ้ใง็ใใ็ดไบใซใคใใฆใ่ชใใฎ่ฒฌไปปใจ่ฒ ๆ
ใงๅฏพๅฟใใ้็บ่
ใซไธๅใฎ่ฟทๆใๆใใชใใใฎใจใใพใใๅฉ็จ่
ใฏFugaku-LLMใฎๅฉ็จใซใใฃใฆ็ใใๆๅฎณใซใคใใฆ่ชๅทฑใฎ่ฒฌไปปใงๅฏพๅฆใใใใฎใจใใพใใ
- ็ฌฌ๏ผๆก๏ผ็ฆๆญข่ก็บ๏ผ
ๅฉ็จ่
ใฏใFugaku-LLMใๅฉ็จใใฆไปฅไธใฎ่ก็บใ่กใใชใใใฎใจใใพใใ
1. ้็บ่
ใใใใฏ็ฌฌไธ่
ใฎ็ฅ็่ฒก็ฃๆจฉใไพตๅฎณใใ่ก็บใใพใใฏไพตๅฎณใใใใใใฎใใ่ก็บ
2. ้็บ่
ใใใใฏ็ฌฌไธ่
ใฎ่ฒก็ฃใใใฉใคใใทใผใใใใฏ่ๅๆจฉใไพตๅฎณใใ่ก็บใใพใใฏไพตๅฎณใใใใใใฎใใ่ก็บ
3. ้็บ่
ใใใใฏ็ฌฌไธ่
ใๅทฎๅฅใใใใฏ่ชน่ฌไธญๅทใปไพฎ่พฑใใไป่
ใธใฎๅทฎๅฅใๅฉ้ทใใใพใใฏๅ่ชใใใใฏไฟก็จใๆฏๆใใ่ก็บ
4. ่จฑๅฏใใใฆใใชใๆณๅพๆฅญๅใซๅพไบใใใใๆ่ณๆ ผใฎๅฐ้ๅฎถไปฅๅคใใใฎๆณๅพใขใใใคในใๆไพใใใใใ่ก็บ
5. ๆ่ณๆ ผใฎๅฐ้ๅฎถไปฅๅคใใใฎ่ฒกๅใขใใใคในใๆไพใใ่ก็บ
6. ๅฅๅบทใธใฎๅฉ่จใๆฒป็ๆนๆณใฎๆ็คบใชใฉใๅซใๅป็่ก็บ
7. ใใฎไปๆณไปคใซๅบใฅใ่จฑๅฏ็ญใๅฟ
่ฆใช่ก็บ
- ็ฌฌ๏ผๆก๏ผๅถ็ดไบ้
๏ผ
1. ๅฉ็จ่
ใฏใFugaku-LLMใ็จใใๅฆ็ใฎ็ตๆ็ฉ๏ผไปฅไธใๅฆ็็ตๆใใจใใ๏ผใซใฏใ่ๅฝใๅใใไปไบบใฎๆจฉๅฉใไพตๅฎณใใๅ
ๅฎนใใพใใฏๅฉ็จ่
ใฎๆณๅฎใใๆๅนๆงใๆ็จๆงใๆบใใใชใๅ
ๅฎนใๅซใพใใฆใใๅ ดๅใใใใใจใๆฟ่ซพใใไธๆญฃ็ขบใปไธ้ฉๅใชๅฆ็็ตๆใซใใใ่ชใๅใฏ็ฌฌไธ่
ใฎๆๅฎณใๆจฉๅฉไพตๅฎณใฎ็บ็ใๅซ็็ๆธๅฟตใ่ตทใใๅพใใจใใๅๆใซ็ซใกFugaku-LLMใๅฉ็จใใใใฎใจใใพใใๅฉ็จ่
ใฏใๅฆ็็ตๆใฎๆญฃ่ชคใ้ฉๆณๆงใๅซ็็ๅฆฅๅฝๆงใ่ชใ็ขบ่ชใฎไธใๅฉ็จใใใใฎใจใใพใใๅฉ็จ่
ใๅฆ็็ตๆใๅซใFugaku-LLMใ็จใใใใจใซใใใๅฉ็จ่
่ช่บซๅใฏ็ฌฌไธ่
ใฎๆจฉๅฉไพตๅฎณใ็บ็ใใใๅ ดๅใ้็บ่
ใฏใใฎๆๅฎณใซๅฏพใใฆไธๅใฎ่ฒฌไปปใ่ฒ ใใชใใใฎใจใใๅฉ็จ่
ใฏ้็บ่
ใซๅฏพใไธๅใฎ่ฟทๆใๆใใชใใใฎใจใใพใใ
2. ๅฉ็จ่
ใฏๅฆ็็ตๆใซใคใใฆใใใใใใฎๅฝใๅฐๅใซใใใฆๆณไปคใชใฉใฎ่ฆๅถใ้ ๅฎใใไธใงๅฉ็จใใฎใจใใพใใ
3. ๅฉ็จ่
ใฏใๅฆ็็ตๆใ็ฌฌ๏ผๆก๏ผ็ฆๆญขไบ้
๏ผใซ่จ่ผใฎ่ก็บใซๅฉ็จใใชใใใฎใจใใพใใ
- ็ฌฌ๏ผๆก๏ผๆจฉๅฉๅธฐๅฑ็ญ๏ผ
1. ๅฉ็จ่
ใฏใๆฌๅฉ็จ่ฆ็ดใงๆ็คบใงๅฎใใใใฎใ้คใFugaku-LLMใซ้ขใใไธๅใฎๆจฉๅฉใๅๅพใใใใจใฏใใใพใใใ
2. ๅฉ็จ่
ใฏใFugaku-LLMๆนๅค็ฉใฎไฝๆใซใใฃใฆๆฐใใซ็บ็ใใๆจฉๅฉใๅๅพใใพใใใๆนๅค็ฉใฎๅฉ็จใซๅฝใใฃใฆใฏๆฌๅฉ็จ่ฆ็ดใซๅพใฃใฆๅฉ็จใใใใฎใจใใพใใ
3. ้็บ่
ใฏๅฆ็็ตๆใซใคใใฆใๆจฉๅฉไธปๅผตใ่กใใชใใใฎใจใใพใใ
- ็ฌฌ๏ผๆก๏ผ่ผธๅบๅๅผ๏ผ
ๅฉ็จ่
ใฏใFugaku-LLMใใใณๅฆ็็ตๆใฎๅฉ็จใซ้ข้ฃใใฆๅคๅฝ็บๆฟๅใณๅคๅฝ่ฒฟๆๆณ๏ผใใใซ้ข้ฃใใๆฟ็ไปคใๅซใ๏ผใพใใฏ็ฑณๅฝ่ผธๅบ็ฎก็ๆณไปคใง่ฆๅฎใใ่จฑๅฏใๅฟ
่ฆใช่ผธๅบใ่กใใจใใฏใๅฉ็จ่
่ชใใๆๅฎใฎ่จฑๅฏใๅๅพใใใใฎใจใใพใใ
- ็ฌฌ๏ผๆก๏ผ็ฎก่ฝ่ฃๅคๆ๏ผ
ๆฌๅฉ็จ่ฆ็ดใซ้ขใ็ใใ็ดไบใซใคใใฆใฏใๆฑไบฌๅฐๆน่ฃๅคๆใใใฃใฆ็ฌฌไธๅฏฉใฎๅฐๅฑ็ๅๆ็ฎก่ฝ่ฃๅคๆใจใใพใใ
- ็ฌฌ๏ผๆก๏ผๆบๆ ๆณ๏ผ
ๆฌๅฉ็จ่ฆ็ดใฏๆฅๆฌๆณใซๆบๆ ใใพใใ
- ็ฌฌ๏ผๆก๏ผใใฎไปใฎ่ฆๅฎ๏ผ
ๆฌ่ฆ็ดใฏใFugaku-LLMใฎๅฉ็จ่
ใจ้็บ่
ใจใฎ้ใฎๅฉ็จใซ้ขใใๅ
จใฆใฎไบ้
ใๅฎใใใใฎใงใใใๆฌ่ฆ็ดใซๅฎใใฎใชใไบ้
ใซใคใใฆใฏใ้ขไฟๆณไปคใซๅพใใใฎใจใใพใใ
- ็ฌฌ๏ผ๏ผๆก๏ผ่จ่ช๏ผ
ๆฌ่ฆ็ดใฏๆฅๆฌ่ชใๆญฃๆฌใจใใพใใๆฌ่ฆ็ดใฎ่ฑ่จณ็ใฏใๅ่ใฎใใใซไฝๆใใใใใฎใงใใใไฝใใฎๆณ็ๆๆๅใใชใใใฎใจใใพใใ
ไปฅไธ
### Fugaku-LLM Terms of Use
This Terms of Use (hereinafter referred to as "TOU") sets forth the conditions for the use of the large-scale language model (hereinafter referred to as "Fugaku-LLM") that is made public as a result of the development of a distributed parallel learning method for large-scale language models within the scope of the initiatives for uses of the supercomputer "Fugaku" defined by Japanese policy by Fujitsu Limited, RIKEN, Tokyo Institute of Technology, Tohoku University, CyberAgent, Inc., Tokai National Higher Education and Research System, and Kotoba Technologies Japan Co., Ltd. (hereinafter referred to as "Developers"). Users of Fugaku-LLM (hereinafter referred to as "Users") shall use Fugaku-LLM upon agreeing to the TOU.
- Article 1 (License to Use)
Users of Fugaku-LLM may use Fugaku-LLM for commercial or non-commercial purposes in accordance with the TOU. The word โuseโ includes, but not limited to, the modification, duplication, and redistribution of Fugaku-LLM, as well as the implementation of services using Fugaku-LLM and/or the large-scale language model created by modifying Fugaku-LLM (hereinafter referred to as "Modified Works"); provided that Users must incorporate the TOUC into the license terms for redistribution of Fugaku-LLM or Modified Works, or into the terms of use for services using Fugaku-LLM or Modified Works. In addition, when redistributing Modified Works, Users must clearly state that they have made the modifications. Users who violate the TOU are not allowed to use Fugaku-LLM.
- Article 2 (Responsibility)
1. Users agree in advance that Fugaku-LLM is provided โAS ISโ, and the Developers make no warranties, express or implied, regarding Fugaku-LLM, including, but not limited to, its accuracy, completeness, up-to-dateness, and quality, and that Developers shall not be liable for any damages arising from the use or inability to use Fugaku-LLM.
2. Users shall compensate for any and all damages suffered by the Developers as a result of the use of Fugaku-LLM and/or the Users' violation of the TOU.
3. Users shall use Fugaku-LLM at their own responsibility and discretion, and shall handle any disputes arising with third parties in relation to the use of Fugaku-LLM at their own responsibility and expense, and shall indemnify, defend and hold harmless the Developers against all damages and losses without causing any inconvenience to the Developers. Users shall deal with any damages caused by the use of Fugaku-LLM at their own responsibility.
- Article 3 (Prohibited Actions)
Users shall not engage in the following actions when using Fugaku-LLM.
1. Actions that will or may infringe on the intellectual property rights of the Developers or third parties;
2. Actions that will or may infringe on the property, privacy, or portrait rights of the Developers or third parties;
3. Actions that discriminate against, defame, insult, or slander the Developers or third parties, promote discrimination against others, or damage the reputation or credibility of others;
4. Actions that engage in unauthorized legal services and/or provide legal advice from anyone other than a qualified professional;
5. Actions that provide financial advice from anyone other than a qualified professional;
6. Medical actions, including providing health advice or suggesting treatment methods; and
7. Other actions that require permissions or other forms of authorization under laws and regulations.
- Article 4 (Restrictions)
1. Users acknowledge that the results of processing using Fugaku-LLM (hereinafter referred to as "Processing Results") may contain falsehoods, biases, content that infringes on the rights of others, or content that does not meet the effectiveness or usefulness expected by Users, and agree to use Fugaku-LLM on the premise that inaccurate or inappropriate Processing Results may cause damage or infringement of rights to Users or third parties and/or ethical concerns. Users shall use the Processing Results after confirming their accuracy, legality, and ethical validity themselves. If the use of Fugaku-LLM, including the Processing Results, by Users cause infringement of the rights of the Users themselves or third parties, the Developers shall not be responsible for any damages, and the Users shall indemnify, defend and hold harmless the Developers against all damages and losses without causing any inconvenience to the Developers.
2. Users shall use the Processing Results in compliance with the regulations such as laws and regulations in each country and region.
3. Users shall not use the Processing Results for the actions listed in Article 3 (Prohibited Actions).
- Article 5 (Ownership of Rights)
1. Except as expressly provided in the TOU, Users shall not acquire any rights in relation to Fugaku-LLM.
2. Users will acquire rights newly arising from the creation of Modified Works of Fugaku-LLM, but Users shall use Modified Works in accordance with the TOU.
3. The Developers shall not assert any rights to the Processing Results.
- Article 6 (Export Transaction)
Users shall obtain the necessary permissions themselves when exporting Fugaku-LLM and the Processing Results in relation to their use, where such export requires permissions under the Foreign Exchange and Foreign Trade Act (including related cabinet order and ministerial order) or U.S. export control laws and regulations.
- Article 7 (Jurisdiction)
The Tokyo District Court shall have exclusive jurisdiction in the court of the first instance over any disputes arising out of or in connection with the TOU.
- Article 8 (Governing Law)
The TOU is governed by and construed in accordance with the laws of Japan.
- Article 9 (Other Provisions)
The TOU sets forth the entire agreement as to all matters concerning the use of Fugaku-LLM between the Users and the Developers, and matters not provided for in the TOU shall be governed by the relevant laws and regulations.
- Article 10 (Governing Language)
The governing language of the TOU shall be Japanese. The English translation hereof is made for reference purpose only and shall have no effect.
license: other
license_name: fugaku-llm-tou
license_link: LICENSE
language:
- ja
- en
---
# Fugaku-LLM
Our Fugaku LLM model is a domestic model pre-trained from scratch using supercomputer Fugaku.
This model is highly transparent and safe because it is trained from scratch with our own data.
The training data consists mainly of Japanese data, and the model has excellent Japanese performance.
These models are developed by [Fugaku-LLM](https://huggingface.co/Fugaku-LLM).
Links to other models can be found in the index.
## Fugaku-LLM Model Index
|Model|Fugaku-LLM|Fugaku-LLM-instruct|
|---|---|---|
|13B| [Link](https://huggingface.co/Fugaku-LLM/Fugaku-LLM-13B) | [Link](https://huggingface.co/Fugaku-LLM/Fugaku-LLM-13B-instruct)|
## Model Details
* **Developed by**: Fugaku-LLM
* **Model type**: GPT-2
* **Language(s)**: Japanese, English
* **Library**: [DeepSpeedFugaku](https://github.com/Fugaku-LLM/DeepSpeedFugaku)
* **Tokenizer**: [llm-jp-tokenizer](https://github.com/llm-jp/llm-jp-tokenizer), code10k_en20k_ja30k of v2.2
* **License**: Fugaku-LLM Terms of Use
## Model Performance
### Instruction-tuned model
We evaluated our model on the Japanese MT benchmark in the same way as [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Neo--Vmlldzo2MTkyMTU0).
We only modified the following parts of the FastChat code:
- Added "add_special_tokens=False" when calling the tokenizer for the input prompt
- Limited the number of generated tokens to less than 2048
|Model Name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
| -- |-- | -- | -- | -- | -- | -- | -- | -- | -- |
| Fugaku-LLM-13B-instruct | 5.47 | 2.10 | 4.10 | 9.18 | 2.30 | 3.40 | 8.20 | 7.25 | 7.25 |
## How to use
### Use the instruction-tuned model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Fugaku-LLM/Fugaku-LLM-13B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()
system_example = "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"
instruction_example = "スーパーコンピュータ「富岳」の名前の由来を教えてください。"
prompt = f"{system_example}\n\n### 指示:\n{instruction_example}\n\n### 応答:\n"
input_ids = tokenizer.encode(prompt,
add_special_tokens=False,
return_tensors="pt")
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
do_sample=True,
temperature=0.1,
top_p=1.0,
repetition_penalty=1.0,
top_k=0
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Fugaku-LLM/Fugaku-LLM-13B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()
prompt = "スーパーコンピュータ「富岳」という名称は"
input_ids = tokenizer.encode(prompt,
add_special_tokens=False,
return_tensors="pt")
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
do_sample=True,
temperature=0.1,
top_p=1.0,
repetition_penalty=1.0,
top_k=0
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Instruction Tuning
- [oasst1](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)
- [databricks-dolly-15k](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)
- [gsm8k](https://huggingface.co/datasets/gsm8k)
## License
Fugaku-LLM Terms of Use is available at [LICENSE](./LICENSE) and [LICENSE_ja](./LICENSE_ja) files.
## Risks and Limitations
The results of processing using Fugaku-LLM may contain falsehoods, biases, content that infringes on the rights of others, or content that does not meet the effectiveness or usefulness expected by Users.
## Acknowledgements
This achievement is based on the Government-Initiated Projects of Supercomputer Fugaku "Development of Distributed Training Method for Large Language Models on Fugaku."
## Authors
- Tokyo Institute of Technology
- Tohoku University
- Fujitsu Limited
- RIKEN
- Nagoya University
- CyberAgent, Inc.
- Kotoba Technologies, Inc.
|
jcwee0873/llama3-8b-cv-swap-v0.1 | jcwee0873 | "2024-05-17T00:05:56Z" | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"arxiv:2310.04799",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-16T08:29:06Z" | ---
language:
- ko
- en
license: llama3
---
# Model Card for llama3-8b-cv-swap
## Method
- Weight Swap with ChatVector (with meta-llama/Meta-Llama-3-8B-Instruct)
- [Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages](https://arxiv.org/abs/2310.04799)
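The chat-vector arithmetic referenced above can be sketched as follows. This is only an illustration of the idea from the paper, not the exact recipe used for this checkpoint; which tensors are swapped or excluded (for example, embeddings or norms) is not stated in this card.

```python
# Rough sketch of chat-vector weight arithmetic (illustrative only; the exact
# tensors swapped or skipped for this checkpoint are not specified in the card).
# Needs enough CPU RAM to hold three 8B models in bf16.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B", torch_dtype=torch.bfloat16)

base_sd, inst_sd = base.state_dict(), inst.state_dict()
with torch.no_grad():
    for name, param in target.state_dict().items():
        # chat vector = instruct weights - pretrained weights, added onto the Korean model
        if name in base_sd and base_sd[name].shape == param.shape:
            param.add_(inst_sd[name] - base_sd[name])

target.save_pretrained("llama3-8b-cv-swap-sketch")
```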
## Base Model
- [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) |
duyntnet/aya-23-8B-imatrix-GGUF | duyntnet | "2024-05-24T22:39:23Z" | 1,102 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"aya-23-8B",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-24T20:33:46Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- aya-23-8B
---
Quantizations of https://huggingface.co/CohereForAI/aya-23-8B
# From original readme
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
``` |
jeiku/T-850-8B-Q3_K_S-GGUF | jeiku | "2024-06-18T02:38:06Z" | 1,102 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:jeiku/T-850-8B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T02:37:20Z" | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: jeiku/T-850-8B
---
# jeiku/T-850-8B-Q3_K_S-GGUF
This model was converted to GGUF format from [`jeiku/T-850-8B`](https://huggingface.co/jeiku/T-850-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jeiku/T-850-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo jeiku/T-850-8B-Q3_K_S-GGUF --hf-file t-850-8b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jeiku/T-850-8B-Q3_K_S-GGUF --hf-file t-850-8b-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo jeiku/T-850-8B-Q3_K_S-GGUF --hf-file t-850-8b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo jeiku/T-850-8B-Q3_K_S-GGUF --hf-file t-850-8b-q3_k_s.gguf -c 2048
```
|
timm/poolformer_s12.sail_in1k | timm | "2023-05-05T06:16:51Z" | 1,101 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T06:16:43Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for poolformer_s12.sail_in1k
A PoolFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.9
- GMACs: 1.8
- Activations (M): 5.5
- Image size: 224 x 224
- **Papers:**
- MetaFormer Is Actually What You Need for Vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/poolformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('poolformer_s12.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformer_s12.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformer_s12.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2022metaformer,
title={Metaformer is actually what you need for vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10819--10829},
year={2022}
}
```
|
yuhuili/EAGLE-Vicuna-7B-v1.3 | yuhuili | "2023-12-11T17:41:56Z" | 1,101 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-08T10:58:40Z" | ---
license: apache-2.0
---
|
digiplay/DucHaiten-Real3D-NSFW-V1 | digiplay | "2023-12-19T15:55:17Z" | 1,101 | 9 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-14T17:05:53Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/232442/duchaiten-real3d-nsfw
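The original card does not include a usage snippet; a minimal diffusers call along the following lines should work (the step count, guidance scale, and other settings here are assumptions rather than the author's recommendations). The prompt reuses the sample prompt shown further below.

```python
# Minimal text-to-image sketch with diffusers; settings are assumptions,
# not recommendations from the model author.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/DucHaiten-Real3D-NSFW-V1",
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, "
          "4k art on canvas by kyoani and ROSSDRAWS, princess")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sample.png")
```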
Original Author's DEMO images :


Sample prompt and images (generated by Hugging Face's API)
Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, 4k art on canvas by kyoani and ROSSDRAWS, princess



|
yanolja/Bookworm-10.7B-v0.4-DPO | yanolja | "2024-01-19T08:47:21Z" | 1,101 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2310.01377",
"arxiv:2306.02707",
"base_model:yanolja/KoSOLAR-10.7B-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T09:16:56Z" | ---
license: apache-2.0
base_model: yanolja/KoSOLAR-10.7B-v0.2
tags:
- generated_from_trainer
model-index:
- name: yanolja/Bookworm-10.7B-v0.4-DPO
results: []
---
# Bookworm-10.7B-v0.4-DPO
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------|
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a fine-tuned version of [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2), which is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). Specifically, we employed Direct Preference Optimization (DPO) based on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
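For reference, the objective optimized during the DPO stage can be sketched as below. This is a generic illustration of the DPO loss, not the actual LLaMA-Factory code or configuration used for this model.

```python
# Generic sketch of the DPO objective (not the LLaMA-Factory implementation used here).
# Inputs are summed log-probs of the chosen/rejected responses under the policy being
# trained and under the frozen reference model.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # maximize the margin between preferred and dispreferred responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```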
### Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other dataset was used
## Citation
```
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{SlimOrcaDedup,
title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/}
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
heavytail/kullm-mistral-S | heavytail | "2024-01-28T12:09:42Z" | 1,101 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-28T09:21:19Z" | ---
license: apache-2.0
language:
- ko
---
# KULLM project
- base model: mistralai/Mistral-7B-Instruct-v0.2
## datasets
- KULLM dataset
- hand-crafted instruction data
## Implementation Code
```python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer
)
import torch
repo = "heavytail/kullm-mistral-S"
model = AutoModelForCausalLM.from_pretrained(
repo,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
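A short generation example continuing from the snippet above; the chat-style formatting relies on the tokenizer's built-in chat template (inherited from the Mistral-7B-Instruct-v0.2 base listed in this card) and is an assumption, since the card does not state a prompt format.

```python
# Example generation (prompt format is an assumption; the card does not specify one).
messages = [{"role": "user", "content": "대한민국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```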
Initial upload: 2024/01/28 21:00 |
megastudyedu/ME-7B-v1.0 | megastudyedu | "2024-01-29T03:18:06Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T00:54:20Z" | ---
license: cc-by-nc-nd-4.0
language:
- ko
---
# Model Card for ME-7B-v1.0
## Developed by : ๋ฉ๊ฐ์คํฐ๋๊ต์ก, ํ๋ฆฌ๋์
, ๋ง์ด์ค
## Base Model : [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1)
## Datasets Used
- Korean translation of the jondurbin/bagel-v0.3 SFT dataset (translated by megastudyedu/ME-7B-v1.0)
- [megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3) data
|
giprime/OOM-13B_01 | giprime | "2024-02-13T02:00:04Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-05T01:55:21Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
- ko
library_name: transformers
---
## Model Architecture
OOM-13B_01 is a language model that uses an optimized transformer architecture based on Llama-2.
## Model description
Based on "beomi/llama-2-koen-13b"
## Intended uses & limitations
T.B.D.
## Training and evaluation data
T.B.D.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-04
- train_batch_size: 2
- eval_batch_size: 8
- seed: 24
- gradient_accumulation_steps: 1
- total_train_batch_size:
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1 |
Cyanex/D.r.e.a.m_Mega | Cyanex | "2024-06-21T13:27:24Z" | 1,101 | 3 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"art",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-11T22:14:05Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- art
language:
- en
---
# D.r.e.a.m (Digital Rendering Engine for Artistic Melodies)
## Welcome to D.r.e.a.m (Digital Rendering Engine for Artistic Melodies).
The model is currently in its training phase. This is not the final version and may contain artifacts, potentially performing poorly in some cases. The goal of this model is to create images similar to those produced by Midjourney. It is being trained using the Midjourney Normalized Dataset available on Kaggle.
## Model Details
- **Developed by:** Cyanex1702
- **Model Type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **Dataset:** [DreamScape](https://www.kaggle.com/datasets/cyanex1702/midjouney-normalized-dataset "DreamScape")
- **Training Status:** In Progress
## Model Description
D.r.e.a.m is a model designed to generate and modify images based on text prompts. The model leverages advanced diffusion techniques to create high-quality, artistic renderings from textual descriptions, aiming to emulate the style and creativity of Midjourney.
## Samples










## Features
- **Text-to-Image Generation:** Generate images from descriptive text prompts.
- **Image Modification:** Modify existing images based on new text inputs.
- **Creative Rendering:** Produce artistic and imaginative images.
## Usage
To use the model, you can input text prompts in English. The model will process these prompts and generate corresponding images. Note that due to the model's current training phase, the results may vary and contain imperfections.
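As a concrete starting point, a standard diffusers text-to-image call such as the following should work; the prompt, step count, and guidance scale are placeholders rather than tuned recommendations.

```python
# Illustrative usage with diffusers; prompt and settings are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Cyanex/D.r.e.a.m_Mega", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a dreamlike floating city above the clouds, golden hour, ultra detailed",
    negative_prompt="blurry, low quality",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("dream.png")
```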
## Contributing
We welcome contributions from the community! If you'd like to contribute, feel free to reach out. |
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.91 | mu0gum | "2024-02-14T06:28:26Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-12T00:36:29Z" | ---
license: cc-by-nc-4.0
---
# AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.91
Base model : 42dot/42dot_LLM-PLM-1.3B
Training data : a self-built Open Orca-style dataset of about 48,000 examples (deduplicated, with the data distribution adjusted)
Training method : Full finetuning
epoch : 3
## ko-lm-evaluation-harness(5-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko|
|--|--|--|--|
|0.5220797720797721|0.72|0.458|0.563|
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
Kukedlc/Triunvirato-7b | Kukedlc | "2024-02-16T21:30:39Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-v0.1",
"Kukedlc/neuronal-7b-Mlab",
"mlabonne/Monarch-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:Kukedlc/neuronal-7b-Mlab",
"base_model:mlabonne/Monarch-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-14T03:56:46Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- Kukedlc/neuronal-7b-Mlab
- mlabonne/Monarch-7B
base_model:
- mistralai/Mistral-7B-v0.1
- Kukedlc/neuronal-7b-Mlab
- mlabonne/Monarch-7B
license: apache-2.0
---
# Triunvirato-7b
Triunvirato-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [Kukedlc/neuronal-7b-Mlab](https://huggingface.co/Kukedlc/neuronal-7b-Mlab)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
## ๐งฉ Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: Kukedlc/neuronal-7b-Mlab
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: mlabonne/Monarch-7B
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Triunvirato-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.4 | jungyuko | "2024-03-05T01:35:05Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-05T01:18:08Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.4
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 3.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
ENERGY-DRINK-LOVE/komt_DPOv3 | ENERGY-DRINK-LOVE | "2024-03-16T08:23:05Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"arxiv:2305.18290",
"base_model:davidkim205/komt-solar-10.7b-sft-v5",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-11T02:42:10Z" | ---
license: cc-by-4.0
base_model: davidkim205/komt-solar-10.7b-sft-v5
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: nhn_dpo_v3_komt-solar-10.7b-sft-v5_DPO
results: []
---
# ENERGY-DRINK-LOVE/komt_DPOv3
### Our Team
* Youjin Chung
* Jingyeom Kim
## Model
### Base Model
* [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)
### Hardware and Software
* Hardware: A100 * 8 for training our model
* Deepspeed library & Huggingface TRL Trainer
### Dataset
* DPO_dataset
  * Self-built DPO dataset (built from AI-Hub datasets)
  * English DPO datasets such as OpenOrca DPO, translated into Korean (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)
### Training Method
* [DPO](https://arxiv.org/abs/2305.18290)
## Benchmark
**[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
**[Ko-LLM-Leaderboard](https://www.aihub.or.kr/leaderboard/view.do?currMenu=500&topMenu=102)**
* (4th place as of 2024-03-16)
* 
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 61.20 | 57.51 | 70.33 | 53.34 | 68.49 | 56.32 |
|
freewheelin/free-solar-instrunction-v0.1 | freewheelin | "2024-03-14T08:57:57Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"en",
"arxiv:2312.15166",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-13T07:02:53Z" | ---
language:
- ko
- en
license: mit
---
# Model Card for free-solar-instruction-v0.1
## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team
## Hardware and Software
* **Training Factors**: We fine-tuned this model using the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer)
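As a rough orientation, a TRL-based supervised fine-tuning run has approximately the shape below; the dataset, prompt format, and hyperparameters are placeholders, since none of them are disclosed here, and the keyword arguments follow the trl 0.7.x API.

```python
# Minimal outline of TRL-based supervised fine-tuning (kwargs follow the trl 0.7.x API);
# the real dataset, chat format, and hyperparameters for this model are not disclosed.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# toy stand-in data; the actual instruction data is not public
toy_data = Dataset.from_dict(
    {"text": ["Question: What is Seoul?\nAnswer: The capital of South Korea."] * 8}
)

trainer = SFTTrainer(
    model="davidkim205/komt-solar-10.7b-sft-v5",  # base model named in this card
    train_dataset=toy_data,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="free-solar-sft-sketch",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```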
## Method
- This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf).
## Base Model
- [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)
|
WillHeld/via-llama | WillHeld | "2024-04-20T20:51:25Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-20T16:51:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YanweiLi/MGM-7B | YanweiLi | "2024-04-21T14:14:50Z" | 1,101 | 6 | transformers | [
"transformers",
"safetensors",
"text-generation",
"vision-language model",
"llama",
"generation",
"dataset:YanweiLi/MGM-Instruction",
"arxiv:2403.18814",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-26T03:05:46Z" | ---
tags:
- vision-language model
- llama
- generation
datasets:
- YanweiLi/MGM-Instruction
---
# MGM-7B Model Card
<a href='https://github.com/dvlab-research/MGM'><img src='https://img.shields.io/badge/Project-Code-violet'></a>
<a href='https://mini-gemini.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/pdf/2403.18814.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
## Model details
The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously.
Normal resolution setting: [MGM-2B](https://huggingface.co/YanweiLi/MGM-2B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B), [MGM-8x7B](https://huggingface.co/YanweiLi/MGM-8x7B), [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B)
High resolution setting: [MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD), [MGM-34B-HD](https://huggingface.co/YanweiLi/MGM-34B-HD)
**Model type:**
MGM is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.
**Model version:**
MGM with LLM Vicuna-7B-v1.5
**Model date:**
MGM-7B was trained on 03/2024.
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/dvlab-research/MGM/issues
## Intended use
**Primary intended uses:**
The primary use is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training data
This model is trained based on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [Github](https://github.com/dvlab-research/MGM) repository for more detail.
## Acknowledgement
This project is not affiliated with Google LLC. |
lcw99/google-gemma-14B-ko-chang | lcw99 | "2024-04-19T21:43:57Z" | 1,101 | 4 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-13T10:57:35Z" | ---
library_name: transformers
language:
- ko
license: apache-2.0
---
# Model Card for Model ID
This is a version of the Google Gemma-7B model with the parameter count increased to 14B. The number of attention heads has been doubled, and the number of hidden layers has been increased to 42.
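The changes described above can be illustrated with a small config sketch. This only mirrors the stated modifications (doubled attention heads, 42 hidden layers) on top of the public Gemma-7B config; it is not the author's actual upscaling script, and the real checkpoint may differ in other dimensions.

```python
# Illustration of the stated changes only (not the author's actual upscaling code):
# double the attention heads and raise the hidden-layer count to 42.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("google/gemma-7b")
cfg.num_attention_heads *= 2   # 16 -> 32, reflecting "attention head has been doubled"
cfg.num_key_value_heads *= 2   # keep the same GQA ratio (assumption)
cfg.num_hidden_layers = 42     # 28 -> 42 as described in the card
print(cfg)
```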
# Chat template
**system:** system message...
**B:** user message...
**A:** assistant message... |
egafni/nano-llama | egafni | "2024-05-10T23:29:47Z" | 1,101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | "2024-05-10T23:17:53Z" | ---
license: mit
---
|
timm/resnet26d.bt_in1k | timm | "2024-02-10T23:38:47Z" | 1,100 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-05T18:04:52Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet26d.bt_in1k
A ResNet-D image classification model.
This model features:
* ReLU activations
* 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Bag-of-Tricks recipe.
* SGD (w/ Nesterov) optimizer
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 16.0
- GMACs: 2.6
- Activations (M): 8.2
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet26d.bt_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet26d.bt_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet26d.bt_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
|
timm/mobilevit_xs.cvnets_in1k | timm | "2023-04-24T22:23:24Z" | 1,100 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"license:other",
"region:us"
] | image-classification | "2023-04-24T22:23:14Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevit_xs.cvnets_in1k
A MobileViT image classification model. Trained on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 2.3
- GMACs: 1.1
- Activations (M): 16.3
- Image size: 256 x 256
- **Papers:**
- MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer: https://arxiv.org/abs/2110.02178
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevit_xs.cvnets_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevit_xs.cvnets_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 128, 128])
# torch.Size([1, 48, 64, 64])
# torch.Size([1, 64, 32, 32])
# torch.Size([1, 80, 16, 16])
# torch.Size([1, 384, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevit_xs.cvnets_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{mehta2022mobilevit,
title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author={Sachin Mehta and Mohammad Rastegari},
booktitle={International Conference on Learning Representations},
year={2022}
}
```
|
LI-ST/Mistral-7B-ko-v0.005 | LI-ST | "2024-01-09T07:36:16Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T10:22:04Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko</h1></p>
basemodel: Open-Orca/Mistral-7B-OpenOrca
=================================================
<BR>
This model is a temporary model for testing.
<BR>
We will be deleting it soon.
<BR>
================================================= |
mu0gum/polyglot-ko-1.3b-slim_orca_10000-epoch2 | mu0gum | "2024-01-15T12:01:34Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-10T11:24:15Z" | ---
license: cc-by-nc-4.0
---
Base model: EleutherAI/polyglot-ko-1.3b
Training data: nlp-with-deeplearning/Ko.SlimOrca
10,000 examples were sampled from the training data for training (system prompt not included)
- Training data example
- "prompt": "์ด ๊ธ์ ํ ๋๋ก โํ ๋จ์ฑ์ด ํธ์ฃผ ์๋๋์ ํ ์๋น์์ ํ ์ฌ์ฑ๊ณผ ๋ฐ์ดํธ๋ฅผ ํ๋ ๋จ์ฑ์ด ์นผ์ ์ฐ๋ฆฐ ๋ค ์ด์ธยท์ด์ธ๋ฏธ์ ํ์๋ก ๊ธฐ์๋๋คโ๋ ๊ธ์ ์ด๋ค.", "completion": "์ ๋ชฉ: ์๋๋์ ๋น๊ทน์ ๊ณต๊ฒฉ: ์ ๋
์์ฌ ์ค ์นผ์ ์ฐ๋ฆฐ ํฌ์์, ์ด์ธ ํ์ ๋ฐ ์ด์ธ ๋ฏธ์ ํ์\n\n์๋ก \n\nํ ๋จ์ฑ์ด ํ ๋ถ๋ถ๋ฅผ ๊ณต๊ฒฉํด ๋ ํฌ์์๋ฅผ ๋ชจ๋ ์ฐ๋ฅธ ๊ฒ์ผ๋ก ์๋ ค์ง ๊ฐ์ด๋ฐ ํธ์ฃผ ์๋๋์์ ํ์ง ์๋น์์ ๋ก๋งจํฑํ ์ ๋
๋ฐ์ดํธ๊ฐ ์ ํ์ฌํ๋ก ๋๋์ ํ๋ฒํด ๋ณด์ด๋ ์ ๋
์ด ๋น๊ทน์ ์ผ๋ก ๋ณํ๋ค. ์ง๋ฐฉ ๋น๊ตญ์ ๊ทธ ๋์ฐํ ๋ฒ์ฃ์ ๊ด๋ จํ์ฌ ์ฉ์์๋ฅผ ์ด์ธ ๋ฐ ์ด์ธ ๋ฏธ์๋ก ๊ธฐ์ํ๋ค. ์ด ๊ธฐ์ฌ์์ ์ฐ๋ฆฌ๋ ์๋ฆ๋ผ์น๋ ์ฌ๊ฑด, ์ฉ์์, ๊ทธ๋ฆฌ๊ณ ๊ทธ๊ฒ์ด ์ง์ญ์ฌํ์ ๋ผ์น ์ํฅ์ ์กฐ์ฌํ๋ค.\n\n์ฌ๊ฑด\n\n๋๋ผ์ด ํ์ฌ๋ ํธ์ฃผ ์๋๋์ ํ ์ธ๊ธฐ ๋ ์คํ ๋์์ ์ด๋ ธ์ต๋๋ค. ํ ์ฌ์ฑ๊ณผ ๊ทธ๋
์ ๋ฐ์ดํธ ์๋๋ ๊ณง ํผ์ณ์ง ๋น๊ทน์ ์ธ ์ํฉ์ ์ ๋ชจ๋ฅด๊ณ ์ฆ๊ฒ๊ฒ ์์ฌ๋ฅผ ์ฆ๊ธฐ๊ณ ์์๋ค. ๋ํ๋ฅผ ๋๋๊ณ ์ ๋
์์ฌ์ ๋น ์ ธ๋ค์ ํ ๋จ์ฑ์ด ๋ถ๋ถ์๊ฒ ๋ค๊ฐ๊ฐ ๊ฒฝ๊ณ ๋ ์์ด ๋ ์นด๋ก์ด ๋ฌผ์ฒด๋ก ์ฐ๋ฅด๊ธฐ ์์ํ๋ค.\n\n๊ฐํด์๊ฐ ๋๋์ ๋ถ๋ฆฌ๋ฉด์ ์๋น์ ์๋๋ค์ ์์ ์ ์ํด ์๋คํ ๋์ฐ๋ค. ๋ฐฉ์กฐ๋ฒ๋ค์ ์ ์ํ ์๊ธ๊ตฌ์กฐ๋์ ์ ํ๋ฅผ ํ๊ณ , ๊ทธ๋ค์ ์ฆ์ ํ์ฅ์ ๋์ฐฉํ๋ค. ๋ ํฌ์์ ๋ชจ๋ ์ธ๊ทผ ๋ณ์์ผ๋ก ๊ธํ ์ด์ก๋์ง๋ง ์ด ์ฌ์ฑ์ ๋ถ์์ผ๋ก ์ฌ๋งํ๋ค. ๊ทธ๋
๊ฐ ๋ฐ์ดํธํ๊ณ ์๋ ๋จ์๋ ๋ชฉ์จ์ ๊ฑธ๊ณ ์ธ์ฐ๋ฉด์ ์๋
ํ ์ํ๋ก ๋จ์ ์์๋ค.\n\n์ฉ์์\n\n์ด๊ธฐ ์กฐ์ฌ์ ๋ฐ๋ฅด๋ฉด ๊ณต๊ฒฉ์๋ก ์ถ์ ๋๋ ์ฌ๋์ ํผํด์์ ์๋ ค์ง ์ฐ๊ด์ฑ์ด ์์์ผ๋ฉฐ, ์ด๋ ๊ฐ์์ค๋ฝ๊ณ ์์ธํ ํญํ์ ์ ๋น๋ก์ด ์ฑ๊ฒฉ์ ๋ํ๋ค. ๋ชฉ๊ฒฉ์๋ค์ ๊ฐํด์๊ฐ ๊ตฌ์ฒด์ ์ธ ๋๊ธฐ๋ ์๋ ๊ฒ์ผ๋ก ๋ณด์ด๋ฉฐ ๋ฌด์์์ ์ผ๋ก ํ์๋ฅผ ์ํํ๋ ๊ฒ์ผ๋ก ๋ณด์ธ๋ค๊ณ ์ ํ๋ค.\n\n์ฒ ์ ํ ์์ฌ์ ์ด์ด ์ง์ญ ๊ฒฝ์ฐฐ์ด ์ฉ์์๋ฅผ ํ์
ํด ๊ฒ๊ฑฐํ๋ค. ์ด ๊ณผ์ ์์ ๊ฐํด์๋ก ์ถ์ ๋๋ ์ฌ๋์ ๋ฒ์ฃ ํ์ ์ด๋ ฅ๊ณผ ๋ฌธ์ ๊ฐ ๋ ๊ณผ๊ฑฐ๊ฐ ์๋ ๊ฒ์ผ๋ก ๋ฐํ์ก์ง๋ง, ์ด๊ฒ์ด ๋น๊ทน์ ์ธ ์ฌ๊ฑด์ ์ํฅ์ ๋ฏธ์ณค๋์ง๋ ๋ถ๋ถ๋ช
ํ๋ค.\n\n๋น๊ตญ์ ๊ทธ ๋จ์๋ฅผ ๊ทน์
๋ฌด๋ํ ๋ฒ์ฃ์ ๊ด๋ จํ์ฌ ๊ณต์์ ์ผ๋ก ์ด์ธ ๋ฐ ์ด์ธ ๋ฏธ์ ํ์๋ก ๊ธฐ์ํ๋ค. ๊ทธ๋ ๊ณตํ๊ธฐ์ผ์ ๊ฒฐ์ ํ๊ธฐ ์ํ ์ฌ๋ฆฌ๋ฅผ ๊ธฐ๋ค๋ฆฌ๊ณ ์์ผ๋ฉฐ, ์ ์ฃ๊ฐ ํ์ ๋๋ฉด ์ข
์ ํ์ ์ฒํด์ง ์ ์๋ค.\n\n์ง์ญ์ฌํ์ ๋์\n\n๊ทธ ๋ฒ์ฃ์ ์ถฉ๊ฒฉ์ ์ธ ์ฑ๊ฒฉ์ ์๋๋ ์ฃผ๋ฏผ๋ค์ด ๋์ฐํ ์ฌ๊ฑด์ ๋ฐ์๋ค์ด๊ธฐ ์ํด ๊ณ ๊ตฐ๋ถํฌํ๋ฉด์ ํ๋ค๋ฆฌ๊ฒ ํ๋ค ํ
๋ฌ๊ฐ ๋ฐ์ํ ์๋น์ ์ดํ ํ์๋์ผ๋ฉฐ, ํฌ์์๋ค์ ์ถ๋ชจํ๊ธฐ ์ํด ์ธ๋ถ์ ์์ ๊ธฐ๋
๊ด์ด ์กฐ์ฑ๋๋ค.\n\n๋ง์ ์ฌ๋๋ค์ ์ธ๊ณ์์ ๊ฐ์ฅ ์์ ํ ๋์ ์ค ํ๋๋ก ์ฌ๊ฒจ์ง๋ ๊ณณ์์ ๊ทธ๋ฌํ ์ฌ์
ํ ๋ฒ์ฃ๊ฐ ์ด๋ป๊ฒ ์ผ์ด๋ ์ ์๋์ง์ ๋ํด ์๋ฌธ์ ์ ๊ธฐํ๋ค. ์ด ๋น๊ทน์ ์ธ ์ฌ๊ฑด์ ์ง์ญ ๊ณต๋ฌด์๋ค์ด ํ์ฌ์ ๋ณด์ ์กฐ์น๋ฅผ ์ฌํ๊ฐํ๊ณ ๊ณต๊ณต ์์ ์ ๊ฐํํ๊ธฐ ์ํ ์ ๋ต์ ๊ณ ์ํ๋๋ก ๋ฐ์ฐจ๋ฅผ ๊ฐํ๋ค. ์ฃผ๋ฏผ๋ค์ด ๊ฐํด ํ์์์ ํ๋์ ํ์
ํ๊ณ ํฅํ ์ ์ฌํ ์ฌ๊ฑด์ด ๋ฐ์ํ์ง ์๋๋ก ๋
ธ๋ ฅํ๋ฉด์ ์ ์ ๊ฑด๊ฐ๊ณผ ๋ฒ์ฃ ์ฌํ์ ๋๋ฌ์ผ ๋
ผ์๋ ์๋ฉด ์๋ก ๋ ์ฌ๋๋ค.\n\n์นผ์ ์ฐ๋ฆฐ ํ, ๊ทธ ์ง์ญ ์ฌํ๋ ํฌ์์๋ค๊ณผ ๊ทธ๋ค์ ๊ฐ์กฑ๋ค์ ๋ํ ์ฌํ๊ณผ ์ง์ง์ ํญ์ฐ์ ํจ๊ป ๋ญ์ณค๋ค ์ด๋ถ ์งํ๊ฐ ์ด๋ ธ๊ณ , ์์กดํ ํฌ์์์ ์น๋ฃ๋น์ ํ๋ณต์ ๋๊ธฐ ์ํ ์จ๋ผ์ธ ๋ชจ๊ธ ์ด๋์ด ์งํ ์ค์ด๋ค.\n\n๊ฒฐ๋ก \n\n์๋๋์์์ ๋น๊ทน์ ์ธ ๊ณต๊ฒฉ์ ๋ฌด๋ถ๋ณํ ํญ๋ ฅ ํ์๊ฐ ์ธ์ ์ด๋์๋ ์ผ์ด๋ ์ ์๋ค๋ ๊ฒ์ ์๋ฆ๋ผ์น๊ฒ ์๊ธฐ์ํค๋ ์ญํ ์ ํ๋ค. ์ด ๋์ฐํ๊ณ ๊ฒ๋ณด๊ธฐ์ ๋ฌด์์์ ์ธ ์ํน ํ์์ ๋ํ ๊ณต๋์ฒด์ ๋์์ ํฅํ ์ด๋ฌํ ์ฌ๊ฑด์ ๋ฐฉ์งํ๊ธฐ ์ํ ๊ฒฐ์๋ ฅ๊ณผ ๊ฒฐ๋จ๋ ฅ ์ค ํ๋์๋ค. ์ฌ๊ฑด์ด ์ ๊ฐ๋๋ฉด์ ๋น๊ทน์ ์ผ๋ก ์งง์ ๋ชฉ์จ์ ์์ ์ฐธํนํ ์ฃฝ์์ ์ฌํผํ๊ณ ์ด ์์ํ ์ ์๋ ์๋ จ์ ๊ณ์ ๊ฒฌ๋๋ฉด์ ํ๋ณต ์ค์ธ ํผํด์๋ฅผ ๋ถ์ํ๊ณ ์๋ ํผํด์์ ๊ทธ ๊ฐ์กฑ๋ค์ ๊ณต๋์ฒด์ ๊ฐ์ด์ ๋จ์ ์๋ค."
ํ์ต ๋ฐฉ๋ฒ : Lora
Lora Config
- lora_alpha: 16
- lora_dropout: 0.05,
- r: 8
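One common way to express the LoRA settings above is with the PEFT library. The following is a rough, untested sketch rather than the author's actual training script; the `query_key_value` target module is an assumption based on the GPT-NeoX architecture used by Polyglot-Ko.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/polyglot-ko-1.3b"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA hyperparameters as listed in this card
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # assumption: GPT-NeoX attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```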
ko-lm-evaluation-harness results
- kobest_boolq 0.502137
- kobest_copa 0.683000
- kobest_hellaswag 0.402000
- kobest_sentineg 0.725441
- kohatespeech 0.420382
- kohatespeech_apeach 0.509549
- kohatespeech_gen_bias 0.450106
- korunsmile 0.426313
- nsmc 0.688540
- pawsx_ko 0.488000
- Average_score 0.529547
|
blueapple8259/TinyKo-V4 | blueapple8259 | "2024-01-11T12:53:34Z" | 1,100 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:psymon/namuwiki_alpaca_dataset",
"dataset:maywell/ko_wikidata_QA",
"dataset:bigcode/starcoderdata",
"dataset:royboy0416/ko-alpaca",
"dataset:beomi/KoAlpaca-v1.1a",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T12:44:49Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- psymon/namuwiki_alpaca_dataset
- maywell/ko_wikidata_QA
- bigcode/starcoderdata
- royboy0416/ko-alpaca
- beomi/KoAlpaca-v1.1a
language:
- ko
pipeline_tag: text-generation
---
An AI model pretrained on several datasets.
## Caution
As with the previous models, it very often produces output that is factually incorrect.
## Prompt
Note: enter only what you want to ask (e.g. "Tell me about apples." -> "Apples are ...").
```
다음은 작업을 설명하는 지침입니다. 요청을 적절하게 완료하는 응답을 작성하십시오.
### 지침:
{prompt}
### 응답:
```
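A minimal way to try this prompt format with `transformers` is sketched below; the example instruction and the generation settings are illustrative assumptions, not values recommended by the author.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="blueapple8259/TinyKo-V4")

# Fill the prompt template shown above with an example instruction
prompt = (
    "다음은 작업을 설명하는 지침입니다. 요청을 적절하게 완료하는 응답을 작성하십시오.\n"
    "### 지침:\n"
    "사과에 대해 알려주세요.\n"
    "### 응답:\n"
)

result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```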
## Datasets
For some of the datasets, only the output field was used.
[psymon/namuwiki_alpaca_dataset](https://huggingface.co/datasets/psymon/namuwiki_alpaca_dataset)
[maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA)
[bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) - since it is too large, only the first 1/4 of the first Python file was used
[royboy0416/ko-alpaca](https://huggingface.co/datasets/royboy0416/ko-alpaca)
[beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
|
mu0gum/AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.2 | mu0gum | "2024-01-16T12:34:49Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T11:49:49Z" | ---
license: cc-by-nc-4.0
---
# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.2
Base model: 42dot/42dot_LLM-PLM-1.3B
Training data: approximately 26,000 examples from a self-built Open Orca-style dataset
Training method: LoRA
Lora Config
- lora_alpha: 16
- lora_dropout: 0.05,
- r: 8
## ko-lm-evaluation-harness(0-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko|
|--|--|--|--|--|--|--|--|--|--|
|0.5014245014245015|0.702|0.434|0.6876574307304786|0.2951167728237792|0.5106100795755968|0.14225053078556263|0.3627087567466955|0.60714|0.5265| |
cocoirun/Yi-Ko-6B-instruct-v1.7 | cocoirun | "2024-01-18T07:14:38Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T06:54:59Z" | ---
license: cc-by-sa-4.0
---
<h1>Instruct model v1.7</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data to extract tasks, about 40,000 training examples (history, science, math, machine reading comprehension, review analysis) were built in-house from open-source NLP data matched to those tasks.
In addition, some data from Open-Orca-Ko was filtered and cleaned, and KoBEST data was added.
Additional training data was built from AIHub common-knowledge and machine reading comprehension data (morphology-related, MRC-related, and summarization).
History and common-knowledge quizzes from various blogs were manually converted into training-data format.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected by hand.
English-translated math/science data was also used as training data.
SFT was carried out on a total of about 110,000 training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and cleaned to train and improve a new version of the model.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Built machine reading comprehension training data by obtaining answers through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was carried out with LoRA on 2x A100 40G GPUs. |
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.61 | jungyuko | "2024-01-23T13:55:23Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T13:43:04Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v0.61
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
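As a rough illustration only, the hyperparameters listed above correspond to a `transformers.TrainingArguments` configuration along these lines; this is an inferred sketch, not the author's training script, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; 24 (per-device batch) x 4 (accumulation steps)
# matches the stated total train batch size of 96 for a single device.
training_args = TrainingArguments(
    output_dir="./davinci-output",      # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                          # "Native AMP" mixed precision (fp16 assumed)
)
```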
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
Edentns/DataVortexS-10.7B-dpo-v1.3 | Edentns | "2024-02-15T15:00:30Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"base_model:yanolja/KoSOLAR-10.7B-v0.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-25T09:23:02Z" | ---
tags:
- text-generation
license: cc-by-nc-4.0
language:
- ko
base_model: yanolja/KoSOLAR-10.7B-v0.2
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.3**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **ChatML** format.
E.g.
```python
text = """\
<|im_start|>system
๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค.<|im_end|>
<|im_start|>user
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?<|im_end|>
<|im_start|>assistant
๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค.<|im_end|>
<|im_start|>user
์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?<|im_end|>
<|im_start|>assistant
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | -----------: | -----------: | -----------: |
| kobest_boolq | 0.91154 | 0.927338 | 0.92373 | 0.653224 |
| kobest_copa | 0.747317 | 0.826961 | 0.842943 | 0.860989 |
| kobest_hellaswag | 0.445855 | 0.459065 | 0.462306 | 0.4721 |
| kobest_sentineg | 0.483219 | 0.95466 | 0.964734 | 0.972292 |
| **Average** | **0.646983** | **0.792006** | **0.798428** | **0.739651** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 57.65 | 52.99 | 64.8 | 54.86 | 53.87 | 61.75 |
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.3")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.3")
messages = [
{"role": "system", "content": "๋น์ ์ ์ฌ๋๋ค์ด ์ ๋ณด๋ฅผ ์ฐพ์ ์ ์๋๋ก ๋์์ฃผ๋ ์ธ๊ณต์ง๋ฅ ๋น์์
๋๋ค."},
{"role": "user", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์ด๋์ผ?"},
{"role": "assistant", "content": "๋ํ๋ฏผ๊ตญ์ ์๋๋ ์์ธ์
๋๋ค."},
{"role": "user", "content": "์์ธ ์ธ๊ตฌ๋ ์ด ๋ช ๋ช
์ด์ผ?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
This model is licensed under [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/), which allows others to share and adapt the model for non-commercial purposes.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
qutrino/7b-finetune | qutrino | "2024-01-27T19:19:18Z" | 1,100 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:kyujinpy/KOpen-platypus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-27T18:17:33Z" | ---
datasets:
- kyujinpy/KOpen-platypus
license: apache-2.0
---
The beomi/open-llama-2-ko-7b model was trained on kyujinpy/KOpen-platypus.
## Fine-tuning method
- lora
- lora_r: 16
- lora_alpha: 16
- lora_dropout: 0.05
## basemodel
- beomi/open-llama-2-ko-7b
## instruction dataset
- kyujinpy/KOpen-platypus only
|
blueapple8259/TinyKo-v5-c | blueapple8259 | "2024-02-03T05:48:31Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:maywell/korean_textbooks",
"dataset:nlpai-lab/kullm-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-03T05:32:37Z" | ---
license: mit
datasets:
- maywell/korean_textbooks
- nlpai-lab/kullm-v2
language:
- ko
---
A model created by fine-tuning [TinyKo-v5-b](https://huggingface.co/blueapple8259/TinyKo-v5-b) on the [kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) dataset.
Caution: performance is very poor and hallucination is very severe.
## Model info
model type: llama
num hidden layers: 6
hidden size: 127
num attention heads: 16
num key value heads: 4
|
blueapple8259/SmallKo | blueapple8259 | "2024-03-02T04:32:05Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:mc4",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-02T04:21:41Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- mc4
language:
- ko
widget:
- text: "안녕하세요"
---
## Dataset
Korean text was extracted and cleaned from the [mc4](https://huggingface.co/datasets/mc4) dataset and used for training.
## Caution
Hate speech and similar expressions were not removed from the dataset before use.
## Model info
model type: llama
num hidden layers: 8
hidden size: 512
num attention heads: 16
num key value heads: 4
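For reference, a configuration with these dimensions can be instantiated in `transformers` roughly as below; this is an illustrative sketch that assumes the first value above is the layer count, and the vocabulary size is a placeholder not stated in this card.
```python
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    num_hidden_layers=8,     # assumption: the first value listed above is the layer count
    hidden_size=512,
    num_attention_heads=16,
    num_key_value_heads=4,
    vocab_size=32000,        # placeholder; not stated in this card
)
model = LlamaForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```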
|
kurugai/Kurugai-EEVE-v1.1 | kurugai | "2024-03-12T05:30:02Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:kurugai/MedText",
"base_model:kurugai/Kurugai-EEVE-v1.0",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-10T13:48:09Z" | ---
license: cc-by-nc-sa-4.0
base_model: kurugai/Kurugai-EEVE-v1.0
datasets:
- kurugai/MedText
language:
- ko
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**kurugai/Kurugai-EEVE-v1.1** is a model trained on the **BI55/MedText** dataset, using **kurugai/Kurugai-EEVE-v1.0** as the base model.
# Training time
Trained for 1 hour on a single RTX 8000 GPU.
# Acknowledgements
This model was made with the support of my wife. My thanks go to her. |
Deepnoid/deep-solar-v3.0 | Deepnoid | "2024-03-19T01:35:15Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-14T02:22:57Z" | ---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Developed by : [Deepnoid](https://www.deepnoid.com/) AI research team
|
freewheelin/free-solar-instrunction-v0.2 | freewheelin | "2024-03-14T08:39:40Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"en",
"arxiv:2312.15166",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-14T08:00:34Z" | ---
language:
- ko
- en
license: mit
---
# Model Card for free-solar-instruction-v0.2
## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team
## Hardware and Software
* **Training Factors**: We fine-tuned this model using the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer)
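A typical TRL supervised fine-tuning setup looks roughly like the sketch below; the dataset file, text field, and sequence length are placeholders, since the actual training data and script are not disclosed in this card.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

base = "davidkim205/komt-solar-10.7b-sft-v5"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder dataset with a plain-text "text" column
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes a plain-text column
    max_seq_length=2048,         # placeholder sequence length
)
trainer.train()
```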
## Method
- This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf).
## Base Model
- [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5) |
kody0525/KOpen-platypus-polyglot-ko-12.8b | kody0525 | "2024-03-20T03:37:43Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-15T00:49:49Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jieunhan/solar_merge_test_1 | jieunhan | "2024-04-21T05:18:53Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"beomi/OPEN-SOLAR-KO-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-20T06:46:13Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- beomi/OPEN-SOLAR-KO-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
base_model:
- beomi/OPEN-SOLAR-KO-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
---
# solar_merge_test_1
## ๐งฉ Configuration
```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
dtype: float16
experts:
- source_model: beomi/OPEN-SOLAR-KO-10.7B
positive_prompts: ["You are a kind, general-purpose assistant."]
- source_model: hyeogi/SOLAR-10.7B-dpo-v1
positive_prompts: ["You are a kind assistant."]
gate_mode: cheap_embed
tokenizer_source: base
```
## ๐ป Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jieunhan/solar_merge_test_1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
gelukuMLG/llama-3-stinky-v2-8B-GGUF | gelukuMLG | "2024-05-13T02:27:32Z" | 1,100 | 4 | null | [
"gguf",
"license:llama3",
"region:us"
] | null | "2024-05-12T12:31:31Z" | ---
license: llama3
---
These are static GGUF quants of: nbeerbower/llama-3-stinky-v2-8B
Original model link: https://huggingface.co/nbeerbower/llama-3-stinky-v2-8B
The quants have been tested and work properly with the latest version of Koboldcpp.
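Besides Koboldcpp, GGUF files like these can generally be loaded with llama.cpp-based tooling. The snippet below is an untested sketch using `llama-cpp-python`; the filename is a placeholder that depends on which quant you download.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-stinky-v2-8B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,
)

out = llm("Write a short haiku about merged language models.", max_tokens=64)
print(out["choices"][0]["text"])
```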
|
mssma/ko-solar-10.7b-v0.2b | mssma | "2024-05-13T05:00:36Z" | 1,100 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-13T04:43:53Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
# usage
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "mssma/ko-solar-10.7b-v0.2b"
model = AutoModelForCausalLM.from_pretrained(
path,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(path)
``` |
elinas/Llama-3-15B-Instruct-ft-v2 | elinas | "2024-05-31T19:57:48Z" | 1,100 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"finetune",
"conversational",
"dataset:Chat-Error/Pure-dove-sharegpt",
"base_model:elinas/Llama-3-15B-Instruct-zeroed",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-19T00:52:27Z" | ---
base_model:
- elinas/Llama-3-15B-Instruct-zeroed
library_name: transformers
tags:
- mergekit
- merge
- finetune
datasets:
- Chat-Error/Pure-dove-sharegpt
license: llama3
---
# Llama-3-15B-Instruct-zeroed-ft-v2
This is a QLoRA **finetune** of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The model is based on a "zeroed" passthrough merge of [Llama-3-15B-Instruct-zeroed](https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed)
This was primarily an experiment to see how a passthrough merge will respond to further finetuning of all LoRA modules.
The model was finetuned on **8192 context length** and it can possibly be extended using RoPE up to 32k.
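Extending the context window with RoPE scaling is untested here; if you want to experiment, a hedged example using the `rope_scaling` option that `transformers` exposes for Llama models could look like this (the dynamic scaling factor is an assumption, not a tested setting).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-15B-Instruct-ft-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    rope_scaling={"type": "dynamic", "factor": 4.0},  # assumption: ~4x the 8192 training length
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```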
**v3 of the model will contain significantly more data, primarily human focused, aimed to excel at writing as well as maintaining logic, coherency, and continuity.**
**[GGUF Quants provided by @gelukuMLG](https://huggingface.co/gelukuMLG/Llama-3-15B-Instruct-ft-v2-GGUF)**
## Datasets
* [Chat-Error/Pure-dove-sharegpt](https://huggingface.co/datasets/Chat-Error/Pure-dove-sharegpt)
A small, high quality, curated dataset was used as a PoC / validation on stabilizing the model after the original passthrough merge.
## Finetuning details
This is a QLoRA model and all of the LoRA modules were targeted this time to ensure sufficient training before moving on to larger datasets.
the first version of this model only targeted **o_proj** and **up_proj**
```yaml
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lora_modules_to_save:
- embed_tokens
- lm_head
```
The model is coherent even with training the "zeroed" layers plus the additional layers, as this was the recommendation from [Charles Goddard](https://huggingface.co/chargoddard) (mergekit developer) - thank you for sharing the method of merging as well as Toasty
Pigeon for bringing it to my attention!
```yaml
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 3
- total_eval_batch_size: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 1
```
Optimizer `paged_adamw_8bit` and Deepspeed ZeRO 3 was used at a LR of `1e-5` using the cosine scheduler for 1 epoch on 3x3090s taking 4 hours total.
**Unsloth** was used for speed and memory savings.
Sample packing and padding were disabled to significantly reduce VRAM consumption at the cost of speed.
W&B Run Summary
```
wandb: eval/loss 0.90895
wandb: eval/runtime 463.4688
wandb: eval/samples_per_second 0.833
wandb: eval/steps_per_second 0.278
wandb: total_flos 8270790524928.0
wandb: train/epoch 1.0
wandb: train/global_step 1157
wandb: train/grad_norm 7.3847
wandb: train/learning_rate 0.0
wandb: train/loss 0.8702
wandb: train_loss 0.87814
wandb: train_runtime 16425.2713
wandb: train_samples_per_second 0.211
wandb: train_steps_per_second 0.07
```
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Model Evaluation
TBD
If you have any questions or comments on the model, feel free to open a discussion in the community tab.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
DeepPavlov/rudialogpt3_medium_based_on_gpt2_v2 | DeepPavlov | "2022-11-07T14:59:03Z" | 1,099 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-11-07T13:26:22Z" | ---
license: other
---
|
Undi95/Dawn-v2-70B | Undi95 | "2023-11-08T22:34:24Z" | 1,099 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-06T15:16:55Z" | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
<center>[<a href="https://huggingface.co/Undi95/Dawn-v2-70B">fp16</a> - <a href="https://huggingface.co/Undi95/Dawn-v2-70B-GGUF">gguf</a> - exl2 : <a href="https://huggingface.co/Undi95/Dawn-v2-70B-2.55bpw-h6-exl2">2.55bpw</a>]</center>
</br>
<div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/Cxcfqi4WdtXCNLnaIqSRB.png" style="width: 75%; min-width: 200px; display: block; margin: auto;">
</div>
<!-- description start -->
## Description
This repo contains fp16 files of Dawn-70B, a merge I have done with the new [layer shuffle](https://github.com/cg123/mergekit/blob/main/mergekit/scripts/layershuffle.py) method from mergekit.
[UtopiaXL](https://huggingface.co/Undi95/UtopiaXL-13B) was a huge success for me, I really liked it, so I took the same path to do this 70B: A good base, some psychologic data, some medical data, a little bit of this, of that, and LimaRP at the end as always.
NOTE: This repo contains the file [measurement.json](https://huggingface.co/Undi95/Dawn-v2-70B/blob/main/measurement.json) needed to make your own exl2 quant (I use [wikitext](https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train/0000.parquet)).
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
- [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)
- [ehartford/Samantha-1.11-70b](https://huggingface.co/ehartford/Samantha-1.11-70b)
- [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b)
- [augtoma/qCammel-70-x](https://huggingface.co/augtoma/qCammel-70-x)
- [jondurbin/airoboros-l2-c70b-3.1.2](https://huggingface.co/jondurbin/airoboros-l2-c70b-3.1.2)
- [fangloveskari/ORCA_LLaMA_70B_QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA)
- [Doctor-Shotgun/limarpv3-llama2-70b-qlora](https://huggingface.co/Doctor-Shotgun/limarpv3-llama2-70b-qlora)
<!-- description end -->
## The sauce
```
!mergekit-layershuffle ./Dawn-v2-70B \
--model Sao10K/Euryale-1.3-L2-70B --weight 0.3 \
--model Xwin-LM/Xwin-LM-70B-V0.1 --weight 0.2 \
--model ehartford/Samantha-1.11-70b --weight 0.1 \
--model NousResearch/Nous-Hermes-Llama2-70b --weight 0.05 \
--model augtoma/qCammel-70-x --weight 0.05 \
--model jondurbin/airoboros-l2-c70b-3.1.2 --weight 0.2 \
--model fangloveskari/ORCA_LLaMA_70B_QLoRA --weight 0.1 \
--write-yaml Dawn-v2-70B.yaml
=========================
merge_method: passthrough
slices:
- sources:
- layer_range:
- 0
- 1
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 1
- 2
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 2
- 3
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 3
- 4
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 4
- 5
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 5
- 6
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 6
- 8
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 8
- 9
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 9
- 10
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 10
- 11
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 11
- 12
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 12
- 13
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 13
- 14
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 14
- 15
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 15
- 16
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 16
- 17
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 17
- 18
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 18
- 19
model: NousResearch/Nous-Hermes-Llama2-70b
- sources:
- layer_range:
- 19
- 20
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 20
- 21
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 21
- 22
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 22
- 23
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 23
- 24
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 24
- 25
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 25
- 27
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 27
- 28
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 28
- 29
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 29
- 30
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 30
- 32
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 32
- 33
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 33
- 34
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 34
- 35
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 35
- 37
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 37
- 38
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 38
- 39
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 39
- 40
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 40
- 41
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 41
- 42
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 42
- 43
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 43
- 44
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 44
- 45
model: NousResearch/Nous-Hermes-Llama2-70b
- sources:
- layer_range:
- 45
- 46
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 46
- 48
model: ehartford/Samantha-1.11-70b
- sources:
- layer_range:
- 48
- 49
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 49
- 50
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 50
- 51
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 51
- 54
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 54
- 55
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 55
- 56
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 56
- 58
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 58
- 59
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 59
- 60
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 60
- 62
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 62
- 63
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 63
- 64
model: fangloveskari/ORCA_LLaMA_70B_QLoRA
- sources:
- layer_range:
- 64
- 65
model: NousResearch/Nous-Hermes-Llama2-70b
- sources:
- layer_range:
- 65
- 66
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 66
- 67
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 67
- 68
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 68
- 70
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 70
- 71
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 71
- 72
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 72
- 73
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 73
- 75
model: jondurbin/airoboros-l2-c70b-3.1.2
- sources:
- layer_range:
- 75
- 76
model: Sao10K/Euryale-1.3-L2-70B
- sources:
- layer_range:
- 76
- 77
model: augtoma/qCammel-70-x
- sources:
- layer_range:
- 77
- 78
model: Xwin-LM/Xwin-LM-70B-V0.1
- sources:
- layer_range:
- 78
- 79
model: NousResearch/Nous-Hermes-Llama2-70b
- sources:
- layer_range:
- 79
- 80
model: Xwin-LM/Xwin-LM-70B-V0.1
=========================
=> Applying Doctor-Shotgun/limarpv3-llama2-70b-qlora x 0.35
```
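The `x 0.35` notation above means the LoRA's low-rank delta is folded into the merged weights at 35% strength. As a rough illustration only (not the exact mergekit/PEFT code used for this merge), the arithmetic looks like this:

```python
import torch

def apply_lora_delta(base_weight: torch.Tensor,
                     lora_A: torch.Tensor,
                     lora_B: torch.Tensor,
                     lora_alpha: float,
                     lora_rank: int,
                     merge_weight: float = 0.35) -> torch.Tensor:
    # Standard LoRA update (alpha / r) * B @ A, additionally scaled by the merge weight
    delta = (lora_alpha / lora_rank) * (lora_B @ lora_A)
    return base_weight + merge_weight * delta
```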
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
A big thanks to [Charles](https://huggingface.co/chargoddard) for adding the layer shuffle method to his tool [mergekit](https://github.com/cg123/mergekit/tree/main) and [Henky/KoboldAI](https://koboldai.org/) for the machine he let me use.
If you want to support me, you can [here](https://ko-fi.com/undiai). |
Weyaxi/OpenHermes-2.5-Nebula-v2-7B | Weyaxi | "2023-11-12T11:16:10Z" | 1,099 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-12T11:12:59Z" |
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# OpenHermes-2.5-Nebula-v2-7B
OpenHermes-2.5-Nebula-v2-7B is a merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
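A minimal sketch of how such a base-plus-LoRA merge can be reproduced with PEFT (the author's exact procedure is not documented in this card; repository names are taken from the links above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

# Attach the LoRA adapter, then fold its weights into the base model
merged = PeftModel.from_pretrained(base, "PulsarAI/Nebula-v2-7B-Lora").merge_and_unload()
merged.save_pretrained("OpenHermes-2.5-Nebula-v2-7B")
tokenizer.save_pretrained("OpenHermes-2.5-Nebula-v2-7B")
```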
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
jungyuko/DAVinCI-Yi-Ko-6B-v0.61-ff-e1 | jungyuko | "2024-01-24T05:39:07Z" | 1,099 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T04:56:06Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-Yi-Ko-6B-v0.61-ff-e1
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
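A minimal sketch of these hyperparameters expressed as Hugging Face `TrainingArguments`; the dataset, model loading, and `Trainer` wiring are not documented in this card and are omitted here:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="davinci-yi-ko-6b-ft",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,      # 4 x 8 = total train batch size of 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                          # "Native AMP" mixed precision
)
```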
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |
kmyoon/mzllm-solar-10.7B | kmyoon | "2024-02-08T14:26:30Z" | 1,099 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-08T09:26:17Z" | ---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mightbe/EEVE-10.8B-Multiturn | mightbe | "2024-03-20T06:19:36Z" | 1,099 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T12:31:29Z" | ---
license: apache-2.0
---
# EEVE-10.8B-Multiturn
This model was trained on the KoVast dataset and a dataset from argilla.
It is a by-product of the Jolteon model training process.
**Note**
This model is an early checkpoint that has not been fully trained. Problems such as short answers and weak reasoning may occur.
## Prompt Format
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {instruction}
### Response: {response}</s><|im_end|>
### Instruction: {instruction}
### Response: {response}</s><|im_end|>
```
## License
This model is released under the apache-2.0 license. If you distribute a dataset generated using this model, you are encouraged to state that the model was used.
## Thanks to
- [Sionic AI](https://sionic.ai/), for providing the A100 cluster
## Contact
- [Discord Server Link](https://discord.gg/MrBt3PXdXc) |
ahnyeonchan/legendary-river-koalpaca | ahnyeonchan | "2024-05-14T04:02:41Z" | 1,099 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"instruction_ft",
"ko",
"arxiv:2310.06694",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-17T03:50:32Z" | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- instruction_ft
---
We built this model based on princeton-nlp/Sheared-LLaMA-1.3B.
We fine-tuned the model on Korean Wikipedia and KoAlpaca data using LoRA.
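A minimal sketch of a LoRA setup like the one described above; the rank, target modules, and data preprocessing are not documented in this card and are assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")

lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,                       # assumed scaling
    target_modules=["q_proj", "v_proj"], # typical LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```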
Please see the following information about princeton-nlp/Sheared-LLaMA-1.3B.
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)
**License**: Must comply with license of Llama2 since it's a model derived from Llama2.
---
Sheared-LLaMA-1.3B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains in the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data) to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with HuggingFace via
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
- Smaller-scale
- Same vocabulary as LLaMA1 and LLaMA2
- Derived with a budget of 50B tokens by utilizing existing strong LLMs
## Downstream Tasks
We evaluate on an extensive set of downstream tasks including reasoning, reading comprehension, language modeling and knowledge intensive tasks. Our Sheared-LLaMA models outperform existing large language models.
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| LLaMA2-7B | 2T | 64.6 |
**1.3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-1.3B | 300B | 48.2 |
| Pythia-1.4B | 300B | 48.9 |
| **Sheared-LLaMA-1.3B** | **50B** | **51.0** |
**3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-2.7B | 300B | 51.4 |
| Pythia-2.8B | 300B | 52.5 |
| INCITE-Base-3B | 800B | 54.7 |
| Open-LLaMA-3B-v1 | 1T | 55.1 |
| Open-LLaMA-3B-v2 | 1T | 55.7 |
| Sheared-LLaMA-2.7B | 50B | 56.7 |
## Bibtex
```
@article{xia2023sheared,
title={Sheared llama: Accelerating language model pre-training via structured pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
journal={arXiv preprint arXiv:2310.06694},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_princeton-nlp__Sheared-LLaMA-1.3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 31.47 |
| ARC (25-shot) | 32.85 |
| HellaSwag (10-shot) | 60.91 |
| MMLU (5-shot) | 25.71 |
| TruthfulQA (0-shot) | 37.14 |
| Winogrande (5-shot) | 58.64 |
| GSM8K (5-shot) | 0.45 |
| DROP (3-shot) | 4.56 | |
thesven/Hermes-2-Theta-Llama-3-8B-GGUF | thesven | "2024-05-22T11:42:05Z" | 1,099 | 0 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"region:us"
] | null | "2024-05-18T04:42:11Z" | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
license: llama3
---
# - Hermes-2 Θ Llama-3 8B
# - Quantization Description
This repo contains GGUF quantized versions of the Nous Research Hermes-2-Pro-Llama-3-8B model.
The model is supplied in different quantizations so that you can see what works best on the hardware you would like to run it on.
The repo contains quantizations in the following types:
Q4_0
Q4_1
Q4_K
Q4_K_S
Q4_K_M
Q5_0
Q5_1
Q5_K
Q5_K_M
Q5_K_S
Q6_K
Q8_0
Q2_K
Q3_K
Q3_K_S
Q3_K_XS
IQ2_K
IQ3_S
IQ3_XXS
IQ4_NL
IQ4_XS
IQ5_K
IQ2_S
IQ2_XS
IQ1_S
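A minimal sketch of loading one of these quantized files with `llama-cpp-python`; the filename below is an assumption, use whichever quantization you downloaded:

```python
from llama_cpp import Llama

llm = Llama(model_path="Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```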
<div style="text-align: center;">
<a href="https://github.com/thesven/GGUF-n-Go">
<img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;">
</a>
</div>
## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)  # apply_chat_template returns a tensor of input ids here, so pass it positionally
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
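A minimal sketch of parsing that tool call with the standard library (the official helpers live in the Hermes-Function-Calling repo linked below; this is only illustrative):

```python
import json
import re

def parse_tool_calls(completion: str) -> list:
    """Extract every {"arguments": ..., "name": ...} object from <tool_call> blocks."""
    matches = re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", completion, re.DOTALL)
    return [json.loads(m) for m in matches]

example = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(parse_tool_calls(example))
# [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```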
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
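For illustration, a schema can be produced from a pydantic (v2) model like so; the class and field names here are made up, and the official script is `jsonmode.py` in the repo linked above:

```python
import json
from pydantic import BaseModel

class StockSummary(BaseModel):
    symbol: str
    company_name: str
    market_cap: int

# Build the schema text that goes inside the <schema> tags of the system prompt
schema = json.dumps(StockSummary.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```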
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# How to cite:
```bibtext
@misc{Hermes-2-Theta-Llama-3-8B,
url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
``` |
damgomz/fp_bs16_lr5_x2 | damgomz | "2024-05-25T17:16:32Z" | 1,099 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-23T10:05:43Z" | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-25T19:16:26'
project_name: fp_bs16_lr5_x2_emissions_tracker
run_id: 9707263d-9e9a-4966-a8a4-8b04469d920b
duration: 204006.114985466
emissions: 0.133456463837719
emissions_rate: 6.541787428637955e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 2.408401058182943
gpu_energy: 0
ram_energy: 0.4250093542863928
energy_consumed: 2.8334104124693544
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 204006.114985466 |
| Emissions (Co2eq in kg) | 0.133456463837719 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 2.408401058182943 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.4250093542863928 |
| Consumed energy (kWh) | 2.8334104124693544 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.392711771347022 |
| Emissions (Co2eq in kg) | 0.07990239503597418 |
## Note
23 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | fp_bs16_lr5_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 40959 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 14.935577 | 12.206219 |
| 0.5 | 3.362514 | 3.065457 |
| 1.0 | 2.929414 | 2.875572 |
| 1.5 | 2.817826 | 2.758826 |
| 2.0 | 2.665450 | 2.674433 |
| 2.5 | 2.603621 | 2.611908 |
| 3.0 | 2.542651 | 2.553587 |
| 3.5 | 2.480111 | 2.503488 |
| 4.0 | 2.446439 | 2.465049 |
| 4.5 | 2.394290 | 2.422634 |
| 5.0 | 2.337941 | 2.389408 |
| 5.5 | 2.313014 | 2.357817 |
| 6.0 | 2.287183 | 2.340843 |
|
lmstudio-community/internlm2-math-plus-20b-GGUF | lmstudio-community | "2024-05-30T17:52:27Z" | 1,099 | 0 | null | [
"gguf",
"math",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-math-plus-20b",
"license:other",
"region:us"
] | text-generation | "2024-05-30T17:36:00Z" | ---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
quantized_by: bartowski
lm_studio:
param_count: 20b
use_case: math
release_date: 24-05-2024
model_creator: InternLM
prompt_template: ChatML
system_prompt: none
base_model: InternLM
original_repo: internlm/internlm2-math-plus-20b
base_model: internlm/internlm2-math-plus-20b
---
## 💫 Community Model> InternLM2 Math Plus 20b by InternLM
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [InternLM](https://huggingface.co/internlm)<br>
**Original model**: [internlm2-math-plus-20b](https://huggingface.co/internlm/internlm2-math-plus-20b)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3001](https://github.com/ggerganov/llama.cpp/releases/tag/b3001)<br>
## Model Summary:
InternLM2 Math Plus is a series of math proficient models by InternLM, following up on their original series of math models.<br>
This series includes state-of-the-art bilingual open-source math reasoning models in several sizes. They can be used as solvers, provers, verifiers, and augmentors, with chain-of-thought reasoning.
## Prompt template:
Choose the `ChatML` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Technical Details
Math Plus has improved informal math reasoning performance (chain-of-thought and code-interpreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving).<br>
InternLM2-Math models are continued pretrained from InternLM2-Base on ~100B high-quality math-related tokens and SFT'd on ~2M bilingual math supervised examples.<br>
More details can be found here: https://github.com/InternLM/InternLM-Math
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
Geotrend/distilbert-base-fr-cased | Geotrend | "2023-04-02T15:53:50Z" | 1,098 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"fr",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language: fr
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-fr-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-fr-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
omarques/autotrain-dogs-and-cats-1527055142 | omarques | "2022-09-21T21:38:24Z" | 1,098 | 1 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:omarques/autotrain-data-dogs-and-cats",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-09-21T21:37:41Z" | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.8187420113922029
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1527055142
- CO2 Emissions (in grams): 0.8187
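A minimal usage sketch; the label names returned depend on the training data and are not listed in this card, and the example image URL is taken from the widget examples above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="omarques/autotrain-dogs-and-cats-1527055142")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```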
## Validation Metrics
- Loss: 0.068
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
allenai/open-instruct-human-mix-65b | allenai | "2023-06-29T18:51:34Z" | 1,098 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-07T17:41:36Z" | ---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- en
extra_gated_prompt: >-
To request access to the models, please fill out this form, and we'll review
and let you know if your use case is approved. The information you provide
below will be used solely to assess eligibility to access these models.
extra_gated_fields:
First Name: text
Last Name: text
Institution: text
Country (where user is located): text
Intended Use: text
Previous Related Publications: text
I agree to abide by the terms of the license associated to this artifact, including domain and used-based restrictions: checkbox
---
# Open-Instruct Human-mix 65B
This model is a 65B LLaMa model finetuned on a mixture of human-authored datasets (FLAN V2, CoT, Dolly, and Open Assistant 1). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
The licenses can be found in [our codebase](https://github.com/allenai/open-instruct/tree/main/model_licenses) - see `tulu_license.txt` for the model license and `llama_license.txt` for the Llama license.
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
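A minimal sketch of building a correctly formatted prompt string, including the trailing newline after `<|assistant|>` that the note above calls out:

```python
def format_prompt(user_message: str) -> str:
    # Note the newline after <|assistant|> - omitting it can hurt generation quality
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_prompt("What is the capital of France?")
```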
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 60.7 | 61.6 | 8.0 | 57.5 | 50.1 | 52.7 | 58.5 | 15.9 | 24.5 | 43.2 | 46.5 | 43.8 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothรฉe Lacroix and Baptiste Roziรจre and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{kรถpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Kรถpf and Yannic Kilcher and Dimitri von Rรผtte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richรกrd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Undi95/UtopiaXL-13B | Undi95 | "2023-11-05T01:43:42Z" | 1,098 | 16 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-04T23:47:26Z" | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
<!-- description start -->
## Description
This repo contains fp16 files of UtopiaXL-13B, a merge I have done with the new [layer shuffle](https://github.com/cg123/mergekit/blob/main/mergekit/scripts/layershuffle.py) method from mergekit (thank you Charles for adding my request to your project!)
This is more a proof of concept showing the following:
- Llama2 is very flexible
- Llama2 doesn't care what each layer was finetuned on specifically, as long as you keep the layers in the same order
- A clean merge (no TIES, no SLERP, etc.) using only layer shuffling is possible without breaking anything
- Deleting special tokens / using a model with special tokens doesn't break the model
- Alpaca wins, always. So use it.
The name "XL" comes from the absurd number of models pushed into it.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Undi95/Utopia-13B](https://huggingface.co/Undi95/Utopia-13B)
- [KoboldAI/LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
- [Undi95/PsyMedRP-v1-13B](https://huggingface.co/Undi95/PsyMedRP-v1-13B)
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Heralax/Cat-0.5](https://huggingface.co/Heralax/Cat-0.5)
- [KoboldAI/LLaMA2-13B-TiefighterLR](https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR)
- [Heralax/Augmental-13b-two-epochs](https://huggingface.co/Heralax/Augmental-13b-two-epochs)
- [Undi95/Storytelling-v2.1-13B-lora](https://huggingface.co/Undi95/Storytelling-v2.1-13B-lora)
- [Undi95/LimaRP-UtopiaXL-13B-v3-lora](https://huggingface.co/Undi95/LimaRP-UtopiaXL-13B-v3-lora)
<!-- description end -->
## The sauce
```
!mergekit-layershuffle ./UtopiaXL \
--model Undi95/Utopia-13B --weight 0.4 \
--model KoboldAI/LLAMA2-13B-Holodeck-1 --weight 0.1 \
--model Undi95/PsyMedRP-v1-13B --weight 0.1 \
--model PygmalionAI/pygmalion-2-13b --weight 0.25 \
--model Heralax/Cat-0.5 --weight 0.1 \
--model KoboldAI/LLaMA2-13B-TiefighterLR --weight 0.1 \
--model Heralax/Augmental-13b-two-epochs --weight 0.1 \
--write-yaml UtopiaXL.yaml
=========================
merge_method: passthrough
slices:
- sources:
- layer_range:
- 0
- 1
model: KoboldAI/LLAMA2-13B-Holodeck-1
- sources:
- layer_range:
- 1
- 4
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 4
- 5
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 5
- 6
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 6
- 7
model: KoboldAI/LLaMA2-13B-TiefighterLR
- sources:
- layer_range:
- 7
- 8
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 8
- 9
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 9
- 10
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 10
- 13
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 13
- 14
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 14
- 17
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 17
- 18
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 18
- 19
model: KoboldAI/LLaMA2-13B-TiefighterLR
- sources:
- layer_range:
- 19
- 22
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 22
- 23
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 23
- 25
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 25
- 27
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 27
- 28
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 28
- 30
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 30
- 31
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 31
- 32
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 32
- 33
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 33
- 34
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 34
- 35
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 35
- 36
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 36
- 37
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 37
- 38
model: KoboldAI/LLAMA2-13B-Holodeck-1
- sources:
- layer_range:
- 38
- 39
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 39
- 40
model: KoboldAI/LLAMA2-13B-Holodeck-1
=========================
=> Applying Undi95/Storytelling-v2.1-13B-lora x 0.1
=> Trained on LimaRP for +2h
=> Applying Undi95/LimaRP-UtopiaXL-13B-v3-lora x 0.35
```
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
A big thanks to [Charles](https://huggingface.co/chargoddard) for adding the layer shuffle method to his tool [mergekit](https://github.com/cg123/mergekit/tree/main) and [Henky/KoboldAI](https://koboldai.org/) for the machine he let me use.
If you want to support me, you can [here](https://ko-fi.com/undiai). |