modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
Yntec/DreamLikeRemix | Yntec | "2023-09-02T08:58:22Z" | 1,116 | 3 | diffusers | [
"diffusers",
"safetensors",
"anime",
"Dreamlike",
"art",
"Retro",
"Elldreths",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-11T14:26:00Z" | ---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- Dreamlike
- art
- Retro
- Elldreths
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
# DreamLikeRemix
Samples and prompts:


beautiful background, beautiful detailed girl, Cartoon Pretty CUTE Girl, sitting on a box of cherries, DETAILED CHIBI EYES, holding antique slot machine, detailed hair, Ponytail, key shot at computer monitor, Magazine ad, iconic, 1940, sharp focus. Acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and and leyendecker
A mix of Dreamlike Diffusion and a little bit of Elldreths Retro Mix.
Full recipe:
# Add Difference 1.0
Primary model:
Dreamlike Diffusion
Secondary model:
Elldreths Retro Mix
Tertiary model:
v1-5-pruned-fp16-no-ema
Output Model:
Temporary
# Weighted Sum 0.85
Primary model:
Temporary
Secondary model:
Dreamlike Diffusion
Output Model:
dreamLikeRemix
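As a rough illustration, the recipe above corresponds to the following state-dict arithmetic (a sketch only; it assumes the common checkpoint-merger convention where Add Difference computes A + (B - C) * M and Weighted Sum computes A * (1 - M) + B * M, and the checkpoint file names are placeholders):
```python
import torch

# Placeholder checkpoint paths for the three source models.
primary   = torch.load("dreamlike-diffusion-1.0.ckpt", map_location="cpu")["state_dict"]
secondary = torch.load("elldreths-retro-mix.ckpt",     map_location="cpu")["state_dict"]
tertiary  = torch.load("v1-5-pruned-fp16-no-ema.ckpt", map_location="cpu")["state_dict"]

# Step 1 - Add Difference at multiplier 1.0: Temporary = primary + 1.0 * (secondary - tertiary)
temporary = {
    k: primary[k] + 1.0 * (secondary[k] - tertiary[k])
    for k in primary
    if k in secondary and k in tertiary
}

# Step 2 - Weighted Sum at multiplier 0.85, with Temporary as primary and
# Dreamlike Diffusion as secondary: output = temporary * 0.15 + dreamlike * 0.85
output = {k: temporary[k] * 0.15 + primary[k] * 0.85 for k in temporary}

torch.save({"state_dict": output}, "dreamLikeRemix.ckpt")
```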
Original pages:
https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0
https://civitai.com/models/1474/elldreths-retro-mix |
TheBloke/medalpaca-13B-GGUF | TheBloke | "2023-09-27T12:53:27Z" | 1,116 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"medical",
"text-generation",
"en",
"arxiv:2303.14070",
"base_model:medalpaca/medalpaca-13b",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-20T02:21:25Z" | ---
language:
- en
license: other
library_name: transformers
tags:
- medical
model_name: Medalpaca 13B
base_model: medalpaca/medalpaca-13b
inference: false
model_creator: medalpaca
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Medalpaca 13B - GGUF
- Model creator: [medalpaca](https://huggingface.co/medalpaca)
- Original model: [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [medalpaca's Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/medalpaca-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GGUF)
* [medalpaca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/medalpaca/medalpaca-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [medalpaca-13b.Q2_K.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [medalpaca-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [medalpaca-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [medalpaca-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [medalpaca-13b.Q4_0.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [medalpaca-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [medalpaca-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [medalpaca-13b.Q5_0.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [medalpaca-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [medalpaca-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [medalpaca-13b.Q6_K.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [medalpaca-13b.Q8_0.gguf](https://huggingface.co/TheBloke/medalpaca-13B-GGUF/blob/main/medalpaca-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/medalpaca-13B-GGUF and below it, a specific filename to download, such as: medalpaca-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/medalpaca-13B-GGUF medalpaca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/medalpaca-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/medalpaca-13B-GGUF medalpaca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m medalpaca-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
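For example, the interactive equivalent of the command above would be (adjust `-ngl` and `-c` as described):
```shell
./main -ngl 32 -m medalpaca-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```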
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/medalpaca-13B-GGUF", model_file="medalpaca-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
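The same GGUF file can also be loaded with llama-cpp-python; the following is a minimal sketch (the `n_gpu_layers` and `n_ctx` values and the example instruction are illustrative):
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 if no GPU acceleration is available.
llm = Llama(model_path="medalpaca-13b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are common symptoms of anemia?\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```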
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: medalpaca's Medalpaca 13B
# MedAlpaca 13b
## Table of Contents
- [Model Description](#model-description)
  - [Architecture](#architecture)
  - [Training Data](#training-data)
- [Model Usage](#model-usage)
- [Limitations](#limitations)
## Model Description
### Architecture
`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings and used ChatGPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development, and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated question from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
### Inference
You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
```python
from transformers import pipeline
pl = pipeline("text-generation", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = pl(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```
## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
<!-- original-model-card end -->
|
daebum/LoRA-Submit-Test | daebum | "2024-01-28T08:50:32Z" | 1,116 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-28T06:37:03Z" | ---
license: apache-2.0
---
### Base model
beomi/OPEN-SOLAR-KO-10.7B
### Training Corpus
- Only the Training segment of the data was used.
- The Validation and Test segments were deliberately excluded.
- Detailed information about the training datasets is available below:
- korquad
- AI Hub: corpus/AI_HUB
  - Hugging Face dataset: open-koern-instructions
|
internlm/internlm2-chat-1_8b-sft | internlm | "2024-07-02T12:26:44Z" | 1,116 | 10 | transformers | [
"transformers",
"safetensors",
"internlm2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2403.17297",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-30T13:53:10Z" | ---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
## Introduction
InternLM2-1.8B is the 1.8-billion-parameter version of the second-generation InternLM series. To facilitate use and research, InternLM2-1.8B is released as three open-source models:
- InternLM2-1.8B: Foundation models with high quality and high adaptation flexibility, which serve as a good starting point for downstream deep adaptations.
- InternLM2-Chat-1.8B-SFT: Chat model after supervised fine-tuning (SFT) on InternLM2-1.8B.
- InternLM2-Chat-1.8B: Further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, which is recommended for downstream applications.
The InternLM2 has the following technical features:
- Effective support for ultra-long contexts of up to 200,000 characters: The model nearly perfectly achieves "finding a needle in a haystack" in long inputs of 200,000 characters. It also leads among open-source models in performance on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: Compared to the previous generation model, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding.
## InternLM2-1.8B
### Performance Evaluation
We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass). Some of the evaluation results are shown in the table below. You are welcome to visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results.
| Dataset\Models | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |
- The evaluation results were obtained with [OpenCompass](https://github.com/open-compass/opencompass), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- Evaluation results may differ numerically across versions of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results from [OpenCompass](https://github.com/open-compass/opencompass).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM2 1.8B Chat SFT model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-1_8b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-1_8b-sft --model-name internlm2-chat-1_8b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch OpenAI compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b-sft --served-model-name internlm2-chat-1_8b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Introduction
InternLM2-1.8B (书生·浦语-1.8B) is the 1.8-billion-parameter version of the second-generation InternLM series. To facilitate use and research, InternLM2-1.8B is released as three open-source models:
- InternLM2-1.8B: a foundation model with high quality and high adaptation flexibility, which serves as a good starting point for downstream deep adaptation.
- InternLM2-Chat-1.8B-SFT: a chat model obtained by supervised fine-tuning (SFT) on top of InternLM2-1.8B.
- InternLM2-Chat-1.8B: further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B shows better instruction following, chat experience, and function calling, and is recommended for downstream applications.
The InternLM2 models have the following technical features:
- Effective support for ultra-long contexts of up to 200,000 characters: the model achieves nearly perfect "needle in a haystack" retrieval over 200,000-character inputs, and its performance on long-text tasks such as LongBench and L-Eval is among the best of open-source models.
- Comprehensive performance improvements: compared with the previous generation, capabilities improve across the board, with significant gains in reasoning, mathematics, and coding.
## InternLM2-1.8B
### Performance Evaluation
We evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). Some of the evaluation results are shown in the table below; you are welcome to visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more results.
| 评测集 | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |
- The above evaluation results were obtained with [OpenCompass](https://github.com/open-compass/opencompass) (entries marked with `*` are taken from the original papers); detailed test settings can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- Evaluation results may differ numerically across versions of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results from [OpenCompass](https://github.com/open-compass/opencompass).
**Limitations:** Although we paid close attention to model safety during training and tried to ensure that the model outputs text complying with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm; for example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM2 1.8B Chat SFT model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load the model in float16; otherwise transformers loads it as float32, which may cause an out-of-memory error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
# 你好!有什么我可以帮助你的吗?
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history)
print(response)
```
The responses can be streamed using the `stream_chat` interface:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-1_8b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy, jointly developed by the MMDeploy and MMRazor teams, is a full-stack solution for lightweight compression, deployment, and serving of LLMs.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-1_8b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch an OpenAI-compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code is licensed under Apache-2.0. The model weights are fully open for academic research, and free commercial use can be applied for via the [application form](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mradermacher/OpenDolphinMaid-4x7b-i1-GGUF | mradermacher | "2024-05-06T06:17:17Z" | 1,116 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/OpenDolphinMaid-4x7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-10T10:42:14Z" | ---
base_model: Undi95/OpenDolphinMaid-4x7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/OpenDolphinMaid-4x7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
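For multi-part quants, the parts are simply concatenated into a single GGUF file before loading; for example (the part names below are placeholders, so use the actual file names from the repository):
```shell
# Join split parts into one GGUF file (names are illustrative)
cat OpenDolphinMaid-4x7b.i1-Q6_K.gguf.part1of2 \
    OpenDolphinMaid-4x7b.i1-Q6_K.gguf.part2of2 \
    > OpenDolphinMaid-4x7b.i1-Q6_K.gguf
```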
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ1_S.gguf) | i1-IQ1_S | 5.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ2_M.gguf) | i1-IQ2_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q2_K.gguf) | i1-Q2_K | 9.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ3_S.gguf) | i1-IQ3_S | 10.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-IQ3_M.gguf) | i1-IQ3_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenDolphinMaid-4x7b-i1-GGUF/resolve/main/OpenDolphinMaid-4x7b.i1-Q6_K.gguf) | i1-Q6_K | 20.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chlee10/T3Q-ko-solar-sft-v2.0 | chlee10 | "2024-03-18T09:13:34Z" | 1,116 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"T3Q-ko-solar-sft-v2.0",
"nlpai-lab/kullm-v2",
"conversational",
"en",
"dataset:nlpai-lab/kullm-v2",
"base_model:davidkim205/nox-solar-10.7b-v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-18T08:56:52Z" | ---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- T3Q-ko-solar-sft-v2.0
- nlpai-lab/kullm-v2
base_model: davidkim205/nox-solar-10.7b-v4
datasets:
- nlpai-lab/kullm-v2
model-index:
- name: T3Q-ko-solar-sft-v2.0
results: []
---
Update @ 2024.03.18
## T3Q-ko-solar-sft-v2.0
This model is an SFT fine-tuned version of davidkim205/nox-solar-10.7b-v4.
**Model Developers** Chihoon Lee (chlee10), T3Q
## Training hyperparameters
The following hyperparameters were used during training:
```python
# Hyperparameters for the dataset and number of training epochs
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch
# Hyperparameters for the training method
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06 # warmup_steps = 100
learning_rate = 4e-4
optimizer = 'adamw_torch'
weight_decay = 0.01
max_grad_norm = 1.0
# LoRA config(QLoRA)
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["gate_proj", "down_proj", "up_proj"]
# Options for the input values produced by the tokenizer
train_on_inputs = False
add_eos_token = False
# NEFTune params
noise_alpha: int = 5
```
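The training script itself is not published here; as an illustration only, the values above would map onto `peft`/`transformers` configuration objects roughly as follows (the output path is a placeholder):
```python
from peft import LoraConfig
from transformers import TrainingArguments

# QLoRA adapter configuration mirroring the values listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)

# Optimisation settings mirroring the values listed above.
training_args = TrainingArguments(
    output_dir="T3Q-ko-solar-sft-v2.0",    # placeholder output directory
    per_device_train_batch_size=1,          # micro_batch
    gradient_accumulation_steps=16,         # batch_size // micro_batch
    num_train_epochs=1,
    learning_rate=4e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    optim="adamw_torch",
    weight_decay=0.01,
    max_grad_norm=1.0,
)
```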
## Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.0
- Tokenizers 0.14.1 |
duyntnet/Phi-3-medium-4k-instruct-imatrix-GGUF | duyntnet | "2024-05-22T03:53:24Z" | 1,116 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Phi-3-medium-4k-instruct",
"phi3",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-21T23:15:03Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Phi-3-medium-4k-instruct
- phi3
---
Quantizations of https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
**Note**: All quants were created after this [commit](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0), so they should work correctly.
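The GGUF files can be run directly with `llama.cpp`; for example (a hedged sketch: the quant filename below is illustrative and may not match the exact names in this repository, and `-ngl` should be adjusted to your hardware):
```shell
./main -m Phi-3-medium-4k-instruct-Q4_K_M.gguf -ngl 40 -c 4096 --temp 0.0 \
  -p "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>"
```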
# From original readme
## How to Use
Phi-3-Medium-4K-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3-Medium-4K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Tokenizer
Phi-3-Medium-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3-Medium-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_id = "microsoft/Phi-3-medium-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* |
Fictiverse/Voxel_XL_Lora | Fictiverse | "2023-08-08T19:39:26Z" | 1,115 | 16 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] | text-to-image | "2023-07-29T15:28:30Z" | ---
license: openrail
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: voxel style
widget:
- text: voxel style
---
Voxel XL LoRA for the [SDXL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) Stable Diffusion model.
Trigger words are: voxel style
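A minimal `diffusers` sketch for applying this LoRA on top of the SDXL base model (untested here; you may need to pass `weight_name=` to `load_lora_weights` if the repository stores the LoRA under a non-default filename):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base pipeline and attach the Voxel LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Fictiverse/Voxel_XL_Lora")

# The trigger phrase "voxel style" activates the style.
image = pipe("voxel style, a medieval castle on a hill", num_inference_steps=30).images[0]
image.save("voxel_castle.png")
```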




 |
d0rj/e5-large-en-ru | d0rj | "2023-09-21T13:05:05Z" | 1,115 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"mteb",
"retrieval",
"retriever",
"pruned",
"e5",
"sentence-transformers",
"sentence-similarity",
"en",
"ru",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-09-18T14:44:07Z" | ---
license: mit
language:
- en
- ru
metrics:
- accuracy
- f1
- recall
library_name: transformers
pipeline_tag: sentence-similarity
tags:
- mteb
- retrieval
- retriever
- pruned
- e5
- sentence-transformers
- feature-extraction
- sentence-similarity
model-index:
- name: e5-large-en-ru
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.5671641791045
- type: ap
value: 44.011060753169424
- type: f1
value: 73.76504135120175
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.69669466706412
- type: mrr
value: 70.61370531592138
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.36465960226795
- type: cos_sim_spearman
value: 84.57602350761223
- type: euclidean_pearson
value: 84.31391364490506
- type: euclidean_spearman
value: 84.57602350761223
- type: manhattan_pearson
value: 84.15796224236456
- type: manhattan_spearman
value: 84.3645729064343
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.105698873583098
- type: mrr
value: 32.163780846856206
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.75973907678062
- type: cos_sim_spearman
value: 80.54994608351296
- type: euclidean_pearson
value: 80.58496551316748
- type: euclidean_spearman
value: 80.54993996457814
- type: manhattan_pearson
value: 80.49280884070782
- type: manhattan_spearman
value: 80.41230093993471
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.345503928209
- type: cos_sim_spearman
value: 80.4634619001261
- type: euclidean_pearson
value: 84.2666575030677
- type: euclidean_spearman
value: 80.46347579495351
- type: manhattan_pearson
value: 84.14370038922885
- type: manhattan_spearman
value: 80.36565043629274
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.14644787456163
- type: cos_sim_spearman
value: 75.88443166051762
- type: euclidean_pearson
value: 76.19117255044588
- type: euclidean_spearman
value: 75.88443166051762
- type: manhattan_pearson
value: 76.00450128624708
- type: manhattan_spearman
value: 75.69943934692938
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.60763524019471
- type: cos_sim_spearman
value: 77.2591077818027
- type: euclidean_pearson
value: 77.14021401348042
- type: euclidean_spearman
value: 77.25911027186999
- type: manhattan_pearson
value: 76.87139081109731
- type: manhattan_spearman
value: 76.98379627773018
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.18321035966198
- type: cos_sim_spearman
value: 89.0469892725742
- type: euclidean_pearson
value: 88.05085809092137
- type: euclidean_spearman
value: 89.04698194601134
- type: manhattan_pearson
value: 88.03620967628684
- type: manhattan_spearman
value: 89.02859425307943
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.39166503459249
- type: cos_sim_spearman
value: 83.71826060604693
- type: euclidean_pearson
value: 82.70145770530107
- type: euclidean_spearman
value: 83.71826045549452
- type: manhattan_pearson
value: 82.56870669205291
- type: manhattan_spearman
value: 83.55353737670136
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.58290721169323
- type: cos_sim_spearman
value: 89.25956993522081
- type: euclidean_pearson
value: 89.4716703635447
- type: euclidean_spearman
value: 89.25956993522081
- type: manhattan_pearson
value: 89.4475864648432
- type: manhattan_spearman
value: 89.14694174575615
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.4879065181404
- type: mrr
value: 94.81295937178291
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.73960396039604
- type: cos_sim_ap
value: 92.70840767967965
- type: cos_sim_f1
value: 86.90890990542557
- type: cos_sim_precision
value: 86.5213082259663
- type: cos_sim_recall
value: 87.3
- type: dot_accuracy
value: 99.73960396039604
- type: dot_ap
value: 92.70828452993575
- type: dot_f1
value: 86.90890990542557
- type: dot_precision
value: 86.5213082259663
- type: dot_recall
value: 87.3
- type: euclidean_accuracy
value: 99.73960396039604
- type: euclidean_ap
value: 92.7084093403562
- type: euclidean_f1
value: 86.90890990542557
- type: euclidean_precision
value: 86.5213082259663
- type: euclidean_recall
value: 87.3
- type: manhattan_accuracy
value: 99.74059405940594
- type: manhattan_ap
value: 92.7406819850299
- type: manhattan_f1
value: 87.01234567901234
- type: manhattan_precision
value: 85.95121951219512
- type: manhattan_recall
value: 88.1
- type: max_accuracy
value: 99.74059405940594
- type: max_ap
value: 92.7406819850299
- type: max_f1
value: 87.01234567901234
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.566931484512196
- type: mrr
value: 49.23111100500807
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.27287357692079
- type: cos_sim_ap
value: 74.20855854505362
- type: cos_sim_f1
value: 69.09903201787044
- type: cos_sim_precision
value: 65.22961574507966
- type: cos_sim_recall
value: 73.45646437994723
- type: dot_accuracy
value: 86.27287357692079
- type: dot_ap
value: 74.20853189774614
- type: dot_f1
value: 69.09903201787044
- type: dot_precision
value: 65.22961574507966
- type: dot_recall
value: 73.45646437994723
- type: euclidean_accuracy
value: 86.27287357692079
- type: euclidean_ap
value: 74.20857455896677
- type: euclidean_f1
value: 69.09903201787044
- type: euclidean_precision
value: 65.22961574507966
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.2192287059665
- type: manhattan_ap
value: 74.0513280969461
- type: manhattan_f1
value: 69.13344473621389
- type: manhattan_precision
value: 63.12118570183086
- type: manhattan_recall
value: 76.41160949868075
- type: max_accuracy
value: 86.27287357692079
- type: max_ap
value: 74.20857455896677
- type: max_f1
value: 69.13344473621389
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.16055419722902
- type: cos_sim_ap
value: 86.03614264194854
- type: cos_sim_f1
value: 78.89855695205357
- type: cos_sim_precision
value: 73.74656938215409
- type: cos_sim_recall
value: 84.82445334154605
- type: dot_accuracy
value: 89.16055419722902
- type: dot_ap
value: 86.03614225282097
- type: dot_f1
value: 78.89855695205357
- type: dot_precision
value: 73.74656938215409
- type: dot_recall
value: 84.82445334154605
- type: euclidean_accuracy
value: 89.16055419722902
- type: euclidean_ap
value: 86.0361548355667
- type: euclidean_f1
value: 78.89855695205357
- type: euclidean_precision
value: 73.74656938215409
- type: euclidean_recall
value: 84.82445334154605
- type: manhattan_accuracy
value: 89.11786393448985
- type: manhattan_ap
value: 86.00799361972808
- type: manhattan_f1
value: 78.84721152788472
- type: manhattan_precision
value: 75.26776338816941
- type: manhattan_recall
value: 82.78410840776101
- type: max_accuracy
value: 89.16055419722902
- type: max_ap
value: 86.0361548355667
- type: max_f1
value: 78.89855695205357
---
# E5-large-en-ru
## Model info
This is a vocabulary-pruned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large).
It uses only Russian and English tokens.
### Size
| | intfloat/multilingual-e5-large | d0rj/e5-large-en-ru |
| --- | --- | --- |
| Model size (MB) | 2135.82 | 1394.8 |
| Params (count) | 559,890,946 | 365,638,146 |
| Word embedding params (count) | 256,002,048 | 61,749,248 |
### Performance
Performance on the SberQuAD dev benchmark is on par with the original model.
| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-large | d0rj/e5-large-en-ru |
| --- | --- | --- |
| recall@3 | 0.787239204269772 | **0.7882096069868996** |
| map@3 | 0.7230713245997101 | **0.723192624939351** |
| mrr@3 | 0.7241630276564784 | **0.7243651948892132** |
| recall@5 | 0.8277535177098496 | **0.8284813197476953** |
| map@5 | 0.7301603186155587 | **0.7302573588872716** |
| mrr@5 | 0.7334667637069385 | **0.7335718906679607** |
| recall@10 | **0.8716642406598738** | 0.871421639980592 |
| map@10 | **0.7314774917730316** | 0.7313000338687417 |
| mrr@10 | **0.7392223685527911** | 0.7391814537556898 |
## Usage
- Use **dot product** distance for retrieval.
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
### transformers
#### Direct usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel
def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
'query: How does a corporate website differ from a business card website?',
'query: Где был создан первый троллейбус?',
'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]
tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-large-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-large-en-ru', use_cache=False)
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
#### Pipeline
```python
from transformers import pipeline
pipe = pipeline('feature-extraction', model='d0rj/e5-large-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 1024])
```
### sentence-transformers
```python
from sentence_transformers import SentenceTransformer
sentences = [
'query: Что такое круглые тензоры?',
'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]
model = SentenceTransformer('d0rj/e5-large-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 1024])
``` |
StatPan/SinGung7B-DPO-v0.1-12600c | StatPan | "2023-12-29T02:10:42Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-29T01:55:05Z" | Entry not found |
jeonsworld/CarbonVillain-10.7B-v3 | jeonsworld | "2024-01-02T11:08:24Z" | 1,115 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T06:56:10Z" | ---
license: cc-by-nc-sa-4.0
language:
- ko
---
# CarbonVillain
**This model was created without any training, in order to oppose indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- jeonsworld/CarbonVillain-en-10.7B-v1
- megastudy/M-SOLAR-10.7B-v1.1-beta
- method: slerp |
GAI-LLM/KoSOLAR-10.7B-dpo-v1 | GAI-LLM | "2024-01-09T05:51:46Z" | 1,115 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-09T04:50:41Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/KoSOLAR-10.7B-dpo-v1**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/KoSOLAR-10.7B-dpo-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [GAI-LLM/KoSOLAR-10.7B-mixed-v13](https://huggingface.co/GAI-LLM/KoSOLAR-10.7B-mixed-v13)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy with DPO.
- We used 8 x A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/KoSOLAR-10.7B-dpo-v1
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/KoSOLAR-10.7B-dpo-v1"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
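# --- Added generation example (not from the original card); the prompt below is an
# --- arbitrary placeholder used only to illustrate inference with this model.
prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))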
``` |
julleong/illuni-llama-2-ko-7b-test | julleong | "2024-03-07T08:19:25Z" | 1,115 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"instruct",
"instruction",
"llama-2",
"question-answering",
"ko",
"base_model:beomi/llama-2-ko-7b",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | "2024-02-29T03:22:31Z" | ---
language:
- ko
base_model: beomi/llama-2-ko-7b
license: mit
pipeline_tag: question-answering
tags:
- instruct
- instruction
- llama-2
---
# llama2-7b
### Model Details
- Developed by: Julleong
- Backbone Model: beomi/llama-2-ko-7b
- Library: [transformers](https://github.com/huggingface/transformers)
### Used Datasets
- Named Entity Dictionary 2022 (Modu Corpus)
### Prompt Template
```
<usr>
{Instruction}
<bot>
{Answer}
```
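Below is a minimal inference sketch (not part of the original card) showing one way to apply the prompt template above; the example question is an arbitrary placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "julleong/illuni-llama-2-ko-7b-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the <usr>/<bot> template with an instruction and let the model complete the answer.
prompt = "<usr>\n서울의 대표적인 관광지를 알려줘.\n<bot>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```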
### License
- MIT |
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v5.0 | ITT-AF | "2024-03-05T02:03:49Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-05T01:18:30Z" | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v5.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
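As a rough illustration (not from the original card), the hyperparameters above could be expressed as 🤗 `TrainingArguments`; the output directory name is a placeholder.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="itt-42dot_llm-plm-1.3b-v5.0",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 24 * 4 = 96 total train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,  # native AMP mixed-precision training
)
```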
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
hkss/hk-SOLAR-10.7B-v1.2 | hkss | "2024-03-20T08:02:38Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-20T04:56:44Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
### Model Description
Fine-tuned based on [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0).
### Training Method
Using DeepSpeed, Accelerate, TRL, etc.
### Datasets
TBA |
gemmathon/test-last | gemmathon | "2024-03-24T08:32:25Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-21T06:37:45Z" | ---
license: other
library_name: transformers
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
URP/urllm-ko-7b | URP | "2024-04-11T04:20:23Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-2",
"llama-2-ko",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T04:34:50Z" | ---
license: cc-by-nc-4.0
language:
- ko
pipeline_tag: text-generation
tags:
- meta
- llama-2
- llama-2-ko
---
## Model Details
**Model Architecture:**
urLLM-KO-7B is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2-7b.
**Training Corpus**
The model was trained using selected datasets from Modu Corpus and Korean Wikipedia (approximately 28GB).
**Vocab Expansion**
The expanded vocab size is 51385.
**Model Card Contact**
For errors or additional questions about details in this model card, contact [email protected] . |
chahyunmook/42dot_law | chahyunmook | "2024-04-03T11:15:28Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T05:14:39Z" | ---
license: cc-by-4.0
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deepnoid/DND-v0.0-e1 | Deepnoid | "2024-03-26T10:16:39Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-26T02:04:04Z" | ---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
ihopper/ko-gemma-7b-sft-dpo-v1.0 | ihopper | "2024-04-04T06:26:48Z" | 1,115 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"arxiv:1910.09700",
"base_model:google/gemma-7b-it",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-04T02:28:57Z" | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
base_model: google/gemma-7b-it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cockroach54/opensolar-qlora-dpo | cockroach54 | "2024-04-08T12:12:36Z" | 1,115 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"en",
"dataset:davidkim205/kollm-converations",
"dataset:davidkim205/kollm-comparision",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-08T11:52:54Z" | ---
license: apache-2.0
language:
- ko
- en
library_name: transformers
datasets:
- davidkim205/kollm-converations
- davidkim205/kollm-comparision
---
Just for practice.
- trained with transformers, peft, and bitsandbytes
- evaluated with https://github.com/davidkim205/kollm_evaluation

Base model:
- beomi/OPEN-SOLAR-KO-10.7B

| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|----------------|------:|------|-----:|--------|-----:|---|------|
|kobest_wic | 1|none | 0|acc |0.4873|± |0.0141|
| | |none | 0|f1 |0.3276|± |N/A |
|kobest_sentineg | 1|none | 0|acc |0.6499|± |0.0240|
| | |none | 0|f1 |0.6134|± |N/A |
|kobest_hellaswag| 1|none | 0|acc |0.4980|± |0.0224|
| | |none | 0|f1 |0.4952|± |N/A |
| | |none | 0|acc_norm|0.6280|± |0.0216|
|kobest_copa | 1|none | 0|acc |0.8030|± |0.0126|
| | |none | 0|f1 |0.8026|± |N/A |
|kobest_boolq | 1|none | 0|acc |0.8711|± |0.0089|
| | |none | 0|f1 |0.8710|± |N/A |
|ko_truthfulqa | 2|none | 0|acc |0.2717|± |0.0156|
|ko_hellaswag | 1|none | 0|acc |0.4109|± |0.0049|
| | |none | 0|acc_norm|0.5503|± |0.0050|
|ko_common_gen | 1|none | 0|acc |0.9041|± |0.0075|
| | |none | 0|acc_norm|0.9041|± |0.0075|
|ko_arc_easy | 1|none | 0|acc |0.3490|± |0.0139|
| | |none | 0|acc_norm|0.4326|± |0.0145| |
Vezora/Mistral-22B-v0.1 | Vezora | "2024-04-12T05:03:44Z" | 1,115 | 150 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-11T08:54:51Z" | ---
license: apache-2.0
---
<img src="https://huggingface.co/Vezora/Mistral-22B-v0.1/resolve/main/unsloth.png" width="100" height="150" />
### Mistral-22b-V.01 Release Announcement 🚀
## This model is not an MoE; it is in fact a 22B parameter dense model!
**Date**: April 11
**Creator** [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
### Overview
Just one day after the release of **Mixtral-8x-22b**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-V.01**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22B model. It is not a single trained expert; rather, it is a compressed MoE model, turned into a dense 22B model. This is the first working MoE-to-dense model conversion.
### Capabilities
- **Math Proficiency**: The model exhibits strong mathematical abilities, despite not being trained on math.
### Experimental Nature
Please note that Mistral-22b-V.01 is an experimental model. It has been fine-tuned with fewer examples compared to the model set for release tomorrow. We encourage you to explore its capabilities and provide feedback. This first model was trained on 500 regular human-written Q/A pairs and 500 tested Python examples; this was done since it was only a test, and it trained in less than an hour, so be warned that this is an experimental model. Don't expect groundbreaking results; expect Llama 1 for now. The next V2, fingers crossed, will be significantly better.
### Upcoming Release: V.2
Stay tuned for the release of **V.2** tomorrow, which will feature enhancements in:
- Multi-turn conversations
- Multiturn coding
- JSON mode
- Agent abilities
### Background
The decision to release this experimental version was prompted by someone attempting to replicate my experiment based on my tweets. We wanted to ensure our community has access to the official version first.
### Stay Updated
Keep an eye out for **V.2**; it's going to be a game-changer! It is currently training and will be done in the next ~24 hours. 🌟Paper Coming Soon🌟
## Thank you!
- Thank you to [Daniel Han](https://twitter.com/danielhanchen), for Unsloth AI, which was used to train this model. This led to a 2-3x speed increase and a 2-3x decrease in memory consumption.
- Thank you to [Charles Goddard](https://twitter.com/chargoddard), for providing me with a script that was necessary to make this model.
- Thank you to Mistral, for releasing another wonderful open source model under Apache 2.0.
- Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
- Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention
- Thank you to Microsoft, for the Lora paper, and the Slice-GPT paper.
- Thank you to the Hugging Face team, for everything.❤️ We really do appreciate you guys and all your hard work and commitment to the open source community!❤️
## I will answer more questions about this model tomorrow; I have not slept since Mixtral 8x22B dropped. The base model + V.2 LoRA checkpoint will be available tomorrow. ##
## Future plans: potentially continued pretraining, if 1. more questions don't help, or 2. my second idea, to only use the knowledge from 2 experts as opposed to all 8, since compressing all 8 might lead to the Dory effect (yes, from Finding Nemo, and yes, I'm taking credit for coining this phrase). |
karakuri-ai/karakuri-lm-8x7b-chat-v0.1 | karakuri-ai | "2024-05-20T10:17:11Z" | 1,115 | 18 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"steerlm",
"conversational",
"en",
"ja",
"dataset:OpenAssistant/oasst2",
"dataset:nvidia/HelpSteer",
"base_model:tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-25T04:18:17Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- OpenAssistant/oasst2
- nvidia/HelpSteer
language:
- en
- ja
tags:
- mixtral
- steerlm
base_model: tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1
model-index:
- name: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
name: score
value: 7.39375
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench-jp
type: unknown
metrics:
- type: unknown
name: score
value: 7.540625
source:
url: https://api.wandb.ai/links/wandb-japan/6ff86bp3
---
# KARAKURI LM 8x7B Chat v0.1

## Model Details
### Model Description
- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Mixture of Experts (MoE)
- **Languages**: Primarily English and Japanese
- **License:** Apache 2.0
- **Finetuned from model:** [tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1)
- **Contact**: For questions and comments about the model, please email `[email protected]`
- **Demo**: https://lm.karakuri.cc/
## Usage
### Warning
The prompt format has been changed from [KARAKURI LM 70B Chat v0.1](https://huggingface.co/karakuri-ai/karakuri-lm-70b-chat-v0.1).
Please make sure to follow the correct format.
Otherwise, the model will generate sub-optimal outputs.
### Prompt Format
We use the following prompt template of multi-turn conversation in the Mistral format, which includes an encoded string of multiple attribute values.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("karakuri-ai/karakuri-lm-8x7b-chat-v0.1")
messages = [
{"role": "system", "content": "System prompt"},
{"role": "user", "content": "User prompt"},
{"role": "assistant", "content": "Model response"},
{"role": "user", "content": "User prompt"},
]
tokenizer.apply_chat_template(messages, tokenize=False)
# <s>[INST] <<SYS>>
# System prompt
# <</SYS>>
#
# User prompt [ATTR] helpfulness: 4 correctness: 4 coherence: 4 complexity: 4 verbosity: 4 quality: 4 toxicity: 0 humor: 0 creativity: 0 [/ATTR] [/INST]Model response</s>[INST] User prompt [ATTR] helpfulness: 4 correctness: 4 coherence: 4 complexity: 4 verbosity: 4 quality: 4 toxicity: 0 humor: 0 creativity: 0 [/ATTR] [/INST]
```
The prompt template contains nine attributes.
The first five are derived from HelpSteer, while the remaining four are derived from OASST2.
The values are represented by integers ranging from 0 to 4, with 0 being the lowest and 4 being the highest.
- helpfulness (default: 4): Overall helpfulness of the response to the prompt.
- correctness (default: 4): Inclusion of all pertinent facts without errors.
- coherence (default: 4): Consistency and clarity of expression.
- complexity (default: 4): Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).
- verbosity (default: 4): Amount of detail included in the response, relative to what is asked for in the prompt.
- quality (default: 4): Perceived goodness of response.
- toxicity (default: 0): Undesirable elements such as vulgar, harmful or potentially biased response.
- humor (default: 0): Sense of humor within response.
- creativity (default: 0): Willingness to generate non-conventional response.
If you want to change attribute values from the default values specified in the template, you can modify them to any values by adding the attribute values to the user messages:
```python
messages = [
{"role": "user", "content": "User prompt", "helpfulness": 0, "complexity": 0},
]
tokenizer.apply_chat_template(messages, tokenize=False)
# <s>[INST] User prompt [ATTR] helpfulness: 0 correctness: 4 coherence: 4 complexity: 0 verbosity: 4 quality: 4 toxicity: 0 humor: 0 creativity: 0 [/ATTR] [/INST]
```
### Run the model
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"karakuri-ai/karakuri-lm-8x7b-chat-v0.1",
torch_dtype="auto",
device_map="auto",
)
messages = [
{
"role": "user",
"content": "週末に日帰りで東京に遊びに行こうと思っています。日帰りなので、短時間で回れるおすすめの観光プランを教えてください。",
},
]
input_ids = tokenizer.apply_chat_template(
messages,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
```
## Performance
| Model | # Active Params | Alignment | MT-Bench-jp |
| :----------------------------- | :-------------: | :---------: | ----------: |
| Qwen1.5 72B Chat | 72B | DPO | 8.19 |
| **KARAKURI LM 8x7B Chat v0.1** | **13B** | **SteerLM** | **7.54** |
| Command R+ | 104B | - | 7.31 |
| Mixtral 8x7B Instruct v0.1 | 13B | DPO | 7.24 |
| Llama 3 70B Instruct | 70B | RLHF | 7.13 |
| KARAKURI LM 70B Chat v0.1 | 70B | SteerLM | 6.43 |
| Llama 2 70B Chat | 70B | RLHF | 5.23 |
## Training Details
### Training Data
- [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- Internal Japanese dataset
### Training Infrastructure
- **Hardware**: The model was trained on 8 nodes of an Amazon EC2 trn1.32xlarge instance.
- **Software**: We use code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).
## Citation
```
@misc{karakuri_lm_8x7b_chat_v01,
author = { {KARAKURI} {I}nc. },
title = { {KARAKURI} {LM} 8x7{B} {C}hat v0.1 },
year = { 2024 },
url = { https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-chat-v0.1 },
publisher = { Hugging Face },
journal = { Hugging Face repository }
}
``` |
mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF | mradermacher | "2024-05-30T04:42:46Z" | 1,115 | 3 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T22:51:31Z" | ---
base_model: xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
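For a quick start, here is a minimal sketch (not part of the original card) using llama-cpp-python; the quant filename is one of the files listed below and is assumed to have been downloaded locally first.
```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant (see the table of provided quants below).
llm = Llama(model_path="L3-SnowStorm-v1.15-4x8B-B.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a haiku about snow.", max_tokens=64)
print(out["choices"][0]["text"])
```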
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
stevhliu/my_awesome_billsum_model | stevhliu | "2022-10-11T18:23:16Z" | 1,114 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-10-11T18:04:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4290
- Rouge1: 0.176
- Rouge2: 0.0773
- Rougel: 0.1454
- Rougelsum: 0.1455
- Gen Len: 19.0
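For reference, here is a minimal inference sketch (not part of the original card); the input text is a placeholder, and the `summarize: ` prefix is an assumption based on the t5-small base model.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model")

# Placeholder input; real inputs would be bill texts from the billsum dataset.
text = "summarize: The bill establishes a grant program to fund state water infrastructure projects..."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```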
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.5195 | 0.1478 | 0.0528 | 0.1197 | 0.1194 | 19.0 |
| No log | 2.0 | 124 | 2.4660 | 0.1572 | 0.06 | 0.1288 | 0.1287 | 19.0 |
| No log | 3.0 | 186 | 2.4366 | 0.1691 | 0.0719 | 0.1394 | 0.1396 | 19.0 |
| No log | 4.0 | 248 | 2.4290 | 0.176 | 0.0773 | 0.1454 | 0.1455 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
SciPhi/SciPhi-Mistral-7B-32k | SciPhi | "2023-10-31T21:44:54Z" | 1,114 | 69 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T16:58:43Z" | ---
license: mit
---
# SciPhi-Mistral-7B-32k Model Card
The SciPhi-Mistral-7B-32k is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model underwent a fine-tuning process over four epochs using more than 1 billion tokens, which include regular instruction tuning data and synthetic textbooks. The objective of this work was to increase the model's scientific reasoning and educational abilities. For best results, follow the Alpaca prompting guidelines.
SciPhi-AI is available via a free hosted API, though the exposed model can vary. Currently, SciPhi-Self-RAG-Mistral-7B-32k is available. More details can be found in the docs [here](https://sciphi.readthedocs.io/en/latest/setup/quickstart.html).
## Model Architecture
Base Model: Mistral-7B-v0.1
**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Recommended Chat Formatting
We recommend mapping such that
```
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
goes to --->
### System:
You are a friendly chatbot who always responds in the style of a pirate
### Instruction:
How many helicopters can a human eat in one sitting?
### Response:
...
```
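As a convenience, here is a minimal sketch (not part of the original card) of a helper that performs the mapping shown above; the function name is illustrative.
```python
def to_alpaca_prompt(messages):
    """Map OpenAI-style chat messages to the Alpaca-style format shown above."""
    prompt = ""
    for message in messages:
        if message["role"] == "system":
            prompt += f"### System:\n{message['content']}\n\n"
        elif message["role"] == "user":
            prompt += f"### Instruction:\n{message['content']}\n\n"
        elif message["role"] == "assistant":
            prompt += f"### Response:\n{message['content']}\n\n"
    # End with a Response header so the model continues with its answer.
    return prompt + "### Response:\n"
```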
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## References
1. Lian, W., Goodson, B., Wang, G., Pentland, E., Cook, A., Vong, C., & Teknium. (2023). MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset. *HuggingFace repository*. [Link](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
2. Mukherjee, S., Mitra, A., Jawahar, G., Agarwal, S., Palangi, H., & Awadallah, A. (2023). Orca: Progressive Learning from Complex Explanation Traces of GPT-4. *arXiv preprint arXiv:2306.02707*.
3. Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., & Roberts, A. (2023). The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. *arXiv preprint arXiv:2301.13688*.
4. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks tested. For full details, please refer to the paper and release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Acknowledgements
Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00) and others who contributed to this work. |
SummerSigh/Pythia410m-V0-Instruct | SummerSigh | "2024-04-29T18:58:52Z" | 1,114 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T18:33:21Z" | ---
license: apache-2.0
---
# Model info
This is EleutherAI/pythia-410m finetuned on OpenAssistant/oasst_top1_2023-08-25
# Why
Plain and simple: I'm experimenting with making instruction LLMs under 1B params. I think we can still squeeze better performance out of these models.
# Random Notes
- Only using OpenAssistant data gives fantastic results because of its high quality. I like the top1 dataset because of its lack of prompt refusals.
- Prompt refusals have been shown to damage the performance of instruction LLMs. My theory is that the model "spends" parameters learning how to refuse prompts rather than learning actually useful information. Adding to this, I think that unlike other tasks, learning prompt refusals most likely has no other value in terms of transfer learning.
# Usage
```
from transformers import pipeline
pipe = pipeline("text-generation", model="SummerSigh/Pythia410m-V0-Instruct")
out= pipe("<|im_start|>user\nWhat's the meaning of life?<|im_end|>\n<|im_start|>assistant\n",max_length = 500,repetition_penalty = 1.2, temperature = 0.5, do_sample = True)
print(out[0]["generated_text"])
```
# Contact
If you want to contact me and work with me on making good under 1B param models, you can reach me on discord at summer_ai.
|
StatPan/all-you-need-is | StatPan | "2024-01-04T01:49:05Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T01:37:30Z" | Entry not found |
cocoirun/Yi-Ko-6B-instruct-v1.5 | cocoirun | "2024-01-11T01:47:43Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-10T01:59:04Z" | ---
license: cc-by-sa-4.0
---
<h1>Instruct Model v1.5</h1>
<b><Training Data Construction></b>
After analyzing the Open-Orca-ko data and extracting tasks,
we built roughly 40,000 training examples in-house (history, science, math, machine reading comprehension, review analysis)
using open-source NLP data matched to those tasks,
and in addition filtered and refined some of the Open-Orca-Ko data and added KoBEST data as well.
Additional training data was built from AIHub general-knowledge and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data form.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected manually.
English translation data (EN-KO / KO-EN) was also used as training data.
SFT was performed on a total of 110,000 training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and refined for training a new version of the model and improving its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data
<br>
### The training data files are not publicly available.
<br>
<b><Training></b>
Training was performed with LoRA on 2 x A100 40G GPUs. |
dddsaty/OPEN_SOLAR_KO_10.7B_DPO_Adapter_Attach | dddsaty | "2024-02-11T12:04:01Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:maywell/ko_Ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-25T01:40:40Z" | ---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
datasets:
- maywell/ko_Ultrafeedback_binarized
---
**Explanation**
- Starting from the base model, applied DPO to a small number of layers using the open dataset and saved just the adapter part
- Merged the base model and the tuned adapter together (see the sketch below)
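A minimal sketch of the adapter-attach step (not from the original card), assuming the adapter was trained with PEFT/LoRA; the adapter path and output directory below are placeholders.
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the DPO-tuned adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("beomi/OPEN-SOLAR-KO-10.7B")
tuned = PeftModel.from_pretrained(base, "path/to/dpo-adapter")  # placeholder adapter path

# Fold the adapter weights back into the base model and save the merged result.
merged = tuned.merge_and_unload()
merged.save_pretrained("OPEN_SOLAR_KO_10.7B_DPO_Adapter_Attach")  # placeholder output dir
```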
**Base Model**
- [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)
**Used Corpus**
- [maywell/ko_Ultrafeedback_binarized](https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized)
**Score**
|Average|Ko-ARC|Ko-HellaSwag|Ko-MMLU|Ko-TruthfulQA|Ko-CommonGen V2|
|:---:|:---:|:---:|:---:|:---:|:---:|
|52.83|50|60.55|48.8|71.51|43.65|61.16|
**Log**
- 2024.01.25: Initial version Upload
- 2024.02.10: Readme updated
- 2024.02.11: Score updated
**LICENSE**
- Apache 2.0
**Citation**
- beomi/OPEN-SOLAR-KO-10.7B
```
@misc {solar_ko_junbum_2023,
author = { {L. Junbum} },
title = { Solar-Ko-10.7b },
year = 2024,
url = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
publisher = { Hugging Face }
}
``` |
colable/llama-ko-peft | colable | "2024-01-26T14:52:50Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-26T14:34:31Z" | ---
license: mit
language:
- ko
---
# open-llama-2-ko-based model with an in-house dataset
This is a Korean model based on
* [beomi/open-llama-2-ko-7b]
GPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "colable/llama-ko-peft"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
``` |
logicker/SkkuDataScience-DPO-v2-440-ckpt | logicker | "2024-02-27T17:03:26Z" | 1,114 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chat",
"en",
"ko",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T18:34:20Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
- ko
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-7B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.
## Requirements
The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
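The card does not include a usage snippet; below is a minimal sketch with the standard `transformers` chat-template flow (the repo id follows the card title, so substitute the checkpoint you actually want to load):

```python
# Minimal usage sketch for a Qwen1.5-style chat checkpoint (requires transformers>=4.37.0).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B-Chat"  # replace with the checkpoint you are loading
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```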
DPO Tuning
- Dataset: https://huggingface.co/datasets/Intel/orca_dpo_pairs |
Byungchae/k2s3_test_0000 | Byungchae | "2024-01-30T01:47:29Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-30T01:33:57Z" | ---
license: cc-by-nc-4.0
language: ko
---
## Developed by : Byungchae Song
## Model Number: k2s3_test_0000
## Base Model :
* [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
### Training Data
* in-house dataset
### Training Method
* PEFT QLoRA (a hypothetical configuration sketch follows)
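The exact setup is not published; the sketch below only illustrates what a PEFT QLoRA configuration on the stated base model could look like (all hyperparameters and module names are assumptions):

```python
# Hypothetical PEFT QLoRA sketch: 4-bit base model plus LoRA adapters.
# Hyperparameters and target modules are illustrative, not the values used here.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```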
|
hyeogi/SOLAR-10.7B-v1.2 | hyeogi | "2024-02-01T01:46:18Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-01T01:54:29Z" | ---
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR-10.7B
license: cc-by-nc-4.0
---
# SOLAR-10.7B
### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
### Datasets
- translated [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
### Benchmark |
chahyunmook/42dot_label | chahyunmook | "2024-03-25T05:11:45Z" | 1,114 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T04:57:36Z" | ---
library_name: transformers
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gregor/mblip-mt0-xl | Gregor | "2024-05-07T11:45:49Z" | 1,113 | 12 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blip-2",
"vision",
"image-to-text",
"image-captioning",
"visual-question-answering",
"en",
"multilingual",
"dataset:Gregor/mblip-train",
"arxiv:2307.06930",
"arxiv:2301.12597",
"license:mit",
"region:us"
] | image-to-text | "2023-07-10T15:02:45Z" | ---
language:
- en
- multilingual
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
datasets:
- Gregor/mblip-train
---
# mBLIP mT0-XL
This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930).
## Model description
mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former) and a large language model (LLM).
The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint ([blip2-flan-t5-xl](https://huggingface.co/Gregor/mblip-mt0-xl)) and then re-aligned
to the multilingual LLM ([mt0-xl](https://huggingface.co/bigscience/mt0-xl)) using a [multilingual task mixture](https://huggingface.co/datasets/Gregor/mblip-train).
<img src="https://github.com/gregor-ge/mBLIP/blob/main/architecture.png"
alt="The mBLIP architecture" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
in 96 languages.
#### Languages
mBLIP was trained on the following 96 languages:
`
af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu
`
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup or
alternatively finetune it for downstream applications.
We strongly recommend LoRA applied to the LLM when finetuning and to use bf16 as data type - standard fp16 can cause NaN loss.
See [our repository](https://github.com/gregor-ge/mBLIP) for the code used to train and finetune this model.
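For orientation only (the actual training code is in the linked repository), a LoRA setup restricted to the LLM part could look roughly like this; the target module names are assumptions for the mT0 (T5-style) attention projections:

```python
# Rough sketch: LoRA on the language-model part of mBLIP, loaded in bf16.
# target_modules are assumed names of the mT0 attention projections, not verified here.
import torch
from transformers import Blip2ForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-mt0-xl", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q", "v"],  # assumed q/v projections of the T5-style LLM blocks
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```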
## Bias, Risks, Limitations, and Ethical Considerations
While mBLIP can work in theory with up to 100 languages, in practice, we expect best results when prompted in high-resource languages
like English, German, Spanish, etc.
mBLIP inherits the risk, limitations, and biases from the models used to initialize it.
mBLIP has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the BLIP-2 [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`bfloat16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", torch_dtype=torch.bfloat16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
>**Important:** Paper results only use int8 for the LLM weights while this loads all weights in int8.
> We see that this gives slightly worse results but currently int8 for some model parts is not supported by HuggingFace.
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
## Citation
If you use our model, please cite the following:
```
@article{geigle2023mblip,
author = {Gregor Geigle and
Abhay Jain and
Radu Timofte and
Goran Glava\v{s}},
title = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs},
journal = {arXiv},
volume = {abs/2307.06930},
year = {2023},
url = {https://arxiv.org/abs/2307.06930},
eprinttype = {arXiv},
eprint = {2307.06930},
}
``` |
GAI-LLM/KoSOLAR-10.7B-mixed-v13 | GAI-LLM | "2024-01-09T01:17:38Z" | 1,113 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T02:07:09Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/KoSOLAR-10.7B-mixed-v13**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/KoSOLAR-10.7B-mixed-v13 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/KoSOLAR-10.7B-mixed-v13
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/KoSOLAR-10.7B-mixed-v13"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
``` |
DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55 | DooDooHyun | "2024-01-22T06:46:41Z" | 1,113 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T06:27:06Z" | ---
license: cc-by-nc-4.0
base_model: 42dot/42dot_LLM-PLM-1.3B
tags:
- generated_from_trainer
model-index:
- name: AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
Edentns/DataVortexS-10.7B-dpo-v1.11 | Edentns | "2024-02-24T18:58:23Z" | 1,113 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-31T04:16:25Z" | ---
tags:
- text-generation
license: cc-by-nc-4.0
language:
- ko
base_model: LDCC/LDCC-SOLAR-10.7B
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.11**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **Alpaca (Chat)** format.
E.g.
```python
text = """\
### System:
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.
### User:
대한민국의 수도는 어디야?
### Assistant:
대한민국의 수도는 서울입니다.
### User:
서울 인구는 총 몇 명이야?
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | -----------: | -----------: | -----------: |
| kobest_boolq | 0.920101 | 0.928018 | 0.933025 | 0.928754 |
| kobest_copa | 0.721782 | 0.801936 | 0.817737 | 0.84093 |
| kobest_hellaswag | 0.44502 | 0.482783 | 0.483978 | 0.48978 |
| kobest_sentineg | 0.51398 | 0.931928 | 0.944556 | 0.934475 |
| **Average** | **0.650221** | **0.786166** | **0.794824** | **0.798485** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 59.56 | 55.97 | 68.68 | 52.67 | 66.74 | 53.72 |
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.11")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.11")
messages = [
{"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
{"role": "user", "content": "대한민국의 수도는 어디야?"},
{"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
{"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/), which allows others to share and adapt the model for non-commercial purposes.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
ITT-AF/ITT-Yi-Ko-6B-v4.0 | ITT-AF | "2024-02-14T09:52:33Z" | 1,113 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-14T06:28:56Z" | ---
license: cc-by-nc-4.0
---
## ITT-AF/ITT-Yi-Ko-6B-v4.0
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |
ITT-AF/ITT-42dot_LLM-SFT-1.3B-v3.0 | ITT-AF | "2024-02-19T10:36:48Z" | 1,113 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-19T08:00:42Z" | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-SFT-1.3B-v3.0
This model is a fine-tuned version of [42dot/42dot_LLM-SFT-1.3B](https://huggingface.co/42dot/42dot_LLM-SFT-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
hwkwon/S-SOLAR-10.7B-v1.0 | hwkwon | "2024-03-02T01:43:18Z" | 1,113 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-29T13:25:40Z" | ---
license: cc-by-nc-4.0
language:
- ko
---
# S-SOLAR-10.7B
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2).
### Trained Data
TBA
### Prompt Template
```
### User: User query input
### Assistant:
```
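A minimal generation sketch using this template (not part of the original card; the query is just an example):

```python
# Minimal sketch: format a query with the template above and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hwkwon/S-SOLAR-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### User: 대한민국의 수도는 어디야?\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```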
### License
This model is licensed under cc-by-nc-4.0, which allows others to share and adapt the model for non-commercial purposes.
|
hkss/hk-SOLAR-10.7B-v1.1 | hkss | "2024-03-20T08:02:06Z" | 1,113 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-20T04:26:00Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
### Model Description
Fine-tuned from [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0).
### Training Method
Trained using DeepSpeed, Accelerate, TRL, etc.
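The training script itself is not published; a hypothetical TRL SFT sketch consistent with the tools listed above (older trl-style API; dataset path and hyperparameters are placeholders):

```python
# Hypothetical SFT sketch with TRL (pre-1.0 style API); dataset file and
# hyperparameters are placeholders, not the actual training setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "yanolja/EEVE-Korean-10.8B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token is set
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes each record carries a "text" field
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="./sft-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```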
### Datasets
TBA |
Niggendar/WildCardX_XLPony | Niggendar | "2024-04-17T08:41:26Z" | 1,113 | 3 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-04-17T08:35:17Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF | leafspark | "2024-05-09T00:58:24Z" | 1,113 | 2 | transformers | [
"transformers",
"gguf",
"meta",
"llama-3",
"safetensors",
"text-generation",
"en",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-08T22:51:32Z" | ---
language:
- en
license: llama3
tags:
- meta
- llama-3
- safetensors
pipeline_tag: text-generation
library_name: transformers
---
# Llama-3-8B-Instruct-Gradient-4194k-GGUF
# Fixing prompt format issues
- ~~Use iMatrix for Llama 3 prompt format on Q4 and below, or try Q4_K_M fixed~~
- ~~Use ChatML for Q6 and below~~
- Use the Llama 3 prompt format; see Issues below
# Issues
- ~~Context length is not defined correctly in the quant, not sure if this is a llama.cpp issue~~ Use RoPE settings
- Output may end with </s> or other EOS tokens, which might be an issue with the training data
This model was converted to GGUF format from [`gradientai/Llama-3-8B-Instruct-Gradient-4194k`](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-4194k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-4194k) for more details on the model.
| Filename | Quant | Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3-8b-instruct-gradient-4194k.f16.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.f16.gguf) | f16 | 14.9GB | Lossless |
| [llama-3-8b-instruct-gradient-4194k.Q8_0.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality |
| [llama-3-8b-instruct-gradient-4194k.Q6_K.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q6_K.gguf) | Q6_K | 6.60GB | Very high quality, near perfect, *recommended*. |
| [llama-3-8b-instruct-gradient-4194k.Q5_K_M.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality |
| [llama-3-8b-instruct-gradient-4194k.Q5_K_S.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q5_K_S.gguf) | Q5_K_S | 5.60GB | Even higher quality |
| [llama-3-8b-instruct-gradient-4194k.Q4_K_M.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q4_K_M.gguf) | Q4_K_M | 4.92GB | Recommended, medium-high quality |
| [llama-3-8b-instruct-gradient-4194k.Q4_K_M.fixed.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q4_K_M.fixed.gguf) | Q4_K_M (fixed) | 4.92GB | Fixed version (requanted) |
| [llama-3-8b-instruct-gradient-4194k.Q4_K_S.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q4_K_S.gguf) | Q4_K_S | 4.69GB | Recommended, medium quality |
| [llama-3-8b-instruct-gradient-4194k.Q4_0.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q4_0.gguf) | Q4_0 | 4.66GB | Usable, better than Q3 but worse than Q4 |
| [llama-3-8b-instruct-gradient-4194k.Q3_K_L.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q3_K_L.gguf) | Q3_K_L | 4.32GB | Usable |
| [llama-3-8b-instruct-gradient-4194k.Q3_K_M.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q3_K_M.gguf) | Q3_K_M | 4.02GB | Bad quality, use Q4 |
| [llama-3-8b-instruct-gradient-4194k.Q3_K_S.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q3_K_S.gguf) | Q3_K_S | 3.66GB | Not recommended |
| [llama-3-8b-instruct-gradient-4194k.Q2_K.gguf](https://huggingface.co/leafspark/Llama-3-8B-Instruct-Gradient-4194k-GGUF/blob/main/llama-3-8b-instruct-gradient-4194k.Q2_K.gguf) | Q2_K | 2.95GB | Very low quality, would not use on 8b models |
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo leafspark/llama-3-8b-instruct-gradient-4194k.Q8_0-GGUF --model llama-3-8b-instruct-gradient-4194k.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo leafspark/llama-3-8b-instruct-gradient-4194k.Q8_0-GGUF --model llama-3-8b-instruct-gradient-4194k.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-instruct-gradient-4194k.Q8_0.gguf -n 128
``` |
RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf | RichardErkhov | "2024-05-30T06:13:26Z" | 1,113 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-05-30T03:07:33Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral7b_ocr_to_json_v1 - GGUF
- Model creator: https://huggingface.co/mychen76/
- Original model: https://huggingface.co/mychen76/mistral7b_ocr_to_json_v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral7b_ocr_to_json_v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral7b_ocr_to_json_v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral7b_ocr_to_json_v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral7b_ocr_to_json_v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral7b_ocr_to_json_v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral7b_ocr_to_json_v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral7b_ocr_to_json_v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral7b_ocr_to_json_v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral7b_ocr_to_json_v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral7b_ocr_to_json_v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral7b_ocr_to_json_v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral7b_ocr_to_json_v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral7b_ocr_to_json_v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral7b_ocr_to_json_v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral7b_ocr_to_json_v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral7b_ocr_to_json_v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral7b_ocr_to_json_v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral7b_ocr_to_json_v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral7b_ocr_to_json_v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral7b_ocr_to_json_v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral7b_ocr_to_json_v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q6_K.gguf) | Q6_K | 5.53GB |
| [mistral7b_ocr_to_json_v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/mychen76_-_mistral7b_ocr_to_json_v1-gguf/blob/main/mistral7b_ocr_to_json_v1.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: apache-2.0
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2
---
Model Architecture:
The mychen76/mistral7b_ocr_to_json_v1 model (LLM) is fine-tuned for converting OCR text output into a structured JSON object. This experimental model is based on Mistral-7B-v0.1, which outperforms Llama 2 13B on all benchmarks tested.
Motivation:
Currently, OCR engines are well tested on image detection and text recognition, and LLMs are well trained for text processing and generation. Hence, leveraging the outputs of OCR engines can save LLM training time for image-to-text use cases such as converting invoice or receipt images into JSON objects.
Model Usage:
Take an invoice or receipt image, perform OCR on the image to get text boxes, and feed the outputs into LLM models to generate a well-formed receipt JSON object.
```
### Instruction:
You are POS receipt data expert, parse, detect, recognize and convert following receipt OCR image result into structure receipt data object.
Don't make up value not in the Input. Output must be a well-formed JSON object.```json
### Input:
[[[[184.0, 42.0], [278.0, 45.0], [278.0, 62.0], [183.0, 59.0]], ('BAJA FRESH', 0.9551795721054077)], [[[242.0, 113.0], [379.0, 118.0], [378.0, 136.0], [242.0, 131.0]], ('GENERAL MANAGER:', 0.9462024569511414)], [[[240.0, 133.0], [300.0, 135.0], [300.0, 153.0], [240.0, 151.0]], ('NORMAN', 0.9913229942321777)], [[[143.0, 166.0], [234.0, 171.0], [233.0, 192.0], [142.0, 187.0]], ('176 Rosa C', 0.9229503870010376)], [[[130.0, 207.0], [206.0, 210.0], [205.0, 231.0], [129.0, 228.0]], ('Chk 7545', 0.9349349141120911)], [[[283.0, 215.0], [431.0, 221.0], [431.0, 239.0], [282.0, 233.0]], ("Dec26'0707:26PM", 0.9290117025375366)], [[[440.0, 221.0], [489.0, 221.0], [489.0, 239.0], [440.0, 239.0]], ('Gst0', 0.9164432883262634)], [[[164.0, 252.0], [308.0, 256.0], [308.0, 276.0], [164.0, 272.0]], ('TAKE OUT', 0.9367803335189819)], [[[145.0, 274.0], [256.0, 278.0], [255.0, 296.0], [144.0, 292.0]], ('1 BAJA STEAK', 0.9167789816856384)], [[[423.0, 282.0], [465.0, 282.0], [465.0, 304.0], [423.0, 304.0]], ('6.95', 0.9965073466300964)], [[[180.0, 296.0], [292.0, 299.0], [292.0, 319.0], [179.0, 316.0]], ('NO GUACAMOLE', 0.9631438255310059)], [[[179.0, 317.0], [319.0, 322.0], [318.0, 343.0], [178.0, 338.0]], ('ENCHILADO STYLE', 0.9704310894012451)], [[[423.0, 325.0], [467.0, 325.0], [467.0, 347.0], [423.0, 347.0]], ('1.49', 0.988395631313324)], [[[159.0, 339.0], [201.0, 341.0], [200.0, 360.0], [158.0, 358.0]], ('CASH', 0.9982023239135742)], [[[417.0, 348.0], [466.0, 348.0], [466.0, 367.0], [417.0, 367.0]], ('20.00', 0.9921982884407043)], [[[156.0, 380.0], [200.0, 382.0], [198.0, 404.0], [155.0, 402.0]], ('FOOD', 0.9906187057495117)], [[[426.0, 390.0], [468.0, 390.0], [468.0, 409.0], [426.0, 409.0]], ('8.44', 0.9963030219078064)], [[[154.0, 402.0], [190.0, 405.0], [188.0, 427.0], [152.0, 424.0]], ('TAX', 0.9963871836662292)], [[[427.0, 413.0], [468.0, 413.0], [468.0, 432.0], [427.0, 432.0]], ('0.61', 0.9934712648391724)], [[[153.0, 427.0], [224.0, 429.0], [224.0, 450.0], [153.0, 448.0]], ('PAYMENT', 0.9948703646659851)], [[[428.0, 436.0], [470.0, 436.0], [470.0, 455.0], [428.0, 455.0]], ('9.05', 0.9961490631103516)], [[[152.0, 450.0], [251.0, 453.0], [250.0, 475.0], [152.0, 472.0]], ('Change Due', 0.9556287527084351)], [[[420.0, 458.0], [471.0, 458.0], [471.0, 480.0], [420.0, 480.0]], ('10.95', 0.997236430644989)], [[[209.0, 498.0], [382.0, 503.0], [381.0, 524.0], [208.0, 519.0]], ('$2.000FF', 0.9757758378982544)], [[[169.0, 522.0], [422.0, 528.0], [421.0, 548.0], [169.0, 542.0]], ('NEXT PURCHASE', 0.962527871131897)], [[[167.0, 546.0], [365.0, 552.0], [365.0, 570.0], [167.0, 564.0]], ('CALL800 705 5754or', 0.926964521408081)], [[[146.0, 570.0], [416.0, 577.0], [415.0, 597.0], [146.0, 590.0]], ('Go www.mshare.net/bajafresh', 0.9759786128997803)], [[[147.0, 594.0], [356.0, 601.0], [356.0, 621.0], [146.0, 614.0]], ('Take our brief survey', 0.9390400648117065)], [[[143.0, 620.0], [410.0, 626.0], [409.0, 647.0], [143.0, 641.0]], ('When Prompted, Enter Store', 0.9385656118392944)], [[[142.0, 646.0], [408.0, 653.0], [407.0, 673.0], [142.0, 666.0]], ('Write down redemption code', 0.9536812901496887)], [[[141.0, 672.0], [409.0, 679.0], [408.0, 699.0], [141.0, 692.0]], ('Use this receipt as coupon', 0.9658807516098022)], [[[138.0, 697.0], [448.0, 701.0], [448.0, 725.0], [138.0, 721.0]], ('Discount on purchases of $5.00', 0.9624248743057251)], [[[139.0, 726.0], [466.0, 729.0], [466.0, 750.0], [139.0, 747.0]], ('or more,Offer expires in 30 day', 0.9263916611671448)], [[[137.0, 750.0], [459.0, 755.0], [459.0, 778.0], [137.0, 
773.0]], ('Good at participating locations', 0.963909924030304)]]
### Output:
```
```json
{
"receipt": {
"store": "BAJA FRESH",
"manager": "GENERAL MANAGER: NORMAN",
"address": "176 Rosa C",
"check": "Chk 7545",
"date": "Dec26'0707:26PM",
"tax": "Gst0",
"total": "20.00",
"payment": "CASH",
"change": "0.61",
"discount": "Discount on purchases of $5.00 or more,Offer expires in 30 day",
"coupon": "Use this receipt as coupon",
"survey": "Take our brief survey",
"redemption": "Write down redemption code",
"prompt": "When Prompted, Enter Store Write down redemption code Use this receipt as coupon",
"items": [
{
"name": "1 BAJA STEAK",
"price": "6.95",
"modifiers": [
"NO GUACAMOLE",
"ENCHILADO STYLE"
]
},
{
"name": "TAKE OUT",
"price": "1.49"
}
]
}
}
```
# Load model directly
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"  # used by .to(device) below

tokenizer = AutoTokenizer.from_pretrained("mychen76/mistral7b_ocr_to_json_v1")
model = AutoModelForCausalLM.from_pretrained("mychen76/mistral7b_ocr_to_json_v1").to(device)

# receipt_boxes is produced by the OCR step shown in the next section
prompt=f"""### Instruction:
You are POS receipt data expert, parse, detect, recognize and convert following receipt OCR image result into structure receipt data object.
Don't make up value not in the Input. Output must be a well-formed JSON object.```json
### Input:
{receipt_boxes}
### Output:
"""
with torch.inference_mode():
inputs = tokenizer(prompt,return_tensors="pt",truncation=True).to(device)
outputs = model.generate(**inputs, max_new_tokens=512)
result_text = tokenizer.batch_decode(outputs)[0]
print(result_text)
```
## Get OCR Image boxes
```python
from paddleocr import PaddleOCR, draw_ocr
from ast import literal_eval
import json
paddleocr = PaddleOCR(lang="en",ocr_version="PP-OCRv4",show_log = False,use_gpu=True)
def paddle_scan(paddleocr,img_path_or_nparray):
result = paddleocr.ocr(img_path_or_nparray,cls=True)
result = result[0]
boxes = [line[0] for line in result] #boundign box
txts = [line[1][0] for line in result] #raw text
scores = [line[1][1] for line in result] # scores
return txts, result
# perform ocr scan
receipt_texts, receipt_boxes = paddle_scan(paddleocr,receipt_image_array)
print(50*"--","\ntext only:\n",receipt_texts)
print(50*"--","\nocr boxes:\n",receipt_boxes)
```
# Load model in 4bits
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, BitsAndBytesConfig
# quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
bnb_config = BitsAndBytesConfig(
llm_int8_enable_fp32_cpu_offload=True,
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
# control model memory allocation between devices for low GPU resource (0,cpu)
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": 0,
"transformer.h": 0,
"transformer.ln_f": 0,
"model.embed_tokens": 0,
"model.layers":0,
"model.norm":0
}
device = "cuda" if torch.cuda.is_available() else "cpu"
# model use for inference
model_id="mychen76/mistral7b_ocr_to_json_v1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
torch_dtype=torch.float16,
quantization_config=bnb_config,
device_map=device_map)
# tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```
Dataset use for finetuning:
mychen76/invoices-and-receipts_ocr_v1
|
neopolita/qwen2-1.5b-instruct-gguf | neopolita | "2024-06-06T23:31:47Z" | 1,113 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T23:22:06Z" | ---
{}
---
# GGUF quants for [**Qwen/Qwen2-1.5B-Instruct**](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
DopeorNope/SOLAR_D-v2-10.7B | DopeorNope | "2023-12-31T13:09:51Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-31T12:41:56Z" | Entry not found |
HwiyeolJo/TeamJaeCorpo | HwiyeolJo | "2024-01-06T01:02:16Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-06T00:55:46Z" | Entry not found |
etri-xainlp/llama2-13b-sft-dpo | etri-xainlp | "2024-02-23T06:12:34Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-30T00:55:28Z" | ---
license: apache-2.0
---
# etri-xainlp/llama2-13b-sft-dpo
## Model Details
**Model Developers** ETRI xainlp team
**Input** text only.
**Output** text only.
**Model Architecture**
**Base Model** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
**Training Dataset**
- fully sft: 650k instruction-following set
- dpo+lora: 90k user preference set (a rough sketch of this stage is shown below)
- We used 8 × A100 80GB GPUs for training.
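A hypothetical sketch of the dpo+lora stage with TRL (older trl-style API; the preference file and hyperparameters are placeholders, not the actual 90k set):

```python
# Hypothetical DPO+LoRA sketch with TRL (pre-1.0 style API where beta is a constructor arg).
# The dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# expects columns: "prompt", "chosen", "rejected"
prefs = load_dataset("json", data_files="preferences.jsonl", split="train")

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, the frozen base weights act as the reference
    beta=0.1,
    train_dataset=prefs,
    tokenizer=tokenizer,
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="./dpo-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        bf16=True,
        remove_unused_columns=False,
    ),
)
trainer.train()
```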
|
colable/llama2-ko-DPO | colable | "2024-02-05T11:07:53Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-05T10:44:22Z" | ---
license: mit
language:
- ko
---
# open-llama-2-ko-based model with a modified DPO dataset
This is a Korean model based on
* [beomi/open-llama-2-ko-7b]
The dataset is modified from
* [SJ-Donald/orca-dpo-pairs-ko]
Parameters
```
learning_rate: float = 3e-4
lr_scheduler: str = "cosine"
warmup_ratio: float = 0.1
lora_r: int = 16
lora_alpha: int = 16
lora_dropout: float = 0.05
optim='paged_adamw_32bit'
bf16=True
``` |
etri-xainlp/llama2-13b-lima-sft-dpo | etri-xainlp | "2024-02-26T02:31:54Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-26T02:08:24Z" | ---
license: apache-2.0
---
# etri-xainlp/llama2-13b-lima-sft-dpo
## Model Details
**Model Developers** ETRI xainlp team
**Input** text only.
**Output** text only.
**Model Architecture**
**Base Model** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
**Training Dataset**
- fully sft: 650k instruction-following set
- lima sft: 280k instruction-following set
- dpo+lora: 90k user preference set
- We used 7 × A100 80GB GPUs for training.
|
ENERGY-DRINK-LOVE/leaderboard_inst_v1.5_LDCC-SOLAR-10.7B_SFT | ENERGY-DRINK-LOVE | "2024-03-04T06:43:12Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-04T06:14:34Z" | ---
license: cc-by-nc-4.0
base_model: LDCC/LDCC-SOLAR-10.7B
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: leaderboard_inst_v1_LDCC-SOLAR-10.7B_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leaderboard_inst_v1_LDCC-SOLAR-10.7B_SFT
This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the generator dataset.
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-v1.51 | DooDooHyun | "2024-03-06T07:43:13Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T06:39:38Z" | ---
license: cc-by-nc-4.0
base_model: 42dot/42dot_LLM-PLM-1.3B
tags:
- generated_from_trainer
model-index:
- name: AIFT-42dot_LLM-PLM-1.3B-v1.51
results: []
---
# AIFT-42dot_LLM-PLM-1.3B-v1.51
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
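For reference, a sketch of how these values map onto `transformers.TrainingArguments` (an approximation; the original training script is not provided):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="./AIFT-42dot_LLM-PLM-1.3B-v1.51",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size of 64
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # mixed precision training (native AMP)
)
```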
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_deup_LDCC-SOLAR-10.7B_SFT | ENERGY-DRINK-LOVE | "2024-03-07T05:29:24Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-07T03:44:34Z" | ---
license: cc-by-nc-4.0
base_model: LDCC/LDCC-SOLAR-10.7B
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: leaderboard_inst_v1_LDCC-SOLAR-10.7B_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leaderboard_inst_v1_LDCC-SOLAR-10.7B_SFT
This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the generator dataset.
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
hyeogi/SOLAR-10.7B-v1.5 | hyeogi | "2024-03-13T02:06:41Z" | 1,112 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-13T01:52:32Z" | ---
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR-10.7B
license: cc-by-nc-4.0
---
# SOLAR-10.7B
### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
### Datasets
- sampled and translated [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- sampled and converted to instruction format [HAERAE-HUB/KMMLU](https://huggingface.co/datasets/HAERAE-HUB/KMMLU)
- sampled and translated [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) |
chlee10/T3Q-ko-solar-sft-v3.0 | chlee10 | "2024-03-25T14:44:56Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"T3Q-ko-solar-sft-v3.0",
"kyujinpy/KoCommercial-NoSSL",
"conversational",
"en",
"dataset:davidkim205/ko_common_gen",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v3.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T14:16:38Z" | ---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- T3Q-ko-solar-sft-v3.0
- kyujinpy/KoCommercial-NoSSL
base_model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
datasets:
- davidkim205/ko_common_gen
model-index:
- name: T3Q-ko-solar-sft-v3.0
results: []
---
Update @ 2024.03.25

## T3Q-ko-solar-sft-v3.0
This model is an SFT fine-tuned version of chihoonlee10/T3Q-ko-solar-dpo-v3.0
**Model Developers** Chihoon Lee(chlee10), T3Q
## Training hyperparameters
The following hyperparameters were used during training:
```python
# Hyperparameters related to the dataset and number of training epochs
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch
# Hyperparameters for the training method
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06 # warmup_steps = 100
learning_rate = 5e-5
optimizer = 'paged_adamw_32bit'
weight_decay = 0.01
max_grad_norm = 1.0
# LoRA config
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["k_proj", "v_proj","gate_proj", "down_proj", "up_proj"]
# Options controlling the inputs produced by the tokenizer
train_on_inputs = False
add_eos_token = False
# NEFTune params
neftune_noise_alpha = 5
```
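For reference, a sketch of how the LoRA-related values above map onto a `peft.LoraConfig` (an approximation, not the original training code):
```python
from peft import LoraConfig

# Approximate mapping of the LoRA hyperparameters listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["k_proj", "v_proj", "gate_proj", "down_proj", "up_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```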
|
juengsi/EVO-SOLAR-10.7B-v0.1 | juengsi | "2024-03-31T23:24:43Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-31T23:08:57Z" | ---
library_name: transformers
license: cc-by-4.0
language:
- ko
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JY623/KoSOLAR-10.7B-merge-v3.2 | JY623 | "2024-04-16T07:11:51Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:JY623/KoSOLAR-10.7B-merge-v3.1",
"base_model:JY623/KoSOLAR-10.7B-merge-v3.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-16T06:46:39Z" | ---
base_model:
- JY623/KoSOLAR-10.7B-merge-v3.1
- JY623/KoSOLAR-10.7B-merge-v3.0
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# slerp_test2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [JY623/KoSOLAR-10.7B-merge-v3.1](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v3.1)
* [JY623/KoSOLAR-10.7B-merge-v3.0](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v3.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: JY623/KoSOLAR-10.7B-merge-v3.0
layer_range: [0, 48]
- model: JY623/KoSOLAR-10.7B-merge-v3.1
layer_range: [0, 48]
merge_method: slerp
base_model: JY623/KoSOLAR-10.7B-merge-v3.0
parameters:
t: 0.2
dtype: bfloat16
``` |
T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0 | T3Q-LLM | "2024-05-23T01:17:24Z" | 1,112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:maywell/ko_Ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-17T12:31:29Z" | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
datasets:
- maywell/ko_Ultrafeedback_binarized
base model:
- yanolja/EEVE-Korean-Instruct-10.8B-v1.0
---

# T3Q-LLM-sft1.0-dpo1.0
## This model is a version of T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0 that has been fine-tuned with DPO.
## Model Developers Chihoon Lee(chihoonlee10), T3Q
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## How to Use it
```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")
tokenizer = AutoTokenizer.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")
prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.
(A) 경성
(B) 부산
(C) 평양
(D) 서울
(E) 전주
Assistant:
(D) 서울이 한국의 수도입니다. 서울은 나라의 북동부에 위치해 있으며, 정치, 경제, 문화의 중심지입니다. 약 1,000만 명이 넘는 인구를 가진 세계에서 가장 큰 도시 중 하나입니다. 서울은 높은 빌딩, 현대적인 인프라, 활기 문화 장면으로 유명합니다. 또한, 많은 역사적 명소와 박물관이 있어 방문객들에게 풍부한 문화 체험을 제공합니다.
```
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9387|± |0.0064|
| | |macro_f1|0.9387|± |0.0064|
|kobest_copa | 0|acc |0.7590|± |0.0135|
| | |macro_f1|0.7585|± |0.0135|
|kobest_hellaswag| 0|acc |0.5080|± |0.0224|
| | |acc_norm|0.5580|± |0.0222|
| | |macro_f1|0.5049|± |0.0224|
|kobest_sentineg | 0|acc |0.8489|± |0.0180|
| | |macro_f1|0.8483|± |0.0180|
For comparison, the same evaluation on nlpai-lab/KULLM3: hf-causal-experimental (pretrained=nlpai-lab/KULLM3, use_accelerate=true, trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.8896|± |0.0084|
| | |macro_f1|0.8888|± |0.0084|
|kobest_copa | 0|acc |0.6930|± |0.0146|
| | |macro_f1|0.6925|± |0.0147|
|kobest_hellaswag| 0|acc |0.4640|± |0.0223|
| | |acc_norm|0.5240|± |0.0224|
| | |macro_f1|0.4612|± |0.0223|
|kobest_sentineg | 0|acc |0.6297|± |0.0243|
| | |macro_f1|0.6255|± |0.0244| |
RichardErkhov/jphme_-_em_german_7b_v01-gguf | RichardErkhov | "2024-06-01T02:28:40Z" | 1,112 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-01T00:53:05Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
em_german_7b_v01 - GGUF
- Model creator: https://huggingface.co/jphme/
- Original model: https://huggingface.co/jphme/em_german_7b_v01/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [em_german_7b_v01.Q2_K.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q2_K.gguf) | Q2_K | 2.36GB |
| [em_german_7b_v01.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [em_german_7b_v01.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [em_german_7b_v01.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [em_german_7b_v01.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [em_german_7b_v01.Q3_K.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q3_K.gguf) | Q3_K | 3.07GB |
| [em_german_7b_v01.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [em_german_7b_v01.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [em_german_7b_v01.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.IQ4_XS.gguf) | IQ4_XS | 0.67GB |
| [em_german_7b_v01.Q4_0.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q4_0.gguf) | Q4_0 | 0.54GB |
| [em_german_7b_v01.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.IQ4_NL.gguf) | IQ4_NL | 0.17GB |
| [em_german_7b_v01.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [em_german_7b_v01.Q4_K.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q4_K.gguf) | Q4_K | 0.1GB |
| [em_german_7b_v01.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [em_german_7b_v01.Q4_1.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q4_1.gguf) | Q4_1 | 0.1GB |
| [em_german_7b_v01.Q5_0.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q5_0.gguf) | Q5_0 | 0.1GB |
| [em_german_7b_v01.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q5_K_S.gguf) | Q5_K_S | 0.01GB |
| [em_german_7b_v01.Q5_K.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q5_K.gguf) | Q5_K | 0.01GB |
| [em_german_7b_v01.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q5_K_M.gguf) | Q5_K_M | 0.01GB |
| [em_german_7b_v01.Q5_1.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q5_1.gguf) | Q5_1 | 0.01GB |
| [em_german_7b_v01.Q6_K.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q6_K.gguf) | Q6_K | 0.0GB |
| [em_german_7b_v01.Q8_0.gguf](https://huggingface.co/RichardErkhov/jphme_-_em_german_7b_v01-gguf/blob/main/em_german_7b_v01.Q8_0.gguf) | Q8_0 | 0.0GB |
Original model description:
---
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: jphme
model_name: EM German
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
tags:
- pytorch
- german
- deutsch
- llama2
- meta
- facebook
---

# Table of Contents
1. [Introduction](#introduction)
2. [Links & Demos](#links--demos)
- [Model Links](#model-links)
- [Demos](#demos)
3. [Prompt Format](#prompt-format)
4. [Example Output](#example-output)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [Disclaimer](#disclaimer)
# Introduction
**EM German** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of various instructions in German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
We offer versions based on 7b, 13b and 70b Llama-2, Mistral and LeoLM (Llama-2/Mistral with continued pretraining on German texts) models.
Please find all information, example outputs, the special RAG prompt format, and eval results for the EM German model family in [our GitHub Repository](https://github.com/jphme/EM_German) ([German version](https://github.com/jphme/EM_German/blob/main/README_DE.md)). You will also find instructions on how to run the models with a GUI (GPT4All/LM Studio).
# Links & Demos
## Model Links
Should you only try one model version, I strongly recommend the **[LeoLM Mistral](https://huggingface.co/jphme/em_german_leo_mistral)** model which offers by far the best combination of performance and computing requirements!
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | [Link](https://huggingface.co/jphme/em_german_13b_leo_gguf) | tbc |
| [LeoLM Mistral](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_leo_mistral) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-AWQ) |
### Notes about the different versions:
See also the [comparison of example outputs](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md) for a comparison of (7b) model capabilities.
If you get unsatisfying results with one or another EM German version, please try a different (and/or larger) model or version for your usecase.
## Demos:
You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* [Example Colab Notebook for 7b Mistral GGUF with Grammar-based structured output](https://colab.research.google.com/drive/17O-a3D4BnVc_V9Of3dFfed18v1ULu1Sv?usp=sharing)
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks).
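For the GGUF files in this repository, a minimal inference sketch with `llama-cpp-python` (a sketch only; the filename is one of the quantizations listed in the table above):
```python
from llama_cpp import Llama

# Any of the GGUF files listed above can be used; Q4_K_M is a common choice.
llm = Llama(model_path="em_german_7b_v01.Q4_K_M.gguf", n_ctx=2048)

prompt = "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:"
out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```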
# Example Output
Please find a selection of various example outputs for all 7b versions of EM German [here](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md).
**Prompt:**
> Erkläre die Bedeutung von drei wesentlichen Elementen der Bilanzierung und des Accountings nach IFRS für Unternehmen, die auf internationaler Ebene tätig sind.
**Antwort EM German Leo Mistral 7b:**
```
Die International Financial Reporting Standards (IFRS) sind eine Reihe von internationalen Rechnungslegungsstandards, die von der International Accounting Standards Board (IASB) entwickelt wurden. Sie sollen dazu beitragen, dass Finanzberichte vergleichbar, zuverlässig und relevant sind.
1. Fair Value: Dieses Konzept ist ein zentraler Bestandteil der IFRS. Es besagt, dass Vermögenswerte und Schulden zum fairen Wert bilanziert werden sollten, was dem Preis entspricht, zu dem sie zwischen Marktteilnehmern gehandelt werden könnten. Dadurch wird sichergestellt, dass das Unternehmen eine genaue Darstellung seiner finanziellen Lage liefert.
2. Konsolidierung: Die IFRS verlangen von Unternehmen, ihre Finanzergebnisse auf konsolidierter Basis darzustellen. Das bedeutet, dass alle Tochtergesellschaften und verbundenen Unternehmen in die Bilanz des Mutterunternehmens aufgenommen werden. Dadurch wird ein klareres Bild von der finanziellen Situation des Konzerns geschaffen.
3. Vollständigkeitsprinzip: Die IFRS verlangen von Unternehmen, alle relevanten Informationen offenzulegen. Dies umfasst sowohl positive als auch negative Faktoren, wie zum Beispiel Gewinne und Verluste, Einnahmen und Ausgaben sowie Risiken und Unsicherheiten. Durch dieses Prinzip wird sichergestellt, dass Investoren, Kreditgeber und andere Stakeholder eine umfassende Einschätzung der finanziellen Lage des Unternehmens vornehmen können.
```
**Prompt:**
> Ein Mensch atmet durchschnittlich 16 Mal pro Minute. Wie oft atmet er an einem Tag?
**Antwort EM German Leo Mistral 7b:**
```
Zuerst berechnen wir, wie oft eine Person in einer Stunde atmet. Wir wissen, dass eine Stunde 60 Minuten hat, also:
Atemzüge pro Stunde = Atemfrequenz * Minuten pro Stunde
= 16 Atemzüge pro Minute * 60 Minuten
= 960 Atemzüge
Um herauszufinden, wie viele Atemzüge eine Person an einem Tag macht, müssen wir diese stündliche Rate mit der Anzahl der Stunden an einem Tag multiplizieren, also 24:
Atemzüge pro Tag = Atemzüge pro Stunde * Stunden pro Tag
= 960 Atemzüge * 24 Stunden
= 23.040 Atemzüge
Also macht ein durchschnittlicher Mensch etwa 23.040 Atemzüge an einem Tag.
```
(For more examples, please visit [our Github Repository](https://github.com/jphme/EM_German).)
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets.
Additionally many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM and last but not least many many thanks to [TheBloke](https://huggingface.co/TheBloke) for the preparation of quantized versions in all formats under the sun.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
For detailed feedback & feature requests, please open an issue or get in contact with me via [my website](https://www.jph.me).
*PS: We are also always interested in support for our startup [ellamind](https://ellamind.com), which will offer customized models for business applications in the future (we are currently still in stealth mode). If you use our models for business applications and have advanced needs for specialized capabilities, please get in touch.*
# Disclaimer:
I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
|
microsoft/BiomedVLP-CXR-BERT-general | microsoft | "2022-07-11T14:52:52Z" | 1,111 | 29 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"exbert",
"en",
"arxiv:2204.09817",
"arxiv:2103.00020",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-05-05T22:10:13Z" | ---
language: en
tags:
- exbert
license: mit
widget:
- text: "Left pleural effusion with adjacent [MASK]."
example_title: "Radiology 1"
- text: "Heart size normal and lungs are [MASK]."
example_title: "Radiology 2"
- text: "[MASK] is a tumor suppressor gene."
example_title: "Biomedical"
- text: "The patient was on [MASK] for chronic atrial fibrillation"
example_title: "Medication"
---
# CXR-BERT-general
[CXR-BERT](https://arxiv.org/abs/2204.09817) is a chest X-ray (CXR) domain-specific language model that makes use of an improved vocabulary, novel pretraining procedure, weight regularization, and text augmentations. The resulting model demonstrates improved performance on radiology natural language inference, radiology masked language model token prediction, and downstream vision-language processing tasks such as zero-shot phrase grounding and image classification.
First, we pretrain **CXR-BERT-general** from a randomly initialized BERT model via Masked Language Modeling (MLM) on abstracts from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and clinical notes from the publicly-available [MIMIC-III](https://physionet.org/content/mimiciii/1.4/) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/). In that regard, the general model is expected to be applicable to research in clinical domains other than chest radiology through domain-specific fine-tuning.
**CXR-BERT-specialized** is continually pretrained from CXR-BERT-general to further specialize in the chest X-ray domain. At the final stage, CXR-BERT is trained in a multi-modal contrastive learning framework, similar to the [CLIP](https://arxiv.org/abs/2103.00020) framework. The latent representation of the [CLS] token is utilized to align text/image embeddings.
## Model variations
| Model | Model identifier on HuggingFace | Vocabulary | Note |
| ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- | --------------------------------------------------------- |
| CXR-BERT-general | [microsoft/BiomedVLP-CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) | PubMed & MIMIC | Pretrained for biomedical literature and clinical domains |
| CXR-BERT-specialized (after multi-modal training) | [microsoft/BiomedVLP-CXR-BERT-specialized](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized) | PubMed & MIMIC | Pretrained for chest X-ray domain |
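A minimal sketch for trying masked-token prediction with the general model (assuming the standard `transformers` fill-mask pipeline works for this checkpoint; the repository's custom classes may require `trust_remote_code=True`):
```python
from transformers import pipeline

# Assumes the checkpoint is usable through the standard fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedVLP-CXR-BERT-general",
    trust_remote_code=True,
)

for pred in fill_mask("Left pleural effusion with adjacent [MASK]."):
    print(pred["token_str"], pred["score"])
```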
## Citation
The corresponding manuscript has been accepted for presentation at the [**European Conference on Computer Vision (ECCV) 2022**](https://eccv2022.ecva.net/).
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.09817,
doi = {10.48550/ARXIV.2204.09817},
url = {https://arxiv.org/abs/2204.09817},
author = {Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan},
title = {Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing},
publisher = {arXiv},
year = {2022},
}
```
## Model Use
### Intended Use
This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper.
#### Primary Intended Use
The primary intended use is to support AI researchers building on top of this work. CXR-BERT and its associated models should be helpful for exploring various clinical NLP & VLP research questions, especially in the radiology domain.
#### Out-of-Scope Use
**Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://arxiv.org/abs/2204.09817) for more details.
## Data
This model builds upon existing publicly-available datasets:
- [PubMed](https://pubmed.ncbi.nlm.nih.gov/)
- [MIMIC-III](https://physionet.org/content/mimiciii/)
- [MIMIC-CXR](https://physionet.org/content/mimic-cxr/)
These datasets reflect a broad variety of sources ranging from biomedical abstracts to intensive care unit notes to chest X-ray radiology notes. The radiology notes are accompanied with their associated chest x-ray DICOM images in MIMIC-CXR dataset.
## Performance
We demonstrate that this language model achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and novel language pretraining objective leveraging semantics and discourse characteristics in radiology reports.
A highlight of comparison to other common models, including [ClinicalBERT](https://aka.ms/clinicalbert) and [PubMedBERT](https://aka.ms/pubmedbert):
| | RadNLI accuracy (MedNLI transfer) | Mask prediction accuracy | Avg. # tokens after tokenization | Vocabulary size |
| ----------------------------------------------- | :-------------------------------: | :----------------------: | :------------------------------: | :-------------: |
| RadNLI baseline | 53.30 | - | - | - |
| ClinicalBERT | 47.67 | 39.84 | 78.98 (+38.15%) | 28,996 |
| PubMedBERT | 57.71 | 35.24 | 63.55 (+11.16%) | 28,895 |
| CXR-BERT (after Phase-III) | 60.46 | 77.72 | 58.07 (+1.59%) | 30,522 |
| **CXR-BERT (after Phase-III + Joint Training)** | **65.21** | **81.58** | **58.07 (+1.59%)** | 30,522 |
CXR-BERT also contributes to better vision-language representation learning through its improved text encoding capability. Below is the zero-shot phrase grounding performance on the **MS-CXR** dataset, which evaluates the quality of image-text latent representations.
| Vision–Language Pretraining Method | Text Encoder | MS-CXR Phrase Grounding (Avg. CNR Score) |
| ---------------------------------- | ------------ | :--------------------------------------: |
| Baseline | ClinicalBERT | 0.769 |
| Baseline | PubMedBERT | 0.773 |
| ConVIRT | ClinicalBERT | 0.818 |
| GLoRIA | ClinicalBERT | 0.930 |
| **BioViL** | **CXR-BERT** | **1.027** |
| **BioViL-L** | **CXR-BERT** | **1.142** |
Additional details about performance can be found in the corresponding paper, [Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing](https://arxiv.org/abs/2204.09817).
## Limitations
This model was developed using English corpora, and thus can be considered English-only.
## Further information
Please refer to the corresponding paper, ["Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing", ECCV'22](https://arxiv.org/abs/2204.09817) for additional details on the model training and evaluation.
For additional inference pipelines with CXR-BERT, please refer to the [HI-ML GitHub](https://aka.ms/biovil-code) repository. The associated source files will soon be accessible through this link.
|
Yntec/lametta | Yntec | "2023-09-07T15:58:01Z" | 1,111 | 4 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Chibi",
"Adorable",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Lasorco",
"Safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-06T15:02:07Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Chibi
- Adorable
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Lasorco
- diffusers
- Safetensors
---
# lametta
lametta v1602 with the MoistMixV2VAE baked in.
Samples and prompt:


pretty cute girl, accurately sitting, detailed chibi eyes, holding rocket launcher, beautiful detailed legs, police girl, gorgeous detailed hair, uniform hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by kyoani and clay mann
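A minimal text-to-image sketch with `diffusers` (a sketch; scheduler and sampling settings are left at their defaults):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/lametta", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute girl, detailed chibi eyes, gorgeous detailed hair, magazine ad, iconic, sharp focus"
image = pipe(prompt).images[0]
image.save("lametta_sample.png")
```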
Original Page:
https://huggingface.co/Lasorco/lametta |
bofenghuang/vigogne-2-13b-chat | bofenghuang | "2023-10-16T14:20:03Z" | 1,111 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"llama-2",
"finetuned",
"conversational",
"fr",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T13:43:08Z" | ---
license: llama2
language: fr
pipeline_tag: text-generation
inference: false
tags:
- LLM
- llama-2
- finetuned
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-2-13b-chat/resolve/main/logo_v2.jpg" alt="Vigogne" style="width: 30%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-2-13B-Chat: A Llama-2-based French Chat LLM
Vigogne-2-13B-Chat is a French chat LLM, based on [LLaMA-2-13B](https://ai.meta.com/llama), optimized to generate helpful and coherent responses in conversations with users.
Check out our [release blog](https://github.com/bofenghuang/vigogne/blob/main/blogs/2023-08-17-vigogne-chat-v2_0.md) and [GitHub repository](https://github.com/bofenghuang/vigogne) for more information.
**Usage and License Notices**: Vigogne-2-13B-Chat follows Llama-2's [usage policy](https://ai.meta.com/llama/use-policy). A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use).
## Prompt Template
We utilized prefix tokens `<|user|>:` and `<|assistant|>:` to distinguish between user and assistant utterances.
You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigogne-2-13b-chat")
conversation = [
{"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"},
{"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"},
{"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"},
{"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."},
{"role": "user", "content": "Comment monter en haut ?"},
]
print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True))
```
You will get
```
<s><|system|>: Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<|user|>: Bonjour ! Comment ça va aujourd'hui ?
<|assistant|>: Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?</s>
<|user|>: Quelle est la hauteur de la Tour Eiffel ?
<|assistant|>: La Tour Eiffel mesure environ 330 mètres de hauteur.</s>
<|user|>: Comment monter en haut ?
<|assistant|>:
```
## Usage
<!-- ### Inference using the quantized versions
The quantized versions of this model are generously provided by [TheBloke](https://huggingface.co/TheBloke)!
- AWQ for GPU inference: [TheBloke/Vigogne-2-13B-Chat-AWQ](https://huggingface.co/TheBloke/Vigogne-2-13B-Chat-AWQ)
- GTPQ for GPU inference: [TheBloke/Vigogne-2-13B-Chat-GPTQ](https://huggingface.co/TheBloke/Vigogne-2-13B-Chat-GPTQ)
- GGUF for CPU+GPU inference: [TheBloke/Vigogne-2-13B-Chat-GGUF](https://huggingface.co/TheBloke/Vigogne-2-13B-Chat-GGUF)
These versions facilitate testing and development with various popular frameworks, including [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), and more. -->
### Inference using the unquantized model with 🤗 Transformers
```python
from typing import Dict, List, Optional
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer
model_name_or_path = "bofenghuang/vigogne-2-13b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
def chat(
query: str,
history: Optional[List[Dict]] = None,
temperature: float = 0.7,
top_p: float = 1.0,
top_k: float = 0,
repetition_penalty: float = 1.1,
max_new_tokens: int = 1024,
**kwargs,
):
if history is None:
history = []
history.append({"role": "user", "content": query})
input_ids = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=temperature,
do_sample=temperature > 0.0,
top_p=top_p,
top_k=top_k,
repetition_penalty=repetition_penalty,
max_new_tokens=max_new_tokens,
pad_token_id=tokenizer.eos_token_id,
**kwargs,
),
streamer=streamer,
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
history.append({"role": "assistant", "content": generated_text})
return generated_text, history
# 1st round
response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None)
# 2nd round
response, history = chat("Quand il peut dépasser le lapin ?", history=history)
# 3rd round
response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history)
```
You can also use the Google Colab Notebook provided below.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Inference using the unquantized model with vLLM
Set up an OpenAI-compatible server with the following command:
```bash
# Install vLLM
# This may take 5-10 minutes.
# pip install vllm
# Start server for Vigogne-Chat models
python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigogne-2-13b-chat
# List models
# curl http://localhost:8000/v1/models
```
Query the model using the openai python package.
```python
import openai
# Modify OpenAI's API key and API base to use vLLM's API server.
openai.api_key = "EMPTY"
openai.api_base = "http://localhost:8000/v1"
# First model
models = openai.Model.list()
model = models["data"][0]["id"]
# Chat completion API
chat_completion = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "user", "content": "Parle-moi de toi-même."},
],
max_tokens=1024,
temperature=0.7,
)
print("Chat completion results:", chat_completion)
```
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
LI-ST/Mistral-7B-ko-v0.001 | LI-ST | "2024-01-09T07:35:02Z" | 1,111 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T10:20:01Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko</h1></p>
base model: Open-Orca/Mistral-7B-OpenOrca
=================================================
<BR>
This model is a temporary model for testing.
<BR>
We will be deleting it soon.
<BR>
=================================================
|
GAI-LLM/Yi-Ko-6B-dpo-v6 | GAI-LLM | "2024-04-24T01:49:53Z" | 1,111 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T01:26:42Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/Yi-Ko-6B-dpo-v6**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/Yi-Ko-6B-dpo-v6 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [GAI-LLM/Yi-Ko-6B-mixed-v15](https://huggingface.co/GAI-LLM/Yi-Ko-6B-mixed-v15)
**Training Dataset**
- We combined the Open Korean Dataset using a mixed strategy with DPO
- We used 8 A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
- 1st in ~7B (2024/2/1)
<img src='./gai-llm-yi-ko-6b-dpo-v6.png' width=512>
# Implementation Code
```python
### GAI-LLM/Yi-Ko-6B-dpo-v6
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/Yi-Ko-6B-dpo-v6"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
``` |
DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2 | DopeorNope | "2024-01-29T04:19:24Z" | 1,111 | 14 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"ko",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T12:45:55Z" | ---
license: cc-by-nc-sa-4.0
language:
- ko
- en
tags:
- moe
---
# The license is cc-by-nc-sa-4.0.
- Commercializing is not allowed.

---
# Not based on the Synatra model; we pre-trained and fully fine-tuned Mixtral x2 to enhance Korean abilities.
# Developer
Seungyoo Lee (DopeorNope), Kyujin Han(kyujinpy)
---
# DATASET.
- Continual pre-training was performed using the AI Hub corpus, and we applied instruction tuning using AI Hub datasets.
- In a self-supervised manner, we converted the raw corpus into instruction-tuned data.
- We used text-mining techniques to create the training data.
- Here are some examples...
- **Mask prediction Task**
```python
#Mask prediction
text='지능(智能) 또는 인텔리전스(intelligence)는 인간의 <MASK> 능력을 말한다.'
response='지적'
complete_text='지능(智能) 또는 인텔리전스(intelligence)는 인간의 지적 능력을 말한다.'
```
- **Text align Task**
```python
#Text-align Task
text_list=['복수명령-복수자료(MIMD,Multiple Instruction, Multiple Data)은 전산에서 병렬화의 한 기법이다.',
'분산 메모리의 예는 MPP(massively parallel processors)와 COW (Clusters of Workstations)이다.',
'MIMD기계는 공유 메모리이거나 분산 메모리이며 이러한 분류는 MIMD가 어떻게 메모리를 이용하느냐에 따라 나뉜다.']
response='복수명령-복수자료(MIMD,Multiple Instruction, Multiple Data)은 전산에서 병렬화의 한 기법이다. \
MIMD기계는 공유 메모리이거나 분산 메모리이며 이러한 분류는 MIMD가 어떻게 메모리를 이용하느냐에 따라 나뉜다. \
분산 메모리의 예는 MPP(massively parallel processors)와 COW (Clusters of Workstations)이다.'
```
- **Text completion Task**
```python
#Text Completion
text= '그린브라우저(GreenBrowser)는 인터넷 익스플로러에서 사용하는 트라이던트 레이아웃 엔진을 바탕으로 하며 중국에 기반을 둔 소프트웨어 회사인 모어퀵(morequick)에서 만든 무료 웹 브라우저다. 간체자 중국어가 웹 브라우저에 내장되어 있다.
맥스톤 웹 브라우저와 비슷하여 MyIE와 밀접하게 관련되어 있다. 맥스톤용의 일부 플러그인이 그린브라우저에서도 작동할 것이다.'
response= '자동 스크롤, 자동 리프레시, 자동 저장, 자동 폼 채우기와 같은 많은 자동화 기능이 있다.'
```
---
# Acknowledgement
Markr AI is in constant communication with numerous open-source developers and researchers. We would also like to express our gratitude to **Beomi** and **Maywell**, who have provided many insights through extensive discussions in the development of the model. |
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.63 | jungyuko | "2024-01-24T15:13:24Z" | 1,111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T13:50:53Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v0.63
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup | GAI-LLM | "2024-01-29T02:27:10Z" | 1,111 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T02:17:11Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)
**Training Dataset**
- We combined the Open Korean Dataset using a mixed strategy with near-deduplication
- We used 8 A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
``` |
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.73 | jungyuko | "2024-01-29T11:19:26Z" | 1,111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T11:06:35Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v0.73
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
colable/llama-ko-peft-v0.6 | colable | "2024-02-01T03:56:48Z" | 1,111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-01T03:46:21Z" | ---
license: mit
language:
- ko
---
# open-llama-2-ko based model with inhouse dataset
This is a Korean model based on
* [beomi/open-llama-2-ko-7b]
GPU code example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "colable/llama-ko-peft-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
``` |
junga/hjys_LLM_final | junga | "2024-02-15T14:52:40Z" | 1,111 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"en",
"dataset:beomi/KoAlpaca-v1.1a",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-13T04:00:28Z" | ---
license: cc-by-nc-4.0
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
- en
---
### Model Card: hjys_LLM_final (42dot LLM-SFT-1.3B Fine-Tuned Version)
#### Model Overview
The 42dot LLM-SFT-1.3B is a fine-tuned version of the large language model developed by 42dot, specifically undergoing Supervised Fine-Tuning (SFT) to enhance its ability to follow natural language instructions. This model aims to improve scores on the ko-CommonGen V2 task, for which it was fine-tuned using the `beomi/KoAlpaca-v1.1a` dataset.
#### Dataset
The `beomi/KoAlpaca-v1.1a` dataset used for fine-tuning offers a rich resource for Korean natural language processing, contributing to the advancement of the model's language understanding and generation capabilities.
#### Goal
The primary goal of this model is to improve scores on the ko-CommonGen V2 task, which involves generating meaningful sentences using given words, assessing the model's creativity and language comprehension. This model is equipped to effectively use specific keywords to generate meaningful sentences.
#### Fine-Tuning Details
- **Parameters**: 1.3B
- **Layers**: 24
- **Attention Heads**: 32
- **Hidden Size**: 2,048
- **FFN Size**: 5,632
- **Maximum Length**: 4,096 tokens
- **Training Time**: 5 GPU hours on NVIDIA A100 (Google Colab Pro+)
#### Limitations and Ethical Considerations
Like other LLMs, the 42dot LLM-SFT-1.3B may produce hallucinated or biased content. Users should be aware of these limitations and take appropriate actions.
#### Disclaimer
Contents generated by this model do not necessarily reflect the views of 42dot Inc. All responsibility lies with the end-user, and 42dot assumes no liability.
#### License
This model is available for non-commercial use only, under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license.
|
hyeogi/SOLAR-10.7B-v1.6 | hyeogi | "2024-03-20T00:38:31Z" | 1,111 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-20T00:23:36Z" | ---
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR-10.7B
license: cc-by-nc-4.0
---
# SOLAR-10.7B
### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
### Datasets
- sampled and translated [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- sampled and converted to instruction format [HAERAE-HUB/KMMLU](https://huggingface.co/datasets/HAERAE-HUB/KMMLU) |
hkss/hk-SOLAR-10.7B-v2.0 | hkss | "2024-04-01T06:10:57Z" | 1,111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-01T06:03:17Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
### Model Description
Finetuning based on [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0).
### Training Method
Using DeepSpeed, Accelerate, TRL, etc.
### Datasets
TBA |
Heoni/Aguie_v0.1 | Heoni | "2024-04-04T12:16:33Z" | 1,111 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-04T10:02:10Z" | ---
license: cc-by-nc-nd-4.0
language:
- ko
- en
---
Asura balbalta, asura balbalta
The hand is quicker than the eye! Which card do you want to draw?
Asura balbalta, asura balbalta
Do you want to make money?
Asura balbalta, asura balbalta
Do you want to become rich?
Do you want to become rich?
When it comes to hwatu, there are exactly three players in all of Korea: Jjakgwi in Gyeongsang-do, Agui in Jeolla-do, and, nationwide, me! Once Jjakgwi and Agui went head to head, and Agui cut off Jjakgwi's ear. That's why he's called Jjakgwi.
# Aguie_v0.1
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a continually pre-trained version of Llama-2-koen-13b.
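The card gives no usage snippet. A minimal sketch, assuming the standard Llama-style causal-LM interface inherited from the base model (dtype, device placement, and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heoni/Aguie_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "한국어로 간단히 자기소개를 해줘."  # "Briefly introduce yourself in Korean."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```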
### Trained Data
- 90 GB of Korean text and 45 GB of English text
### License
This model is licensed under the cc-by-nc-nd-4.0. |
mradermacher/Ninja-v1-NSFW-RP-GGUF | mradermacher | "2024-05-20T22:56:02Z" | 1,111 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"ja",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP",
"dataset:Aratako/LimaRP-augmented-ja-karakuri",
"dataset:grimulkan/LimaRP-augmented",
"dataset:Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:OmniAICreator/Japanese-Roleplay",
"base_model:Aratako/Ninja-v1-NSFW-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-19T23:47:17Z" | ---
base_model: Aratako/Ninja-v1-NSFW-RP
datasets:
- Aratako/Rosebleu-1on1-Dialogues-RP
- Aratako/LimaRP-augmented-ja-karakuri
- grimulkan/LimaRP-augmented
- Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- OmniAICreator/Japanese-Roleplay
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aratako/Ninja-v1-NSFW-RP
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
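As a concrete illustration (not part of the original card): a minimal completion with `llama-cpp-python`, assuming the package is installed and one quant file (Q4_K_M shown) has been downloaded locally; this roleplay merge's exact chat template is not stated here, so a plain prompt is used.
```python
from llama_cpp import Llama

# Path to a locally downloaded quant file from this repository.
llm = Llama(model_path="Ninja-v1-NSFW-RP.Q4_K_M.gguf", n_ctx=4096)

out = llm("こんにちは。簡単に自己紹介をしてください。", max_tokens=128)
print(out["choices"][0]["text"])
```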
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Ninja-v1-NSFW-RP-GGUF/resolve/main/Ninja-v1-NSFW-RP.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Helsinki-NLP/opus-mt-en-ml | Helsinki-NLP | "2023-08-16T11:30:31Z" | 1,110 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ml",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ml
* source languages: en
* target languages: ml
* OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md)
* dataset: opus+bt+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip)
* test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt)
* test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt)
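The card lists no usage snippet; a minimal sketch with the standard MarianMT classes from `transformers`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ml"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a single English sentence into Malayalam.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```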
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ml | 19.1 | 0.536 |
|
timm/swin_tiny_patch4_window7_224.ms_in22k_ft_in1k | timm | "2024-02-10T23:31:48Z" | 1,110 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2103.14030",
"license:mit",
"region:us"
] | image-classification | "2023-03-18T04:15:31Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for swin_tiny_patch4_window7_224.ms_in22k_ft_in1k
A Swin Transformer image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.3
- GMACs: 4.5
- Activations (M): 17.1
- Image size: 224 x 224
- **Papers:**
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swin_tiny_patch4_window7_224.ms_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_tiny_patch4_window7_224.ms_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_tiny_patch4_window7_224.ms_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
4bit/gpt4-x-alpaca-13b-native-4bit-128g-cuda | 4bit | "2023-04-09T17:01:12Z" | 1,110 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-09T16:57:11Z" | Quant of https://huggingface.co/chavinlo/gpt4-x-alpaca
There is already a quantisation at https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g, but neither the Triton nor the CUDA version uploaded there seems to work on older versions of GPTQ-for-LLaMA, such as the one currently used for 4-bit support in KoboldAI via 0cc4m's fork.
This one was quantized with CUDA, not Triton, using the following command:
python llama.py ./gpt4-x-alpaca c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors gpt-x-alpaca-13b-native-4bit-128g-cuda.safetensors |
ProomptEngineer/pe-shitty-medieval-paintings | ProomptEngineer | "2023-09-11T15:29:34Z" | 1,110 | 8 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-11T15:29:30Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEBadMedivalArt
widget:
- text: PEBadMedivalArt
---
# PE Shitty Medieval Paintings

## Does what it says

## Weights 0.8-1

## If you want to donate: [https://ko-fi.com/proomptengineer](https://ko-fi.com/proomptengineer)
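Not part of the original card: a minimal `diffusers` sketch for trying this LoRA on its SDXL base. The default LoRA weight lookup is an assumption -- if it fails, pass `weight_name="<file>.safetensors"` with the actual filename from this repository.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA directly from the Hub.
pipe.load_lora_weights("ProomptEngineer/pe-shitty-medieval-paintings")

# "PEBadMedivalArt" is the trigger token from the card; the recommended LoRA weight of
# 0.8-1 can be applied on recent diffusers versions via cross_attention_kwargs={"scale": 0.9}.
image = pipe(
    "PEBadMedivalArt, a knight riding a snail, illuminated manuscript",
    num_inference_steps=30,
).images[0]
image.save("medieval.png")
```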
## Image examples for the model:









|
TheBloke/em_german_70b_v01-GGUF | TheBloke | "2023-10-01T23:29:43Z" | 1,110 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-2",
"german",
"deutsch",
"text-generation",
"de",
"base_model:jphme/em_german_70b_v01",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-01T22:15:38Z" | ---
base_model: jphme/em_german_70b_v01
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: Jan Philipp Harries
model_name: EM German 70B v01
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- german
- deutsch
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EM German 70B v01 - GGUF
- Model creator: [Jan Philipp Harries](https://huggingface.co/jphme)
- Original model: [EM German 70B v01](https://huggingface.co/jphme/em_german_70b_v01)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jan Philipp Harries's EM German 70B v01](https://huggingface.co/jphme/em_german_70b_v01).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF)
* [Jan Philipp Harries's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/em_german_70b_v01)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: EmGerman
```
Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [em_german_70b_v01.Q2_K.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [em_german_70b_v01.Q3_K_S.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [em_german_70b_v01.Q3_K_M.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [em_german_70b_v01.Q3_K_L.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [em_german_70b_v01.Q4_0.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [em_german_70b_v01.Q4_K_S.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [em_german_70b_v01.Q4_K_M.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [em_german_70b_v01.Q5_0.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [em_german_70b_v01.Q5_K_S.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [em_german_70b_v01.Q5_K_M.gguf](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF/blob/main/em_german_70b_v01.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| em_german_70b_v01.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| em_german_70b_v01.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `em_german_70b_v01.Q6_K.gguf-split-a`
* `em_german_70b_v01.Q6_K.gguf-split-b`
### q8_0
Please download:
* `em_german_70b_v01.Q8_0.gguf-split-a`
* `em_german_70b_v01.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat em_german_70b_v01.Q6_K.gguf-split-* > em_german_70b_v01.Q6_K.gguf && rm em_german_70b_v01.Q6_K.gguf-split-*
cat em_german_70b_v01.Q8_0.gguf-split-* > em_german_70b_v01.Q8_0.gguf && rm em_german_70b_v01.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B em_german_70b_v01.Q6_K.gguf-split-a + em_german_70b_v01.Q6_K.gguf-split-b em_german_70b_v01.Q6_K.gguf
del em_german_70b_v01.Q6_K.gguf-split-a em_german_70b_v01.Q6_K.gguf-split-b
COPY /B em_german_70b_v01.Q8_0.gguf-split-a + em_german_70b_v01.Q8_0.gguf-split-b em_german_70b_v01.Q8_0.gguf
del em_german_70b_v01.Q8_0.gguf-split-a em_german_70b_v01.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/em_german_70b_v01-GGUF and below it, a specific filename to download, such as: em_german_70b_v01.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/em_german_70b_v01-GGUF em_german_70b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/em_german_70b_v01-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/em_german_70b_v01-GGUF em_german_70b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m em_german_70b_v01.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/em_german_70b_v01-GGUF", model_file="em_german_70b_v01.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jan Philipp Harries's EM German 70B v01

# EM German 70b
([Zur deutschen Version](#deutsch-em-german))
**EM German (v01)** is an experimental llama2-based model family, finetuned on a large dataset of various instructions in German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
This is the 70b llama2 version.
# Links & Demos
We will publish further instructions, updates and code-snippets in the project's [Github-Repo](https://github.com/jphme/EM_German).
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
## Demos:
You can use some of the models with **free** google Colab instances (e.g. the 7bn model in 8bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Further Examples for LLM apps will follow soon...
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks).
# Results
Attached are a few example outputs. I also developed a few small German benchmarks (based on the Eleuther LM Evaluation Harness), which I will publish soon.
## Example Output
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Response EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Response EM 7b v01
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Factual Retrieval / RAG
EM Models offer a special format for Factual Retrieval/RAG that is optimized for minimizing hallucinations. We use the same format as Airoboros for this:
### Example
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Model response:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Note that the model would normally always answer with the correct response to a trivial question like this but is trained to deny an answer, if there is no source between the BEGININPUT and ENDINPUT that provides a grounding.
If we keep the same Context/Input but ask instead:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
We get the correct answer with the source:
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
# Evaluation
I will publish some custom German benchmarks and the results for them soon.
# Dataset
The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications.
## Use with tranformers
Unchanged from the base model:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'jphme/em_german_7b_v01'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
def ask_model(instruction, system='Du bist ein hilfreicher Assistent.'):
prompt=f"{system} USER: {instruction} ASSISTANT:"
input_tokens=tokenizer(prompt, return_tensors="pt").to(model.device)
output_tokens=model.generate(**input_tokens, max_new_tokens=200)[0]
answer=tokenizer.decode(output_tokens, skip_special_tokens=True)
return answer
print(ask_model("Nenne mir 10 gute Gründe dafür, heute Sport zu machen!"))
```
# Limitations & Biases
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback.
*PS: I am also still searching for a Co-Founder*.
# Disclaimer:
The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.
-------------
-------------
-------------
# (Deutsch) EM German
**EM German (v01)** ist eine experimentelle, Llama2-basierte KI-Modellreihe, die in deutscher Sprache fine-getuned wurde.
Die Modelle sind für deutschen Text optimiert und können Inhalte in deutscher Sprache verstehen und generieren.
# Links & Demos
Im [Github-Repo](https://github.com/jphme/EM_German) Repo werde ich weitere Beispiele und Code-Snippets veröffentlichen.
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
## Colab:
Einige der Modelle können mit kostenlosen Google Colab-Instanzen verwendet werden (z.B. das 7b-Modell in 8-Bit oder das 13b-Modell mit GPTQ):
* [Beispiel-Notebook 13b mit GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Beispiel-Notebook 7b mit 8-Bit](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Weitere Beispiele für LLM-Apps folgen in Kürze...
# Eingabeformat
Dieses Modell folgt dem Vicuna-Format ohne Zeilenumbrüche (sollte aber auch mit Zeilenumbrüchen funktionieren). Das Format lautet wie folgt:
```
Du bist ein hilfreicher Assistent. USER: <Anweisung> ASSISTANT:
```
Sie können das Standard-Systemprompt aber auch ändern (siehe unten für ein Beispiel bei Aufgaben mit externen Datenbanken).
# Ergebnisse
Anbei einige Beispiele für Modell-Outputs. Wir werden demnächst einige kleine deutsche Benchmarks veröffentlichen (basierend auf dem Eleuther LM Evaluation Harness), die wir bei diesen Modellen bereits getestet haben.
## Beispiel-Ausgaben
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Antwort EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Antwort EM 7b v01:
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Sachliche Abfrage / RAG
EM-Modelle bieten ein spezielles Format für RAG (also die Verwendung mit Informationen aus zusätzlichen externen Quellen wie Datenbanken oder Webseiten), das für die Minimierung von Halluzinationen und Falschinformationen optimiert ist. Wir verwenden dasselbe Format wie Airoboros dafür.
### Beispiel
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Modellantwort:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Hier würde das Modell normalerweise die (triviale) Antwort (2) geben, ist aber trainiert, keine Antwort zu geben, wenn es keine Quelle zwischen BEGININPUT und ENDINPUT gibt, die eine belegbare Antwort liefert.
Wenn wir anstattdessen fragen:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
Bekommen wir die korrekte Antwort mit Quellenangabe
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
# Evaluation
coming soon
# Datensatz
Der Datensatz ist eine Mischung aus Multi-Turn-Chats, RAG Instructions und synthetisch generierten Finetuning-Daten, die viele Themen und Anwendungen abdecken.
# Einschränkungen & Vorurteile
Dieses Modell kann sachlich falsche Ausgaben produzieren und sollte nur zu Forschungszwecken verwendet werden.
# Danksagungen:
Vielen Dank an [winglian/caseus](https://huggingface.co/winglian) für seine großartige Arbeit an Axolotl, die ich zur Schulung der EM-Modelle verwendet habe. Ich bin auch [Jon Durbin](https://huggingface.co/jondurbin) für seine Airoboros-Modelle und Code-Schnipsel dankbar, von denen ich viele Ideen nutzen konnte.
Das 70b-Modell wurde mit Unterstützung des [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/) trainiert.
# Kontakt
Wenn Sie an customized LLMs für geschäftliche Anwendungen interessiert sind, kontaktieren Sie mich bitte über [meine Website](https://www.jph.me). Ich freue mich auch immer über Anregungen und Feedback zu meinen Modellen.
*PS: Ich suche auch immer noch einen Co-Founder für unser Startup, das sich noch im Stealth-Modus befindet.*
# Haftungsausschluss:
Ich bin nicht verantwortlich für die Handlungen Dritter, die dieses Modell verwenden. Dieses Modell sollte nur für Forschungszwecke verwendet werden. Die ursprüngliche Llama2-Lizenz gilt und wird mit den Modell-Dateien verteilt.
<!-- original-model-card end -->
|
Edentns/DataVortexS-10.7B-v0.4 | Edentns | "2024-02-24T14:18:06Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:Edentns/data_go_kr-PublicDoc",
"dataset:Edentns/aihub-TL_unanswerable_output",
"dataset:Edentns/aihub-TL_span_extraction_how_output",
"dataset:Edentns/aihub-TL_multiple_choice_output",
"dataset:Edentns/aihub-TL_text_entailment_output",
"dataset:jojo0217/korean_rlhf_dataset",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:HumanF-MarkrAI/WIKI_QA_Near_dedup",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T00:03:19Z" | ---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: LDCC/LDCC-SOLAR-10.7B
pipeline_tag: text-generation
datasets:
- Edentns/data_go_kr-PublicDoc
- Edentns/aihub-TL_unanswerable_output
- Edentns/aihub-TL_span_extraction_how_output
- Edentns/aihub-TL_multiple_choice_output
- Edentns/aihub-TL_text_entailment_output
- jojo0217/korean_rlhf_dataset
- kyujinpy/KOR-OpenOrca-Platypus-v3
- beomi/KoAlpaca-v1.1a
- HumanF-MarkrAI/WIKI_QA_Near_dedup
---
# **DataVortexS-10.7B-v0.4**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### **Trained On**
- **OS**: Ubuntu 20.04
- **GPU**: H100 80GB 2ea
- **transformers**: v4.36.2
### **Dataset**
- Edentns/data_go_kr-PublicDoc - private
- Edentns/aihub-TL_unanswerable_output - private
- Edentns/aihub-TL_span_extraction_how_output - private
- Edentns/aihub-TL_multiple_choice_output - private
- Edentns/aihub-TL_text_entailment_output - private
- [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
- [HumanF-MarkrAI/WIKI_QA_Near_dedup](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup)
### **Instruction format**
It follows **Alpaca** format.
E.g.
```python
text = """\
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.
### Instruction:
대한민국의 수도는 어디야?
### Response:
대한민국의 수도는 서울입니다.
### Instruction:
서울 인구는 총 몇 명이야?
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | ----------: | -------------: | -------------: | -----------: |
| kobest_boolq | 0.389066 | 0.912924 | 0.912808 | 0.906428 |
| kobest_copa | 0.744865 | 0.747742 | 0.768856 | 0.785896 |
| kobest_hellaswag | 0.455793 | 0.443909 | 0.465783 | 0.472771 |
| kobest_sentineg | 0.584156 | 0.947082 | 0.962216 | 0.954657 |
| **Average** | **0.54347** | **0.76291425** | **0.77741575** | **0.779938** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 54.15 | 49.4 | 59.7 | 54.63 | 47.5 | 59.5 |
## **Implementation Code**
This model contains the chat_template instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.4")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.4")
messages = [
{"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
{"role": "user", "content": "대한민국의 수도는 어디야?"},
{"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
{"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
HwiyeolJo/TeamJaeCorpo-v0.2 | HwiyeolJo | "2024-01-15T12:10:26Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T11:09:24Z" | ---
license: mit
---
test of model submission |
jingyeom/KoSoLAR-10.7B-v0.2_1.4_dedup_1 | jingyeom | "2024-02-01T05:17:25Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-31T00:18:18Z" | ---
license: apache-2.0
---
## Model
- base_model : yanolja/KoSOLAR-10.7B-v0.2
- training objective: instruction tuning
## Dataset
Collected from publicly available data
- deduplicated using the algorithm from *Deduplicating Training Data Makes Language Models Better* (a simplified sketch follows this list)
- instruction version 1.4
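Not part of the original card: the cited paper removes exact and near duplicates using suffix arrays and MinHash; the sketch below is a deliberately simplified exact-duplicate filter, shown only to illustrate the idea.
```python
import hashlib

def drop_exact_duplicates(docs):
    """Keep the first occurrence of each document after light normalization.

    Simplified illustration only; the paper additionally removes near-duplicates
    and repeated substrings, which this sketch does not attempt.
    """
    seen, kept = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.split()).lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

print(drop_exact_duplicates(["Hello  world", "hello world", "Another document"]))
# -> ['Hello  world', 'Another document']
```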
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jjingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup_1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
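
# Not in the original card: a short generation example (prompt and settings are illustrative).
prompt = "서울에 대해 한 문장으로 설명해줘."  # "Describe Seoul in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))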
``` |
metavoiceio/metavoice-1B-v0.1 | metavoiceio | "2024-04-03T10:30:14Z" | 1,110 | 724 | metavoice | [
"metavoice",
"pretrained",
"text-to-speech",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2024-02-06T17:02:48Z" | ---
license: apache-2.0
language:
- en
tags:
- pretrained
- text-to-speech
library_name: metavoice
inference: false
---
MetaVoice-1B is a 1.2B parameter base model trained on 100K hours of speech for TTS (text-to-speech). It has been built with the following priorities:
* Emotional speech rhythm and tone in English. No hallucinations.
* Support for voice cloning with finetuning.
* We have had success with as little as 1 minute training data for Indian speakers.
* Zero-shot cloning for American & British voices, with 30s reference audio.
* Support for long-form synthesis.
We’re releasing MetaVoice-1B under the Apache 2.0 license; *it can be used without restrictions*.
## Usage
See [Github](https://github.com/metavoiceio/metavoice-src) for the latest usage instructions.
## Finetuning
See [Github](https://github.com/metavoiceio/metavoice-src?tab=readme-ov-file#finetuning) for the latest finetuning instructions.
## Soon
- Long form / arbitrary length TTS
- Streaming
## Architecture
We predict EnCodec tokens from text and speaker information. This is then diffused up to the waveform level, with post-processing applied to clean up the audio.
* We use a causal GPT to predict the first two hierarchies of EnCodec tokens. Text and audio are part of the LLM context. Speaker information is passed via conditioning at the token embedding layer. This speaker conditioning is obtained from a separately trained speaker verification network.
- The two hierarchies are predicted in a "flattened interleaved" manner: we predict the first token of the first hierarchy, then the first token of the second hierarchy, then the second token of the first hierarchy, and so on (see the sketch after this list).
- We use condition-free sampling to boost the cloning capability of the model.
- The text is tokenised using a custom trained BPE tokeniser with 512 tokens.
- Note that we've skipped predicting semantic tokens as done in other works, as we found that this isn't strictly necessary.
* We use a non-causal (encoder-style) transformer to predict the rest of the 6 hierarchies from the first two hierarchies. This is a super small model (~10Mn parameters), and has extensive zero-shot generalisation to most speakers we've tried. Since it's non-causal, we're also able to predict all the timesteps in parallel.
* We use multi-band diffusion to generate waveforms from the EnCodec tokens. We noticed that the speech is clearer than using the original RVQ decoder or VOCOS. However, the diffusion at waveform level leaves some background artifacts which are quite unpleasant to the ear. We clean this up in the next step.
* We use DeepFilterNet to clear up the artifacts introduced by the multi-band diffusion.
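For intuition, the "flattened interleaved" ordering can be sketched as follows (a toy illustration, not the actual MetaVoice code):
```python
def flatten_interleave(hierarchy_1, hierarchy_2):
    """Serialise two equally long token streams as h1[0], h2[0], h1[1], h2[1], ..."""
    assert len(hierarchy_1) == len(hierarchy_2)
    flat = []
    for t1, t2 in zip(hierarchy_1, hierarchy_2):
        flat.extend([t1, t2])
    return flat

# Dummy EnCodec token ids for the first two hierarchies
h1 = [101, 102, 103]  # first (coarse) hierarchy
h2 = [201, 202, 203]  # second hierarchy
print(flatten_interleave(h1, h2))  # [101, 201, 102, 202, 103, 203]
```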
## Optimizations
The model supports:
1. KV-caching via Flash Decoding
2. Batching (including texts of different lengths)
|
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.1 | jungyuko | "2024-02-15T05:12:18Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-15T01:58:16Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.1
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows this list):
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
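For illustration only, these settings correspond roughly to the following Hugging Face `TrainingArguments`; the output directory name and the exact mixed-precision flag are assumptions:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="davinci-42dot-plm-1.3b-v1.1",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 24 x 4 = 96 effective train batch size
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults
)
```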
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
kurugai/Kurugai-EEVE-v1.0 | kurugai | "2024-03-28T23:50:10Z" | 1,110 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:maywell/kiqu_samples",
"base_model:yanolja/EEVE-Korean-10.8B-v1.0",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-03T11:59:26Z" | ---
license: cc-by-nc-sa-4.0
base_model: yanolja/EEVE-Korean-10.8B-v1.0
datasets:
- maywell/kiqu_samples
language:
- ko
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**kurugai/Kurugai-EEVE-v1.0** is a model trained on the **maywell/kiqu_samples** dataset, using **yanolja/EEVE-Korean-10.8B-v1.0** as the base model.
# Training time
Training took 16 hours on a single RTX 8000 GPU.
# Benchmarks
## **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 54.72 | 49.66 | 58.91 | 52.17 | 50.87 | 61.98 |
## **[LogicKor, a multi-domain reasoning benchmark for Korean language models](https://lk.instruct.kr/)**
| Reasoning | Math | Writing | Coding | Understanding | Grammar | Single-turn | Multi-turn | Overall |
| ----: | -----: | -----------: | ------: | ------------: | --------------: | ------: | ------: | ------: |
| 3.00 | 2.71 | 3.92 | 4.21 | 6.35 | 4.14 | 4.59 | 3.52 | 4.05 |
# Template
This model follows the Alpaca prompt format (a usage sketch follows the example below).
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
{output}
```
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
코로나19에 걸렸는데 어떻게 대처해야해?
### Response:
코로나19에 걸릴 경우 가장 중요한 것은 자신의 건강 상태를 잘 모니터링하고 적절한 의료 조치를 취하는 것이에요. 여기 몇 가지 기본적인 대응 방법을 소개할게요:
1. **증상 관리**: 발열, 기침, 호흡곤란 등의 증상이 나타날 수 있어요. 이러한 증상들이 심해지거나 악화될 경우 즉시 의사와 상담하거나 병원을 방문하세요. 또한 충분한 수분 섭취와 영양 보충을 통해 몸의 회복을 도와주세요.
2. **접촉자 추적 및 격리**: 감염 사실을 알게 되면 접촉자들과의 소통을 시작하고 필요한 경우 자가격리를 시작하셔야 해요. 이는 다른 사람들에게 바이러스를 전파하지 않도록 예방하기 위한 조치예요.
3. **의료 서비스 활용**: 증상이 심각해질 경우 즉각적인 의료 도움을 받아야 해요. 병원에 가서 진료를 받거나, 필요하다면 입원을 고려해보세요. 또한 처방된 약물이나 치료 계획을 따르는 것도 중요해요.
4. **정신적 지원 받기**: 감염병으로 인한 스트레스나 불안은 자연스러운 반응이에요. 만약 필요하면 정신건강 전문가의 상담을 받아 보시는 것을 권장드려요. 온라인 상담도 가능하니 참고해 주세요.
5. **예방 조치 유지하기**: 백신 접종, 마스크 착용, 손 씻기 등 코로나19 예방을 위한 기본 규칙을 계속 유지해주세요. 이를 통해 자신과 주변 사람들의 건강을 보호할 수 있어요.
6. **지속적인 정보 업데이트**: 질병관리청이나 관련 정부 기관에서 제공하는 최신 정보를 수시로 확인해 보세요. 이렇게 하면 자신에게 맞는 대응을 할 수 있는 정보를 얻을 수 있어요.
코로나19는 개인마다 다르게 영향을 미칠 수 있으니, 개인의 상황에 맞게 적절히 대응할 필요가 있어요. 항상 전문가와의 상의를 통한 접근이 최선이라고 할 수 있겠죠. 더 궁금한 점이 있으면 언제든지 물어봐 주세요!
```
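A minimal usage sketch for this template (the bf16/`device_map` placement and decoding settings are assumptions, not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "kurugai/Kurugai-EEVE-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")  # device_map needs `accelerate`

instruction = "코로나19에 걸렸는데 어떻게 대처해야해?"
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Strip the prompt tokens and print only the generated response
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```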
# Acknowledgements
This model was created with the support of my wife. I would like to thank her. |
davidkim205/komt-solar-10.7b-sft-v3 | davidkim205 | "2024-03-08T21:20:42Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-03T13:58:11Z" | ---
license: cc-by-4.0
language:
- ko
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **base model** : Edentns/DataVortexS-10.7B-dpo-v1.11
|
davidkim205/komt-solar-10.7b-sft-v5 | davidkim205 | "2024-03-12T04:35:17Z" | 1,110 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T04:46:33Z" | ---
license: cc-by-4.0
language:
- ko
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt (will be updated soon)
* **base model** : Edentns/DataVortexS-10.7B-dpo-v1.11
* **dataset** : comp-341k (will be updated soon)
|
chlee10/T3Q-ko-solar-sft-v1.0 | chlee10 | "2024-03-18T18:05:14Z" | 1,110 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"T3Q-ko-solar-sft-v1.0",
"kyujinpy/KoCommercial-NoSSL",
"conversational",
"en",
"dataset:kyujinpy/KoCommercial-NoSSL",
"base_model:davidkim205/nox-solar-10.7b-v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-18T17:06:59Z" | ---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- T3Q-ko-solar-sft-v1.0
- kyujinpy/KoCommercial-NoSSL
base_model: davidkim205/nox-solar-10.7b-v4
datasets:
- kyujinpy/KoCommercial-NoSSL
model-index:
- name: T3Q-ko-solar-sft-v1.0
results: []
---
Update @ 2024.03.19
## T3Q-ko-solar-sft-v1.0
This model is an SFT fine-tuned version of davidkim205/nox-solar-10.7b-v4.
**Model Developers** Chihoon Lee(chlee10), T3Q
## Training hyperparameters
The following hyperparameters were used during training:
```python
# Hyperparameters related to the dataset and number of training passes
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch
# Hyperparameters for the training procedure
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06 # warmup_steps = 100
learning_rate = 4e-4
optimizer = 'adamw_torch'
weight_decay = 0.01
max_grad_norm = 1.0
# LoRA config (QLoRA); see the LoraConfig sketch after this block
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["gate_proj", "down_proj", "up_proj"]
# Options for the input values produced by the tokenizer
train_on_inputs = False
add_eos_token = False
# NEFTune params
noise_alpha: int = 5
```
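As an illustrative sketch (not the actual training code), the LoRA settings above map onto a `peft` `LoraConfig` roughly as follows:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                  # lora_r
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```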
|
URP/urllm-ko_en-2.7b | URP | "2024-04-12T09:07:20Z" | 1,110 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"meta",
"llama-2",
"llama-2-ko-en",
"sheared llama",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-11T02:35:38Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
pipeline_tag: text-generation
tags:
- meta
- llama-2
- llama-2-ko-en
- sheared llama
---
## Model Details
**Model Architecture:**
urLLM-KO_EN-2.7B is an auto-regressive language model that leverages an optimized transformer architecture derived from princeton-nlp/Sheared-LLaMA-2.7B.
**Training Corpus**
The model was trained using selected datasets from Modu Corpus, Korean Wikipedia and Kaggle English News (approximately 36 GB in total).
**Vocab Expansion**
The expanded vocab size is 51385.
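The added token list is not published here, but vocabulary expansion of this kind typically follows a pattern like the sketch below (the tokens shown are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "princeton-nlp/Sheared-LLaMA-2.7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

new_korean_tokens = ["안녕하세요", "감사합니다"]  # placeholders; the real added vocabulary is far larger
tokenizer.add_tokens(new_korean_tokens)

# Grow the embedding and LM-head matrices to cover the new ids,
# then continue pre-training on the Korean/English corpus.
model.resize_token_embeddings(len(tokenizer))
print(len(tokenizer))  # would reach 51385 with the full expanded vocabulary
```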
**Model Card Contact**
For errors or additional questions about details in this model card, contact [email protected]. |
JY623/KoSOLAR-10.7B-merge-v3.4 | JY623 | "2024-04-16T09:04:46Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:JY623/KoSOLAR-10.7B-merge-v3.0",
"base_model:JY623/KoSOLAR-10.7B-merge-v3.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-16T08:55:56Z" | ---
base_model:
- JY623/KoSOLAR-10.7B-merge-v3.0
- JY623/KoSOLAR-10.7B-merge-v3.3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# slerp_test3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [JY623/KoSOLAR-10.7B-merge-v3.0](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v3.0)
* [JY623/KoSOLAR-10.7B-merge-v3.3](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v3.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: JY623/KoSOLAR-10.7B-merge-v3.0
layer_range: [0, 48]
- model: JY623/KoSOLAR-10.7B-merge-v3.3
layer_range: [0, 48]
merge_method: slerp
base_model: JY623/KoSOLAR-10.7B-merge-v3.0
parameters:
t: 0.2
dtype: bfloat16
```
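For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them, and `t: 0.2` keeps the result close to the base model. Below is an illustrative sketch of the underlying formula (not mergekit's actual implementation):
```python
import numpy as np

def slerp(w_a, w_b, t, eps=1e-8):
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a, b = w_a.flatten(), w_b.flatten()
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return ((1 - t) * a + t * b).reshape(w_a.shape)
    so = np.sin(omega)
    return ((np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b).reshape(w_a.shape)

# t = 0.2 stays close to the base model (KoSOLAR-10.7B-merge-v3.0)
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.2)
```
|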
chlee10/T3Q-LLM3-Llama3-sft1.0-dpo1.0 | chlee10 | "2024-05-02T10:00:08Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-02T09:47:54Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CDEH/flan-t5-large | CDEH | "2024-06-03T18:56:36Z" | 1,110 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-03T18:52:40Z" | Entry not found |
BeaverAI/Llama-3SOME-8B-v2c-GGUF | BeaverAI | "2024-06-06T03:51:10Z" | 1,110 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T03:48:50Z" | Entry not found |
Salesforce/blip2-opt-6.7b-coco | Salesforce | "2024-03-31T10:07:32Z" | 1,109 | 28 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | "2023-02-07T15:18:25Z" | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---
# BLIP-2, OPT-6.7b, fine-tuned on COCO
BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
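A minimal captioning sketch in the spirit of the linked documentation (the example image URL and generation settings are illustrative):
```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example COCO image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
|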
songlab/gpn-brassicales | songlab | "2024-01-27T18:17:35Z" | 1,109 | 3 | transformers | [
"transformers",
"pytorch",
"ConvNet",
"fill-mask",
"dna",
"language-model",
"variant-effect-prediction",
"biology",
"genomics",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-02-24T00:09:18Z" | ---
license: mit
tags:
- dna
- language-model
- variant-effect-prediction
- biology
- genomics
---
# GPN trained on *Arabidopsis thaliana* and 7 other Brassicales
See https://github.com/songlab-cal/gpn for more details.
## Some hparams:
* repeat_weight: 0.1
* lr: 120k steps at 1e-3, followed by a 30k-step cosine decay |
Nondzu/Mistral-7B-code-16k-qlora | Nondzu | "2023-10-30T12:45:22Z" | 1,109 | 26 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T15:19:21Z" | ---
license: apache-2.0
---
# Mistral-7B-code-16k-qlora
I'm excited to announce the release of a new model called Mistral-7B-code-16k-qlora. This small and fast model shows a lot of promise for supporting coding or acting as a copilot. I'm currently looking for people to help me test it out!
## Additional Information
This model was trained on 3x RTX 3090 GPUs in my homelab, using around 65 kWh at approximately 23 cents per kWh, which works out to roughly $15 of electricity.
## Quantised:
1. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GPTQ
2. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-AWQ
3. https://huggingface.co/TheBloke/Mistral-7B-Code-16K-qlora-GGUF
## Download by qBittorrent:
#### Torrent file: https://github.com/Nondzu/LlamaTor/blob/torrents/torrents/Nondzu_Mistral-7B-code-16k-qlora.torrent
## Dataset:
nickrosh/Evol-Instruct-Code-80k-v1
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## eval plus
Human eval plus: https://github.com/evalplus/evalplus
```
Nondzu mistral-7b-code
Base
{'pass@1': 0.3353658536585366}
Base + Extra
{'pass@1': 0.2804878048780488}
```
To compare, here is the original Mistral model tested on the same machine:
```
Mistral 7b
Base
{'pass@1': 0.2926829268292683}
Base + Extra
{'pass@1': 0.24390243902439024}
```
## Settings:
```
base_model: mistralai/Mistral-7B-Instruct-v0.1
base_model_config: mistralai/Mistral-7B-Instruct-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: nickrosh/Evol-Instruct-Code-80k-v1
type: oasst
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./Mistral-7B-Evol-Instruct-16k-test11
adapter: qlora
lora_model_dir:
# 16384 8192 4096 2048
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: mistral-code
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 8
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
# deepspeed:
deepspeed: deepspeed/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```

Check my other projects:
https://github.com/Nondzu/LlamaTor |
Weyaxi/HelpSteer-filtered-neural-chat-7b-v3-1-7B | Weyaxi | "2023-11-29T06:18:07Z" | 1,109 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-28T15:23:19Z" | ---
license: apache-2.0
---
# HelpSteer-filtered-neural-chat-7b-v3-1-7B
HelpSteer-filtered-neural-chat-7b-v3-1-7B is a merge of [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) and [Weyaxi/HelpSteer-filtered-7B-Lora](https://hf.co/Weyaxi/HelpSteer-filtered-7B-Lora)
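The merge script is not included in the card, but combining a base model with a LoRA adapter is typically done along these lines with `peft` (an illustrative sketch; the output path is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Intel/neural-chat-7b-v3-1")
merged = PeftModel.from_pretrained(base, "Weyaxi/HelpSteer-filtered-7B-Lora").merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-1")
merged.save_pretrained("HelpSteer-filtered-neural-chat-7b-v3-1-7B")  # assumed output path
tokenizer.save_pretrained("HelpSteer-filtered-neural-chat-7b-v3-1-7B")
```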
|
kimjaewon/open-llama-2-ko-7b-kullm | kimjaewon | "2024-01-02T05:17:18Z" | 1,109 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T05:07:19Z" | Entry not found |