modelId
stringlengths 5
122
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC] | downloads
int64 0
738M
| likes
int64 0
11k
| library_name
stringclasses 245
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 48
values | createdAt
timestamp[us, tz=UTC] | card
stringlengths 1
901k
|
---|---|---|---|---|---|---|---|---|---|
facebook/mms-lid-512 | facebook | 2023-06-13T10:16:19Z | 464 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"mms",
"ab",
"af",
"ak",
"am",
"ar",
"as",
"av",
"ay",
"az",
"ba",
"bm",
"be",
"bn",
"bi",
"bo",
"sh",
"br",
"bg",
"ca",
"cs",
"ce",
"cv",
"ku",
"cy",
"da",
"de",
"dv",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"fj",
"fi",
"fr",
"fy",
"ff",
"ga",
"gl",
"gn",
"gu",
"zh",
"ht",
"ha",
"he",
"hi",
"hu",
"hy",
"ig",
"ia",
"ms",
"is",
"it",
"jv",
"ja",
"kn",
"ka",
"kk",
"kr",
"km",
"ki",
"rw",
"ky",
"ko",
"kv",
"lo",
"la",
"lv",
"ln",
"lt",
"lb",
"lg",
"mh",
"ml",
"mr",
"mk",
"mg",
"mt",
"mn",
"mi",
"my",
"nl",
"no",
"ne",
"ny",
"oc",
"om",
"or",
"os",
"pa",
"pl",
"pt",
"ps",
"qu",
"ro",
"rn",
"ru",
"sg",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"es",
"sq",
"su",
"sv",
"sw",
"ta",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"ts",
"tr",
"uk",
"vi",
"wo",
"xh",
"yo",
"zu",
"za",
"dataset:google/fleurs",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-06-13T08:59:08Z | ---
tags:
- mms
language:
- ab
- af
- ak
- am
- ar
- as
- av
- ay
- az
- ba
- bm
- be
- bn
- bi
- bo
- sh
- br
- bg
- ca
- cs
- ce
- cv
- ku
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fr
- fy
- ff
- ga
- gl
- gn
- gu
- zh
- ht
- ha
- he
- hi
- sh
- hu
- hy
- ig
- ia
- ms
- is
- it
- jv
- ja
- kn
- ka
- kk
- kr
- km
- ki
- rw
- ky
- ko
- kv
- lo
- la
- lv
- ln
- lt
- lb
- lg
- mh
- ml
- mr
- ms
- mk
- mg
- mt
- mn
- mi
- my
- zh
- nl
- 'no'
- 'no'
- ne
- ny
- oc
- om
- or
- os
- pa
- pl
- pt
- ms
- ps
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- ro
- rn
- ru
- sg
- sk
- sl
- sm
- sn
- sd
- so
- es
- sq
- su
- sv
- sw
- ta
- tt
- te
- tg
- tl
- th
- ti
- ts
- tr
- uk
- ms
- vi
- wo
- xh
- ms
- yo
- ms
- zu
- za
license: cc-by-nc-4.0
datasets:
- google/fleurs
metrics:
- acc
---
# Massively Multilingual Speech (MMS) - Finetuned LID
This checkpoint is a model fine-tuned for speech language identification (LID) and part of Facebook's [Massive Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/).
This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and classifies raw audio input to a probability distribution over 512 output classes (each class representing a language).
The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 512 languages.
## Table Of Content
- [Example](#example)
- [Supported Languages](#supported-languages)
- [Model details](#model-details)
- [Additional links](#additional-links)
## Example
This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to identify
the spoken language of an audio. It can recognize the [following 512 languages](#supported-languages).
Let's look at a simple example.
First, we install transformers and some other libraries
```
pip install torch accelerate torchaudio datasets
pip install --upgrade transformers
````
**Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version
is not yet available [on PyPI](https://pypi.org/project/transformers/) make sure to install `transformers` from
source:
```
pip install git+https://github.com/huggingface/transformers.git
```
Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled to 16000 kHz.
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-512"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data, pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition)
```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```py
processor.id2label.values()
```
For more details, about the architecture please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
## Supported Languages
This model supports 512 languages. Unclick the following to toogle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3).
You can find more details about the languages and their ISO 649-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details>
<summary>Click to toggle</summary>
- ara
- cmn
- eng
- spa
- fra
- mlg
- swe
- por
- vie
- ful
- sun
- asm
- ben
- zlm
- kor
- ind
- hin
- tuk
- urd
- aze
- slv
- mon
- hau
- tel
- swh
- bod
- rus
- tur
- heb
- mar
- som
- tgl
- tat
- tha
- cat
- ron
- mal
- bel
- pol
- yor
- nld
- bul
- hat
- afr
- isl
- amh
- tam
- hun
- hrv
- lit
- cym
- fas
- mkd
- ell
- bos
- deu
- sqi
- jav
- kmr
- nob
- uzb
- snd
- lat
- nya
- grn
- mya
- orm
- lin
- hye
- yue
- pan
- jpn
- kaz
- npi
- kik
- kat
- guj
- kan
- tgk
- ukr
- ces
- lav
- bak
- khm
- cak
- fao
- glg
- ltz
- xog
- lao
- mlt
- sin
- aka
- sna
- che
- mam
- ita
- quc
- srp
- mri
- tuv
- nno
- pus
- eus
- kbp
- ory
- lug
- bre
- luo
- nhx
- slk
- ewe
- fin
- rif
- dan
- yid
- yao
- mos
- quh
- hne
- xon
- new
- quy
- est
- dyu
- ttq
- bam
- pse
- uig
- sck
- ngl
- tso
- mup
- dga
- seh
- lis
- wal
- ctg
- bfz
- bxk
- ceb
- kru
- war
- khg
- bbc
- thl
- vmw
- zne
- sid
- tpi
- nym
- bgq
- bfy
- hlb
- teo
- fon
- kfx
- bfa
- mag
- ayr
- any
- mnk
- adx
- ava
- hyw
- san
- kek
- chv
- kri
- btx
- nhy
- dnj
- lon
- men
- ium
- nga
- nsu
- prk
- kir
- bom
- run
- hwc
- mnw
- ubl
- kin
- rkt
- xmm
- iba
- gux
- ses
- wsg
- tir
- gbm
- mai
- nyy
- nan
- nyn
- gog
- ngu
- hoc
- nyf
- sus
- bcc
- hak
- grt
- suk
- nij
- kaa
- bem
- rmy
- nus
- ach
- awa
- dip
- rim
- nhe
- pcm
- kde
- tem
- quz
- bba
- kbr
- taj
- dik
- dgo
- bgc
- xnr
- kac
- laj
- dag
- ktb
- mgh
- shn
- oci
- zyb
- alz
- wol
- guw
- nia
- bci
- sba
- kab
- nnb
- ilo
- mfe
- xpe
- bcl
- haw
- mad
- ljp
- gmv
- nyo
- kxm
- nod
- sag
- sas
- myx
- sgw
- mak
- kfy
- jam
- lgg
- nhi
- mey
- sgj
- hay
- pam
- heh
- nhw
- yua
- shi
- mrw
- hil
- pag
- cce
- npl
- ace
- kam
- min
- pko
- toi
- ncj
- umb
- hno
- ban
- syl
- bxg
- nse
- xho
- mkw
- nch
- mas
- bum
- mww
- epo
- tzm
- zul
- lrc
- ibo
- abk
- azz
- guz
- ksw
- lus
- ckb
- mer
- pov
- rhg
- knc
- tum
- nso
- bho
- ndc
- ijc
- qug
- lub
- srr
- mni
- zza
- dje
- tiv
- gle
- lua
- swk
- ada
- lic
- skr
- mfa
- bto
- unr
- hdy
- kea
- glk
- ast
- nup
- sat
- ktu
- bhb
- sgc
- dks
- ncl
- emk
- urh
- tsc
- idu
- igb
- its
- kng
- kmb
- tsn
- bin
- gom
- ven
- sef
- sco
- trp
- glv
- haq
- kha
- rmn
- sot
- sou
- gno
- igl
- efi
- nde
- rki
- kjg
- fan
- wci
- bjn
- pmy
- bqi
- ina
- hni
- the
- nuz
- ajg
- ymm
- fmu
- nyk
- snk
- esg
- thq
- pht
- wes
- pnb
- phr
- mui
- tkt
- bug
- mrr
- kas
- zgb
- lir
- vah
- ssw
- iii
- brx
- rwr
- kmc
- dib
- pcc
- zyn
- hea
- hms
- thr
- wbr
- bfb
- wtm
- blk
- dhd
- swv
- zzj
- niq
- mtr
- gju
- kjp
- haz
- shy
- nbl
- aii
- sjp
- bns
- brh
- msi
- tsg
- tcy
- kbl
- noe
- tyz
- ahr
- aar
- wuu
- kbd
- bca
- pwr
- hsn
- kua
- tdd
- bgp
- abs
- zlj
- ebo
- bra
- nhp
- tts
- zyj
- lmn
- cqd
- dcc
- cjk
- bfr
- bew
- arg
- drs
- chw
- bej
- bjj
- ibb
- tig
- nut
- jax
- tdg
- nlv
- pch
- fvr
- mlq
- kfr
- nhn
- tji
- hoj
- cpx
- cdo
- bgn
- btm
- trf
- daq
- max
- nba
- mut
- hnd
- ryu
- abr
- sop
- odk
- nap
- gbr
- czh
- vls
- gdx
- yaf
- sdh
- anw
- ttj
- nhg
- cgg
- ifm
- mdh
- scn
- lki
- luz
- stv
- kmz
- nds
- mtq
- knn
- mnp
- bar
- mzn
- gsw
- fry
</details>
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multi-Lingual Automatic Speech Recognition model
- **Language(s):** 512 languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16,000 kHz
- **Cite as:**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
|
artificialguybr/360Redmond | artificialguybr | 2023-08-04T23:23:37Z | 464 | 22 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:bigscience-openrail-m",
"region:us"
]
| text-to-image | 2023-07-28T17:26:16Z | ---
license: bigscience-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 360, 360 view
widget:
- text: 360 view
---
# 360Redmond

360Redmond is here!
I'm grateful for the GPU time from Redmond.AI that allowed me to finish this LORA!
This is a 360 VIEW/PANORAMA LORA fine-tuned on SD XL 1.0.
The LORA has a high capacity to generate 360 VIEW, 360, PANORAMA images in a wide variety of themes. It's a versatile LORA.
You need to use site like that to view the 360 Panorama:
360° Panorama Viewer Online (renderstuff.com)
I recommend gen in 1600x800 and then upscale 3x or 4x to have better quality.
You can use 360, 360 View to get better 360.
LORA is not perfect and sometimes needs more than one gen to create good images. I recommend simple prompts and not so complex scenes. LORA ends up making mistakes in very complex scenes (for example office interior or very dense forest).
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me in my twitter to know before all about new models:
https://twitter.com/artificialguybr/ |
facebook/mms-tts-kan | facebook | 2023-09-01T13:13:47Z | 464 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-09-01T13:13:26Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Kannada Text-to-Speech
This repository contains the **Kannada (kan)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each langauge.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-kan")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kan")
text = "some example text in the Kannada language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
TigerResearch/tigerbot-70b-chat-v2 | TigerResearch | 2023-12-11T10:23:56Z | 464 | 48 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-13T02:50:03Z | ---
license: apache-2.0
language:
- zh
- en
---
<div style="width: 100%;">
<p align="center" width="20%">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" width="20%", style="display: block; margin: auto;"></img>
</p>
</div>
<p align="center">
<font face="黑体" size=5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
💻<a href="https://github.com/TigerResearch/TigerBot" target="_blank">Github</a> • 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
# 快速开始
- 方法1,通过transformers使用
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 启动infer代码
```shell
python infer.py --model_path TigerResearch/tigerbot-70b-chat-v2
```
- 方法2:
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 安装git lfs: `git lfs install`
- 通过huggingface或modelscope平台下载权重
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2
git clone https://www.modelscope.cn/TigerResearch/tigerbot-70b-chat-v2.git
```
- 启动infer代码
```shell
python infer.py --model_path tigerbot-70b-chat-v2
```
------
# Quick Start
- Method 1, use through transformers
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Run infer script
```shell
python infer.py --model_path TigerResearch/tigerbot-70b-chat-v2
```
- Method 2:
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- install git lfs: `git lfs install`
- Download weights from huggingface or modelscope
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2
git clone https://www.modelscope.cn/TigerResearch/tigerbot-70b-chat-v2.git
```
- Run infer script
```shell
python infer.py --model_path tigerbot-70b-chat-v2
``` |
shaowenchen/longchat-7b-v1.5-32k-gguf | shaowenchen | 2023-09-15T13:37:00Z | 464 | 0 | null | [
"gguf",
"longchat",
"7b",
"32k",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
]
| text-generation | 2023-09-15T06:57:17Z | ---
inference: true
language:
- zh
- en
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/longchat-7b-v1.5-32k
model_name: longchat-7b-v1.5-32k
model_type: longchat
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- longchat
- 7b
- 32k
---
## Provided files
| Name | Quant method | Size |
| -------------------------------- | ------------ | ------ |
| longchat-7b-v1.5-32k.Q2_K.gguf | Q2_K | 5.1 GB |
| longchat-7b-v1.5-32k.Q3_K.gguf | Q3_K | 5.9 GB |
| longchat-7b-v1.5-32k.Q3_K_L.gguf | Q3_K_L | 6.5 GB |
| longchat-7b-v1.5-32k.Q3_K_S.gguf | Q3_K_S | 5.3 GB |
| longchat-7b-v1.5-32k.Q4_0.gguf | Q4_0 | 6.9 GB |
| longchat-7b-v1.5-32k.Q4_1.gguf | Q4_1 | 7.6 GB |
| longchat-7b-v1.5-32k.Q4_K.gguf | Q4_K | 7.3 GB |
| longchat-7b-v1.5-32k.Q4_K_S.gguf | Q4_K_S | 6.9 GB |
| longchat-7b-v1.5-32k.Q5_0.gguf | Q5_0 | 8.4 GB |
| longchat-7b-v1.5-32k.Q5_1.gguf | Q5_1 | 9.1 GB |
| longchat-7b-v1.5-32k.Q5_K.gguf | Q5_K | 8.6 GB |
| longchat-7b-v1.5-32k.Q5_K_S.gguf | Q5_K_S | 8.4 GB |
| longchat-7b-v1.5-32k.Q6_K.gguf | Q6_K | 9.9 GB |
| longchat-7b-v1.5-32k.Q8_0.gguf | Q8_0 | 13 GB |
| longchat-7b-v1.5-32k.gguf | full | 24 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
## Provided images
| Name | Quant method | Compressed Size |
| -------------------------------------------- | ------------ | --------------- |
| `shaowenchen/longchat-7b-v1.5-32k-gguf:Q2_K` | Q2_K | 2.88 GB |
| `shaowenchen/longchat-7b-v1.5-32k-gguf:Q3_K` | Q3_K | 3.3 GB |
| `shaowenchen/longchat-7b-v1.5-32k-gguf:Q4_K` | Q4_K | 4 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/longchat-7b-v1.5-32k-gguf:Q2_K
```
and you can view http://localhost:8000/docs to see the swagger UI.
|
Yntec/Trending | Yntec | 2023-10-15T10:17:02Z | 464 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Ciro_Negrogni",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-17T03:50:21Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Ciro_Negrogni
- MagicArt35
---
# Trending
AmovieX by MagicArt35 with the Photographic Trend LoRA by Ciro_Negrogni baked in. Third version of three with Photographic Trend's compositions.
First version: https://huggingface.co/Yntec/aPhotographicTrend
Second version with AmovieX's compositions: https://huggingface.co/Yntec/aMovieTrend
Sample and prompts:

Pretty Cute Girl Photorealistic, highly detailed, masterpiece, trending on ArtStation, sitting, Detailed Chibi Eyes, fantasy, beautiful detailed legs, streetwear, gorgeous detailed hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus.

Cartoon CUTE LITTLE baby, CHIBI, gorgeous detailed hair, looking, cute socks, holding pillow, skirt, Magazine ad, iconic, 1940, sharp focus. pencil art By KlaysMoji and Clay Mann and and leyendecker and Dave Rapoza.
Original pages:
https://civitai.com/models/98543 (Photographic Trend)
https://civitai.com/models/94687/photo-movie-x (AmovieX)
# Recipe
- Merge Photographic Trend LoRA to checkpoint 1.0
Model A:
AmovieX
OutPut:
PhotographicTrendAmovieX
- SuperMerger Weight sum Train Difference use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
PhotographicTrendAmovieX
Model B:
AmovieX
OutPut:
Trending |
YanaS/llama-2-7b-langchain-chat-GGUF | YanaS | 2023-09-27T11:17:32Z | 464 | 20 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"es",
"ru",
"de",
"pl",
"th",
"vi",
"sv",
"bn",
"da",
"he",
"it",
"fa",
"sk",
"id",
"nb",
"el",
"nl",
"hu",
"eu",
"zh",
"eo",
"ja",
"ca",
"cs",
"bg",
"fi",
"pt",
"tr",
"ro",
"ar",
"uk",
"gl",
"fr",
"ko",
"dataset:Photolens/oasst1-langchain-llama-2-formatted",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-27T10:47:15Z | ---
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
task_categories:
- conversational
license: llama2
datasets:
- Photolens/oasst1-langchain-llama-2-formatted
---
Model by [Photolens/llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat) converted in GGUF format.
## Model Overview
Model license: Llama-2<br>
This model is trained based on [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) model that is QLoRA finetuned on [Photolens/oasst1-langchain-llama-2-formatted](https://huggingface.co/datasets/Photolens/oasst1-langchain-llama-2-formatted) dataset.<br>
## Prompt Template: Llama-2
```
<s>[INST] Prompter Message [/INST] Assistant Message </s>
```
## Intended Use
Dataset that is used to finetune base model is optimized for langchain applications.<br> |
unsloth/zephyr-sft | unsloth | 2024-03-22T15:03:10Z | 464 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"zephyr",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-31T06:57:54Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- zephyr
- sft
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
Follows https://github.com/huggingface/alignment-handbook/issues/45#issuecomment-1845598205
From HuggingFaceH4/mistral-7b-sft-beta
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
|
OpenAlex/bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract | OpenAlex | 2024-05-13T18:19:52Z | 464 | 1 | transformers | [
"transformers",
"tf",
"safetensors",
"bert",
"text-classification",
"base_model:bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-01-06T12:31:28Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
model-index:
- name: >-
bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract
results: []
pipeline_tag: text-classification
widget:
- text: "<TITLE> From Louvain to Leiden: guaranteeing well-connected communities\n<ABSTRACT> Community detection is often used to understand the structure of large and complex networks. One of the most popular algorithms for uncovering community structure is the so-called Louvain algorithm. We show that this algorithm has a major defect that largely went unnoticed until now: the Louvain algorithm may yield arbitrarily badly connected communities. In the worst case, communities may even be disconnected, especially when running the algorithm iteratively. In our experimental analysis, we observe that up to 25% of the communities are badly connected and up to 16% are disconnected. To address this problem, we introduce the Leiden algorithm. We prove that the Leiden algorithm yields communities that are guaranteed to be connected. In addition, we prove that, when the Leiden algorithm is applied iteratively, it converges to a partition in which all subsets of all communities are locally optimally assigned. Furthermore, by relying on a fast local move approach, the Leiden algorithm runs faster than the Louvain algorithm. We demonstrate the performance of the Leiden algorithm for several benchmark and real-world networks. We find that the Leiden algorithm is faster than the Louvain algorithm and uncovers better partitions, in addition to providing explicit guarantees."
- text: "<TITLE> Cleavage of Structural Proteins during the Assembly of the Head of Bacteriophage T4"
- text: "<TITLE> NONE\n<ABSTRACT> Surface wave (SW) over-the-horizon (OTH) radars are not only widely used for ocean remote sensing, but they can also be exploited in integrated maritime surveillance systems. This paper represents the first part of the description of the statistical and spectral analysis performed on sea backscattered signals recorded by the oceanographic WEllen RAdar (WERA) system. Data were collected on May 13th 2008 in the Bay of Brest, France. The data statistical analysis, after beamforming, shows that for near range cells the signal amplitude fits well the Rayleigh distribution, while for far cells the data show a more pronounced heavy-tailed behavior. The causes can be traced in man-made (i.e. radio communications) and/or natural (i.e. reflections of the transmitted signal through the ionosphere layers, meteor trails) interferences."
---
# bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on a labeled dataset provided by [CWTS](https://www.cwts.nl/) (for labeled data: [CWTS Labeled Data](https://zenodo.org/records/10560276)). To see how CWTS labeled the data, please check out the following blog post: [An open approach for classifying research publications](https://www.leidenmadtrics.nl/articles/an-open-approach-for-classifying-research-publications)
It was made with the purpose of being able to classify scholarly work with a fixed set of well-defined topics. This is NOT the full model being used to tag [OpenAlex](https://openalex.org/) works with a topic. For that, check out the following github repo:
[OpenAlex Topic Classification](https://github.com/ourresearch/openalex-topic-classification)
That repository will also contain information about text preprocessing, modeling, testing, and deployment.
## Model description
The model was trained using the following input data format (so it is recommended the data be in this format as well):
Using both title and abstract:
"\<TITLE\> {insert-processed-title-here}\n\<ABSTRACT\> {insert-processed-abstract-here}"
Using only title:
"\<TITLE\> {insert-processed-title-here}"
Using only abstract:
"\<TITLE\> NONE\n\<ABSTRACT\> {insert-processed-abstract-here}"
The quickest way to use this model in Python is with the following code (assuming you have the transformers library installed):
```
from transformers import pipeline
title = "{insert-processed-title-here}"
abstract = "{insert-processed-abstract-here}"
classifier = \
pipeline(model="OpenAlex/bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract", top_k=10, "truncation":True,"max_length":512)
classifier(f"""<TITLE> {title}\n<ABSTRACT> {abstract}""")
```
This will return the top 10 outputs from the model. There will be 2 pieces of information here:
1. Full Topic Label: Made up of both the [OpenAlex](https://openalex.org/) topic ID and the topic label (ex: "1048: Ecology and Evolution of Viruses in Ecosystems")
2. Model Score: Model's confidence in the topic (ex: "0.364")
## Intended uses & limitations
The model is intended to be used as part of a larger model that also incorporates journal information and citation features. However, this model is good if you want to use it for quickly generating a topic based only on a title/abstract.
Since this model was fine-tuned on a BERT model, all of the biases seen in that model will most likely show up in this model as well.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 6e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 6e-05, 'decay_steps': 335420, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 500, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.8075 | 3.6686 | 0.3839 | 0 |
| 3.4867 | 3.3360 | 0.4337 | 1 |
| 3.1865 | 3.2005 | 0.4556 | 2 |
| 2.9969 | 3.1379 | 0.4675 | 3 |
| 2.8489 | 3.0900 | 0.4746 | 4 |
| 2.7212 | 3.0744 | 0.4799 | 5 |
| 2.6035 | 3.0660 | 0.4831 | 6 |
| 2.4942 | 3.0737 | 0.4846 | 7 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.13.0
- Datasets 2.15.0
- Tokenizers 0.15.0 |
TheBloke/Tess-34B-v1.5b-GGUF | TheBloke | 2024-01-28T20:57:45Z | 464 | 5 | transformers | [
"transformers",
"gguf",
"yi",
"base_model:migtissera/Tess-34B-v1.5b",
"license:other",
"region:us"
]
| null | 2024-01-28T20:04:19Z | ---
base_model: migtissera/Tess-34B-v1.5b
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-34b
model_creator: Migel Tissera
model_name: Tess 34B V1.5B
model_type: yi
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess 34B V1.5B - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess 34B V1.5B](https://huggingface.co/migtissera/Tess-34B-v1.5b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess 34B V1.5B](https://huggingface.co/migtissera/Tess-34B-v1.5b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-34B-v1.5b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-34B-v1.5b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-34b-v1.5b.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q2_K.gguf) | Q2_K | 2 | 12.83 GB| 15.33 GB | significant quality loss - not recommended for most purposes |
| [tess-34b-v1.5b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [tess-34b-v1.5b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.65 GB| 19.15 GB | very small, high quality loss |
| [tess-34b-v1.5b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [tess-34b-v1.5b.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-34b-v1.5b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.60 GB| 22.10 GB | small, greater quality loss |
| [tess-34b-v1.5b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [tess-34b-v1.5b.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-34b-v1.5b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [tess-34b-v1.5b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [tess-34b-v1.5b.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [tess-34b-v1.5b.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.5b-GGUF/blob/main/tess-34b-v1.5b.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-34B-v1.5b-GGUF and below it, a specific filename to download, such as: tess-34b-v1.5b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-34B-v1.5b-GGUF tess-34b-v1.5b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-34B-v1.5b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-34B-v1.5b-GGUF tess-34b-v1.5b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tess-34b-v1.5b.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./tess-34b-v1.5b.Q4_K_M.gguf", # Download the model file first
n_ctx=200000, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tess-34b-v1.5b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess 34B V1.5B
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<!-- original-model-card end -->
|
HiTZ/latxa-7b-v1.1 | HiTZ | 2024-07-02T08:14:43Z | 464 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"eu",
"en",
"dataset:HiTZ/latxa-corpus-v1.1",
"arxiv:2403.20266",
"arxiv:2307.09288",
"arxiv:2308.16884",
"arxiv:2112.10668",
"arxiv:1910.09700",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-16T14:40:47Z | ---
license: llama2
datasets:
- HiTZ/latxa-corpus-v1.1
language:
- eu
- en
metrics:
- accuracy
- f1
- perplexity
pipeline_tag: text-generation
model-index:
- name: Latxa-7b-v1.1
results:
- task:
type: multiple-choice
dataset:
name: xstory_cloze
type: XStory
metrics:
- name: Accuracy (0-shot)
type: Accuracy (0-shot)
value: 65.72
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: multiple-choice
dataset:
name: belebele
type: Belebele
metrics:
- name: Accuracy (5-shot)
type: Accuracy (5-shot)
value: 36.89
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: mix
dataset:
name: basque_glue
type: BasqueGLUE
metrics:
- name: Average scores (5-shot)
type: Average scores (5-shot)
value: 51.78
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: multiple_choice
dataset:
name: eus_proficiency
type: EusProficiency
metrics:
- name: Accuracy (5-shot)
type: Accuracy (5-shot)
value: 32.44
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: multiple_choice
dataset:
name: eus_reading
type: EusReading
metrics:
- name: Accuracy (5-shot)
type: Accuracy (5-shot)
value: 30.40
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: multiple_choice
dataset:
name: eus_trivia
type: EusTrivia
metrics:
- name: Accuracy (5-shot)
type: Accuracy (5-shot)
value: 44.37
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
- task:
type: multiple_choice
dataset:
name: eus_exams
type: EusExams
metrics:
- name: Accuracy (5-shot)
type: Accuracy (5-shot)
value: 34.20
source:
name: Paper
url: https://arxiv.org/abs/2403.20266
---
# **Model Card for Latxa 7b**
<p align="center">
<img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;">
</p>
<span style="color: red; font-weight: bold">IMPORTANT:</span> This model is outdated and made available publicly for reproducibility purposes only. Please utilize the most recent version found in [our HuggingFace collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).
We introduce Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which we continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens. In our extensive evaluation, Latxa outperforms all previous open models we compare to by a large margin. In addition, it is competitive with GPT-4 Turbo in language proficiency and understanding, despite lagging behind in reading comprehension and knowledgeintensive tasks. Both the Latxa family of models, as well as our new pretraining corpora and evaluation datasets, are publicly available under open licenses. Our suite enables reproducible research on methods to build LLMs for low-resource languages
- 📒 Blog Post: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://www.hitz.eus/en/node/340)
- 📖 Paper: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266)
- 💻 Code: [hitz-zentroa/latxa](https://github.com/hitz-zentroa/latxa)
# **Model Details**
## **Model Description**
Latxa is a family of Large Language Models (LLM) based on Meta’s [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to a random guesser. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained in [Latxa Corpus v1.1](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1), a high-quality Basque corpora.
The models are released in three sizes: 7B, 13B and 70B.
* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-7b
* **Contact:** [email protected]
## **Getting started**
Use the code below to get started with the model.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1.1")
text = "Euskara adimen artifizialera iritsi da!"
pipe(text, max_new_tokens=50, num_beams=5)
>> [
{
'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
}
]
```
# **Uses**
Latxa models are intended to be used with Basque data; for any other language the performance is not guaranteed. Same as the original, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/) which allows for commercial and research use.
## **Direct Use**
Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
## **Out-of-Scope Use**
The model was not fine-tuned to follow instructions or to work as a chat assistant, therefore, this kind of usage is not tested nor recommended.
# **Bias, Risks, and Limitations**
In an effort to alleviate the potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see Latxa-Corpus below). Still, the model is based on LLaMA models and can potentially carry the same bias, risk and limitations.
Please see the LLaMA’s _Ethical Considerations and Limitations_ for further information.
# **Training Details**
## **Training Data**
Our training corpus combines various existing datasets, as well as some new ones that we release with this work. We have prioritized quality over quantity when constructing our corpus, prioritizing high-quality data sources and applying a thorough deduplication and filtering process. In total, a 4.17B tokens corpus is used to train the model.
See more details in the [Latxa Corpus](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1) dataset card.
Additionally, 500K documents of English data randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset were also included to avoid catastrophic forgetting.
## **Training Procedure**
The training of Latxa was conducted using the [GPT-Neox](https://github.com/EleutherAI/gpt-neox) library. As infrastructure, we leveraged the CINECA HPC Leonardo computing cluster located in Italy, which is powered by 3456 nodes each containing 4x custom A100 64Gb GPUs. The models were trained for 10k steps with a sequence length of 4096 tokens and an effective batch size of 2M tokens, resulting in a total of 20B tokens (around 4 epochs). We used a cosine learning rate schedule, with a warm-up of 500 steps and decaying down to 3\% of the peak learning rate. We set up the peak learning rate to be 1e-4. All other hyperparameters follow ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)).
# **Evaluation**
We evaluated the models on zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the basque partitions of each dataset.
## **Testing Data, Factors & Metrics**
### **Testing Data**
* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
* Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele)
* **X-StoryCloze** ([Lin et al.](https://arxiv.org/abs/2112.10668)): XStoryCloze consists of the professionally translated version of the English StoryCloze dataset to 10 non-English languages. Story Cloze is a commonsense reasoning dataset which consists of choosing the correct ending to a four-sentence story. We evaluated the model in a 0-shot fashion.
* Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze)
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is a NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
* Data card:[ https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE).
* Tasks:
* **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque elections campaign.
* **VaxxStance**: Stance detection on tweets around the anti-vaccine movement.
* **BTHCv2**: Topic classification of news extracts with 12 categories.
* **EpecKorrefBin**: Correference detection task similar to WSC.
* **QNLIeu**: Q&A NLI built from the Basque Wikipedia.
* **WiCeu**: Basque Word-in-Context task.
* **EusProficiency** ([Etxaniz et al., 2024]()): EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.
* Data card: [https://huggingface.co/datasets/HiTZ/EusProficiency](https://huggingface.co/datasets/HiTZ/EusProficiency)
* **EusReading** ([Etxaniz et al., 2024]()): EusReading consists of 352 reading comprehension exercises (_irakurmena_) sourced from the same set of past EGA exams.
* Data card: [https://huggingface.co/datasets/HiTZ/EusReading](https://huggingface.co/datasets/HiTZ/EusReading)
* **EusTrivia** ([Etxaniz et al., 2024]()): EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3\% of the questions are elementary level (grades 3-6), while the rest are considered challenging.
* Data card: [https://huggingface.co/datasets/HiTZ/EusTrivia](https://huggingface.co/datasets/HiTZ/EusTrivia)
* **EusExams** ([Etxaniz et al., 2024]()): EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU).
* Data card: [https://huggingface.co/datasets/HiTZ/EusExams](https://huggingface.co/datasets/HiTZ/EusExams)
### **Metrics**
For most of the task we used Accuracy, as they are framed as Multiple Choice questions. For the rest, particularly task from BasqueGLUE benchmark, we have used the following:
* **Micro F1**: BEC2016-eu and BHTCv2
* **Macro F1**: VaxxStance (favor & against)
## **Results**
The model was evaluated using the LM Evaluation harness library from Eleuther AI.
In order to reproduce our results please follow the instructions in Latxa's [Github repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).
| Model | Size | XStory | Belebele | BasGLUE | EusProf | EusRead | EusTrivia | EusExams | Avg |
|------------------|------|--------|----------|---------|---------|---------|-----------|----------|-------|
| **Random** | | 50.00 | 25.00 | 37.50 | 25.00 | 25.83 | 26.55 | 25.00 | 30.70 |
|
| GPT 3.5 Turbo | n/a | -- | 57.33 | 48.62 | 31.24 | 36.65 | 46.71 | 42.42 | -- |
| GPT 4 Turbo | n/a | -- | **90.67**| **62.90**| **56.70**| **75.85**| **73.12** | **70.22**| -- |
|
| XGLM | 7B | 57.71 | 23.88 | 41.47 | 22.96 | 24.43 | 26.53 | 24.59 | 32.51 |
| BLOOM | 7B | 57.18 | 27.00 | 40.17 | 25.34 | 28.41 | 27.17 | 25.07 | 33.86 |
| Mistral | 7B | 51.09 | **38.89**| 39.22 | 25.01 | 29.26 | 34.58 | 32.15 | 35.94 |
| Llama 2 | 7B | 50.43 | 26.22 | 38.20 | 24.09 | 27.27 | 29.50 | 28.84 | 32.51 |
| **Latxa v1** | 7B | 63.13 | 35.67 | 50.26 | 28.19 | 27.27 | 40.17 | 34.18 | 39.84 |
| **Latxa v1.1** | 7B | **65.72**| 36.89 | **51.78**| **32.44**| **30.40**| **44.37** | **34.20**| **42.26** |
|
| mGPT | 13B | 55.39 | 25.00 | 37.56 | 25.00 | 24.15 | 27.17 | 25.73 | 32.14 |
| Llama 2 | 13B | 50.63 | 32.00 | 38.98 | 25.90 | 28.98 | 33.53 | 29.66 | 34.36 |
| **Latxa v1** | 13B | 65.85 | **53.56** | **54.49** | 41.19 | **40.06**| 51.14 | 42.92 | **49.95** |
| **Latxa v1.1** | 13B | **67.24**| 51.56 | 54.04 | **45.02**| 29.83 | **56.44** | **43.18**| 49.62 |
|
| Mixtral | 8x7B | 52.55 | 50.44 | 45.00 | 26.43 | 37.50 | 42.51 | 39.87 | 41.97 |
| Yi | 34B | 52.22 | 54.56 | 43.90 | 27.30 | 34.66 | 42.57 | 39.68 | 42.05 |
| Llama 2 | 70B | 51.62 | 33.56 | 42.55 | 24.16 | 27.84 | 38.43 | 33.08 | 35.47 |
| **Latxa v1** | 70B | 67.57 | **71.78** | 59.37 | 48.19 | 49.72 | 57.84 | 51.68 | 58.02 |
| **Latxa v1.1** | 70B | **69.76**| 64.89| **61.66**| **60.61**| **53.69**| **61.52** | **54.48**| **60.94** |
# **Environmental Impact**
Carbon emissions are estimated using the[ Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in[ Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
| Model | Size | Time (GPU Hours) | Carbon Emitted (kg CO2 eq) |
|------------|------|-------------------|----------------------------|
| Latxa v1.1 | 7B | 952.5h | 124.47kg |
| Latxa v1.1 | 13B | 2,518.0h | 329.06kg |
| Latxa v1.1 | 70B | 30,266.0h | 3,955.17kg |
| Total | - | 33,636.5h | 4,408,7kg |
* **Hardware Type:** HPC Cluster, 4x A100 64Gb nodes
* **Hours used:** 33,636.5h
* **Compute cluster:** CINECA HPC
* **Compute Region:** Italy
* **Carbon Emitted:** 4,408,7kg CO<sub>2</sub> eq
# **Acknowledgements**
This work has been partially supported by the Basque Government (IKER-GAITU project).
It has also been partially supported by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project with reference 2022/TL22/00215335.
The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.
# **Citation**
To cite our work, please use:
```bibtex
@misc{etxaniz2024latxa,
title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
year={2024},
eprint={2403.20266},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ondevicellm/phi-1_5_sft | ondevicellm | 2024-02-28T02:39:16Z | 464 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"custom_code",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:microsoft/phi-1_5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-27T21:13:45Z | ---
license: mit
base_model: microsoft/phi-1_5
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: phi-1_5_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5_sft
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 120
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3099 | 0.1 | 100 | 1.3398 |
| 1.3131 | 0.2 | 200 | 1.3159 |
| 1.3009 | 0.3 | 300 | 1.3046 |
| 1.2915 | 0.4 | 400 | 1.2967 |
| 1.2714 | 0.5 | 500 | 1.2906 |
| 1.2811 | 0.6 | 600 | 1.2854 |
| 1.2621 | 0.7 | 700 | 1.2807 |
| 1.2406 | 0.8 | 800 | 1.2767 |
| 1.2371 | 0.9 | 900 | 1.2731 |
| 1.2547 | 1.0 | 1000 | 1.2699 |
| 1.2085 | 1.1 | 1100 | 1.2693 |
| 1.2253 | 1.2 | 1200 | 1.2669 |
| 1.215 | 1.3 | 1300 | 1.2649 |
| 1.2103 | 1.4 | 1400 | 1.2630 |
| 1.2081 | 1.5 | 1500 | 1.2612 |
| 1.2033 | 1.6 | 1600 | 1.2597 |
| 1.2307 | 1.7 | 1700 | 1.2582 |
| 1.2038 | 1.8 | 1800 | 1.2568 |
| 1.2014 | 1.9 | 1900 | 1.2557 |
| 1.188 | 2.0 | 2000 | 1.2546 |
| 1.1473 | 2.1 | 2100 | 1.2563 |
| 1.1872 | 2.2 | 2200 | 1.2559 |
| 1.2086 | 2.3 | 2300 | 1.2553 |
| 1.1896 | 2.4 | 2400 | 1.2550 |
| 1.1733 | 2.5 | 2500 | 1.2548 |
| 1.1665 | 2.6 | 2600 | 1.2544 |
| 1.1499 | 2.7 | 2700 | 1.2543 |
| 1.1779 | 2.8 | 2800 | 1.2542 |
| 1.1746 | 2.9 | 2900 | 1.2542 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ycros/DonutHole-8x7B-GGUF | ycros | 2024-03-01T04:39:30Z | 464 | 4 | null | [
"gguf",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:KoboldAI/Mixtral-8x7B-Holodeck-v1",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-28T15:14:53Z | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- KoboldAI/Mixtral-8x7B-Holodeck-v1
- jondurbin/bagel-dpo-8x7b-v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: apache-2.0
---
# DonutHole-8x7B
_These are GGUF quantized versions of [DonutHole-8x7B](https://huggingface.co/ycros/DonutHole-8x7B)._
Bagel, Mixtral Instruct, Holodeck, LimaRP.
> What mysteries lie in the hole of a donut?
Good with Alpaca prompt formats, also works with Mistral format. See usage details below.

This is similar to [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B), but I've swapped out Sensualize for the new Holodeck.
I'm not sure if it's better or not yet, or how it does at higher (8k+) contexts just yet.
Similar sampler advice applies as for BMT: minP (0.07 - 0.3 to taste) -> temp (either dynatemp 0-4ish, or like a temp of 3-4 with a smoothing factor of around 2.5ish).
And yes, that's temp last. It does okay without rep pen up to a point, it doesn't seem to get into a complete jam, but it can start to repeat sentences,
so you'll probably need some, perhaps 1.02-1.05 at a 1024 range seems okayish.
(rep pen sucks, but there are better things coming).
I've mainly tested with LimaRP style Alpaca prompts (instruction/input/response), and briefly with Mistral's own format.
**Full credit to all the model and dataset authors, I am but a derp with compute and a yaml file.**
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [KoboldAI/Mixtral-8x7B-Holodeck-v1](https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1)
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
- model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
parameters:
density: 0.5
weight: 0.2
- model: KoboldAI/Mixtral-8x7B-Holodeck-v1
parameters:
density: 0.5
weight: 0.2
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
parameters:
density: 0.6
weight: 1.0
- model: jondurbin/bagel-dpo-8x7b-v0.2
parameters:
density: 0.6
weight: 0.5
merge_method: dare_ties
dtype: bfloat16
``` |
koesn/SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF | koesn | 2024-03-04T07:55:56Z | 464 | 3 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-04T06:00:24Z | ---
license: apache-2.0
---
# SOLAR-10.7B-Instruct-v1.0-uncensored
## Description
This repo contains GGUF format model files for SOLAR-10.7B-Instruct-v1.0-uncensored.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ------------------------------------------------- | ------- | ---- | --------- | -------------------------------- |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| solar-10.7b-instruct-v1.0-uncensored.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| solar-10.7b-instruct-v1.0-uncensored.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| solar-10.7b-instruct-v1.0-uncensored.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |
## Parameters
| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| -------------------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- |
| w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored | llama | LlamaForCausalLM | 10000.0 | null | 4096 |
# Original Model Card
---
license: apache-2.0
---
[upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) finetuned on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
|
urchade/gliner_small-v1 | urchade | 2024-04-10T10:11:46Z | 464 | 9 | gliner | [
"gliner",
"pytorch",
"token-classification",
"dataset:Universal-NER/Pile-NER-type",
"arxiv:2311.08526",
"license:cc-by-nc-4.0",
"region:us"
]
| token-classification | 2024-03-09T14:53:48Z | ---
license: cc-by-nc-4.0
datasets:
- Universal-NER/Pile-NER-type
pipeline_tag: token-classification
library_name: gliner
---
# Model Card for GLiNER-small
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.
This version has been trained on the Pile-NER dataset (Research purpose)
## Links
* Paper: https://arxiv.org/abs/2311.08526
* Repository: https://github.com/urchade/GLiNER
## Available models
| Release | Model Name | # of Parameters | Language | License |
| - | - | - | - | - |
| v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 |
| v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English <br> English <br> English | cc-by-nc-4.0 |
| v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English <br> English <br> English | apache-2.0 |
| v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) <br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English <br> English <br> English <br> Multilingual | apache-2.0 |
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner
```
## Usage
Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("urchade/gliner_small-v1")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
## Named Entity Recognition benchmark result

## Model Authors
The model authors are:
* [Urchade Zaratiana](https://huggingface.co/urchade)
* Nadi Tomeh
* Pierre Holat
* Thierry Charnois
## Citation
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mradermacher/MeliodasT3q-7B-GGUF | mradermacher | 2024-05-06T06:01:00Z | 464 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"en",
"base_model:automerger/MeliodasT3q-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-24T09:04:02Z | ---
base_model: automerger/MeliodasT3q-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
## About
static quants of https://huggingface.co/automerger/MeliodasT3q-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
backyardai/Cerebrum-1.0-8x7b-GGUF | backyardai | 2024-05-22T22:26:38Z | 464 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"text-generation",
"en",
"base_model:AetherResearch/Cerebrum-1.0-8x7b",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-30T00:13:22Z | ---
base_model: AetherResearch/Cerebrum-1.0-8x7b
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: brooketh
tags:
- text-generation-inference
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Cerebrum 1.0 8x7b
- **Creator:** [AetherResearch](https://huggingface.co/AetherResearch/)
- **Original:** [Cerebrum 1.0 8x7b](https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b)
- **Date Created:** 2024-03-21
- **Trained Context:** 4096 tokens
- **Description:** Mixtral 8x7b-based model created specifically for reasoning tasks, finetuned using the native chain of thought approach.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
mradermacher/penchant-7B-GGUF | mradermacher | 2024-05-06T05:24:39Z | 464 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:afoland/penchant-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-01T16:09:20Z | ---
base_model: afoland/penchant-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/afoland/penchant-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
martimfasantos/tinyllama-1.1b-sum-dpo-full_LR1e-7_2epochs | martimfasantos | 2024-06-06T02:44:44Z | 464 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:openai/summarize_from_feedback",
"base_model:martimfasantos/tinyllama-1.1b-sum-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-15T17:08:08Z | ---
license: apache-2.0
base_model: martimfasantos/tinyllama-1.1b-sum-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- openai/summarize_from_feedback
model-index:
- name: tinyllama-1.1b-sum-dpo-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-dpo-full
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-sum-sft-full](https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-sft-full) on the openai/summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Rewards/chosen: -0.4976
- Rewards/rejected: -0.6010
- Rewards/accuracies: 0.6194
- Rewards/margins: 0.1035
- Logps/rejected: -123.2810
- Logps/chosen: -108.4673
- Logits/rejected: -2.5516
- Logits/chosen: -2.5584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6932 | 0.0172 | 100 | 0.6932 | 0.0000 | 0.0001 | 0.4819 | -0.0001 | -63.1720 | -58.7099 | -3.1572 | -3.1629 |
| 0.6931 | 0.0345 | 200 | 0.6932 | 0.0000 | 0.0001 | 0.4893 | -0.0001 | -63.1716 | -58.7118 | -3.1576 | -3.1632 |
| 0.6932 | 0.0517 | 300 | 0.6932 | 0.0000 | 0.0001 | 0.4696 | -0.0001 | -63.1677 | -58.7096 | -3.1575 | -3.1631 |
| 0.6933 | 0.0689 | 400 | 0.6932 | 0.0002 | 0.0002 | 0.4844 | -0.0000 | -63.1572 | -58.6929 | -3.1574 | -3.1631 |
| 0.6931 | 0.0861 | 500 | 0.6931 | 0.0002 | 0.0002 | 0.5016 | 0.0000 | -63.1582 | -58.6892 | -3.1571 | -3.1628 |
| 0.6925 | 0.1034 | 600 | 0.6931 | 0.0004 | 0.0003 | 0.5158 | 0.0002 | -63.1507 | -58.6671 | -3.1566 | -3.1623 |
| 0.6927 | 0.1206 | 700 | 0.6931 | 0.0006 | 0.0004 | 0.5276 | 0.0002 | -63.1420 | -58.6550 | -3.1556 | -3.1612 |
| 0.6924 | 0.1378 | 800 | 0.6929 | 0.0010 | 0.0006 | 0.5509 | 0.0005 | -63.1244 | -58.6089 | -3.1546 | -3.1601 |
| 0.692 | 0.1551 | 900 | 0.6928 | 0.0014 | 0.0007 | 0.5534 | 0.0007 | -63.1085 | -58.5690 | -3.1524 | -3.1580 |
| 0.6924 | 0.1723 | 1000 | 0.6926 | 0.0018 | 0.0007 | 0.5660 | 0.0011 | -63.1097 | -58.5334 | -3.1494 | -3.1550 |
| 0.6913 | 0.1895 | 1100 | 0.6924 | 0.0021 | 0.0005 | 0.5737 | 0.0016 | -63.1303 | -58.5028 | -3.1458 | -3.1514 |
| 0.6912 | 0.2068 | 1200 | 0.6921 | 0.0022 | 0.0001 | 0.5795 | 0.0021 | -63.1677 | -58.4881 | -3.1407 | -3.1464 |
| 0.6911 | 0.2240 | 1300 | 0.6918 | 0.0017 | -0.0011 | 0.5901 | 0.0028 | -63.2892 | -58.5372 | -3.1358 | -3.1414 |
| 0.6871 | 0.2412 | 1400 | 0.6914 | 0.0006 | -0.0031 | 0.5785 | 0.0037 | -63.4895 | -58.6491 | -3.1300 | -3.1356 |
| 0.6866 | 0.2584 | 1500 | 0.6910 | -0.0015 | -0.0061 | 0.5750 | 0.0045 | -63.7853 | -58.8661 | -3.1246 | -3.1303 |
| 0.6876 | 0.2757 | 1600 | 0.6907 | -0.0038 | -0.0091 | 0.5874 | 0.0053 | -64.0863 | -59.0928 | -3.1185 | -3.1241 |
| 0.6882 | 0.2929 | 1700 | 0.6903 | -0.0067 | -0.0126 | 0.5850 | 0.0060 | -64.4449 | -59.3800 | -3.1117 | -3.1173 |
| 0.6838 | 0.3101 | 1800 | 0.6900 | -0.0121 | -0.0190 | 0.5825 | 0.0069 | -65.0772 | -59.9201 | -3.1038 | -3.1095 |
| 0.6836 | 0.3274 | 1900 | 0.6895 | -0.0157 | -0.0235 | 0.5883 | 0.0078 | -65.5277 | -60.2801 | -3.0980 | -3.1037 |
| 0.685 | 0.3446 | 2000 | 0.6889 | -0.0227 | -0.0319 | 0.5897 | 0.0092 | -66.3702 | -60.9847 | -3.0905 | -3.0962 |
| 0.6828 | 0.3618 | 2100 | 0.6883 | -0.0311 | -0.0418 | 0.5806 | 0.0107 | -67.3595 | -61.8209 | -3.0840 | -3.0897 |
| 0.6745 | 0.3790 | 2200 | 0.6876 | -0.0382 | -0.0504 | 0.5883 | 0.0123 | -68.2227 | -62.5273 | -3.0753 | -3.0811 |
| 0.6781 | 0.3963 | 2300 | 0.6872 | -0.0405 | -0.0537 | 0.5908 | 0.0131 | -68.5468 | -62.7638 | -3.0689 | -3.0745 |
| 0.6809 | 0.4135 | 2400 | 0.6866 | -0.0471 | -0.0615 | 0.5906 | 0.0144 | -69.3305 | -63.4208 | -3.0592 | -3.0649 |
| 0.6828 | 0.4307 | 2500 | 0.6862 | -0.0557 | -0.0713 | 0.5913 | 0.0156 | -70.3087 | -64.2813 | -3.0501 | -3.0558 |
| 0.6754 | 0.4480 | 2600 | 0.6856 | -0.0615 | -0.0783 | 0.5918 | 0.0168 | -71.0083 | -64.8584 | -3.0433 | -3.0490 |
| 0.6768 | 0.4652 | 2700 | 0.6851 | -0.0674 | -0.0853 | 0.5957 | 0.0180 | -71.7136 | -65.4475 | -3.0370 | -3.0427 |
| 0.6766 | 0.4824 | 2800 | 0.6846 | -0.0727 | -0.0919 | 0.5967 | 0.0192 | -72.3669 | -65.9771 | -3.0308 | -3.0365 |
| 0.6769 | 0.4997 | 2900 | 0.6843 | -0.0755 | -0.0954 | 0.6004 | 0.0199 | -72.7197 | -66.2619 | -3.0232 | -3.0289 |
| 0.6781 | 0.5169 | 3000 | 0.6839 | -0.0812 | -0.1022 | 0.6027 | 0.0210 | -73.3995 | -66.8329 | -3.0144 | -3.0201 |
| 0.67 | 0.5341 | 3100 | 0.6835 | -0.0822 | -0.1040 | 0.6004 | 0.0218 | -73.5753 | -66.9287 | -3.0095 | -3.0153 |
| 0.6718 | 0.5513 | 3200 | 0.6828 | -0.0939 | -0.1173 | 0.6015 | 0.0235 | -74.9148 | -68.1005 | -2.9982 | -3.0040 |
| 0.6724 | 0.5686 | 3300 | 0.6822 | -0.0999 | -0.1249 | 0.6050 | 0.0250 | -75.6694 | -68.7027 | -2.9851 | -2.9908 |
| 0.6625 | 0.5858 | 3400 | 0.6818 | -0.1009 | -0.1266 | 0.6090 | 0.0257 | -75.8440 | -68.8060 | -2.9762 | -2.9820 |
| 0.6742 | 0.6030 | 3500 | 0.6814 | -0.1071 | -0.1338 | 0.6083 | 0.0267 | -76.5617 | -69.4202 | -2.9687 | -2.9745 |
| 0.6722 | 0.6203 | 3600 | 0.6810 | -0.1126 | -0.1404 | 0.6099 | 0.0277 | -77.2155 | -69.9734 | -2.9597 | -2.9655 |
| 0.664 | 0.6375 | 3700 | 0.6803 | -0.1209 | -0.1502 | 0.6090 | 0.0293 | -78.2040 | -70.8018 | -2.9485 | -2.9543 |
| 0.6644 | 0.6547 | 3800 | 0.6795 | -0.1327 | -0.1641 | 0.6111 | 0.0314 | -79.5918 | -71.9851 | -2.9386 | -2.9444 |
| 0.6664 | 0.6720 | 3900 | 0.6786 | -0.1449 | -0.1784 | 0.6080 | 0.0335 | -81.0222 | -73.2044 | -2.9300 | -2.9358 |
| 0.6653 | 0.6892 | 4000 | 0.6781 | -0.1559 | -0.1909 | 0.6057 | 0.0350 | -82.2692 | -74.3040 | -2.9178 | -2.9236 |
| 0.6532 | 0.7064 | 4100 | 0.6776 | -0.1612 | -0.1975 | 0.6125 | 0.0363 | -82.9296 | -74.8363 | -2.9005 | -2.9064 |
| 0.6733 | 0.7236 | 4200 | 0.6769 | -0.1720 | -0.2098 | 0.6087 | 0.0378 | -84.1639 | -75.9119 | -2.8890 | -2.8949 |
| 0.6618 | 0.7409 | 4300 | 0.6764 | -0.1798 | -0.2189 | 0.6057 | 0.0391 | -85.0723 | -76.6940 | -2.8794 | -2.8853 |
| 0.6625 | 0.7581 | 4400 | 0.6757 | -0.1936 | -0.2347 | 0.6053 | 0.0411 | -86.6464 | -78.0713 | -2.8686 | -2.8745 |
| 0.6605 | 0.7753 | 4500 | 0.6746 | -0.2097 | -0.2535 | 0.6066 | 0.0439 | -88.5342 | -79.6776 | -2.8590 | -2.8649 |
| 0.6437 | 0.7926 | 4600 | 0.6737 | -0.2242 | -0.2703 | 0.6071 | 0.0461 | -90.2150 | -81.1344 | -2.8513 | -2.8573 |
| 0.6526 | 0.8098 | 4700 | 0.6727 | -0.2385 | -0.2872 | 0.6069 | 0.0487 | -91.9046 | -82.5646 | -2.8429 | -2.8489 |
| 0.6604 | 0.8270 | 4800 | 0.6721 | -0.2495 | -0.2999 | 0.6090 | 0.0504 | -93.1696 | -83.6594 | -2.8351 | -2.8410 |
| 0.6664 | 0.8442 | 4900 | 0.6712 | -0.2621 | -0.3148 | 0.6048 | 0.0526 | -94.6595 | -84.9266 | -2.8264 | -2.8324 |
| 0.6499 | 0.8615 | 5000 | 0.6707 | -0.2706 | -0.3247 | 0.5955 | 0.0541 | -95.6483 | -85.7703 | -2.8111 | -2.8172 |
| 0.6628 | 0.8787 | 5100 | 0.6697 | -0.2843 | -0.3411 | 0.5969 | 0.0568 | -97.2923 | -87.1431 | -2.8035 | -2.8094 |
| 0.6513 | 0.8959 | 5200 | 0.6693 | -0.2867 | -0.3444 | 0.5953 | 0.0577 | -97.6222 | -87.3824 | -2.7972 | -2.8031 |
| 0.6475 | 0.9132 | 5300 | 0.6692 | -0.2901 | -0.3484 | 0.5987 | 0.0583 | -98.0213 | -87.7248 | -2.7882 | -2.7943 |
| 0.6494 | 0.9304 | 5400 | 0.6687 | -0.2940 | -0.3536 | 0.6015 | 0.0596 | -98.5368 | -88.1090 | -2.7827 | -2.7887 |
| 0.6412 | 0.9476 | 5500 | 0.6682 | -0.3024 | -0.3635 | 0.5997 | 0.0610 | -99.5251 | -88.9533 | -2.7734 | -2.7794 |
| 0.6531 | 0.9649 | 5600 | 0.6680 | -0.2995 | -0.3610 | 0.6046 | 0.0615 | -99.2758 | -88.6585 | -2.7683 | -2.7743 |
| 0.652 | 0.9821 | 5700 | 0.6671 | -0.3121 | -0.3760 | 0.6041 | 0.0639 | -100.7801 | -89.9234 | -2.7604 | -2.7664 |
| 0.6355 | 0.9993 | 5800 | 0.6663 | -0.3272 | -0.3936 | 0.6057 | 0.0664 | -102.5409 | -91.4366 | -2.7489 | -2.7549 |
| 0.6362 | 1.0165 | 5900 | 0.6654 | -0.3504 | -0.4199 | 0.6043 | 0.0695 | -105.1658 | -93.7475 | -2.7329 | -2.7390 |
| 0.6587 | 1.0338 | 6000 | 0.6654 | -0.3453 | -0.4145 | 0.6076 | 0.0692 | -104.6326 | -93.2431 | -2.7260 | -2.7321 |
| 0.6337 | 1.0510 | 6100 | 0.6649 | -0.3492 | -0.4197 | 0.6078 | 0.0705 | -105.1470 | -93.6331 | -2.7177 | -2.7237 |
| 0.6372 | 1.0682 | 6200 | 0.6640 | -0.3675 | -0.4408 | 0.6090 | 0.0734 | -107.2651 | -95.4612 | -2.7083 | -2.7144 |
| 0.6555 | 1.0855 | 6300 | 0.6633 | -0.3808 | -0.4563 | 0.6111 | 0.0755 | -108.8140 | -96.7948 | -2.7009 | -2.7071 |
| 0.6406 | 1.1027 | 6400 | 0.6629 | -0.3843 | -0.4611 | 0.6108 | 0.0768 | -109.2905 | -97.1394 | -2.6941 | -2.7003 |
| 0.6445 | 1.1199 | 6500 | 0.6626 | -0.3894 | -0.4670 | 0.6097 | 0.0776 | -109.8768 | -97.6507 | -2.6860 | -2.6923 |
| 0.6438 | 1.1371 | 6600 | 0.6627 | -0.3907 | -0.4683 | 0.6073 | 0.0776 | -110.0129 | -97.7839 | -2.6814 | -2.6877 |
| 0.6411 | 1.1544 | 6700 | 0.6622 | -0.3996 | -0.4791 | 0.6122 | 0.0795 | -111.0866 | -98.6695 | -2.6729 | -2.6791 |
| 0.6224 | 1.1716 | 6800 | 0.6614 | -0.4163 | -0.4982 | 0.6115 | 0.0819 | -112.9988 | -100.3370 | -2.6625 | -2.6688 |
| 0.6437 | 1.1888 | 6900 | 0.6610 | -0.4232 | -0.5064 | 0.6106 | 0.0832 | -113.8220 | -101.0292 | -2.6554 | -2.6618 |
| 0.6268 | 1.2061 | 7000 | 0.6604 | -0.4419 | -0.5278 | 0.6090 | 0.0859 | -115.9616 | -102.9045 | -2.6490 | -2.6553 |
| 0.6303 | 1.2233 | 7100 | 0.6604 | -0.4379 | -0.5238 | 0.6129 | 0.0859 | -115.5604 | -102.5041 | -2.6443 | -2.6506 |
| 0.6251 | 1.2405 | 7200 | 0.6600 | -0.4437 | -0.5309 | 0.6101 | 0.0872 | -116.2726 | -103.0814 | -2.6383 | -2.6448 |
| 0.6531 | 1.2578 | 7300 | 0.6602 | -0.4339 | -0.5202 | 0.6125 | 0.0863 | -115.1998 | -102.0999 | -2.6366 | -2.6430 |
| 0.6456 | 1.2750 | 7400 | 0.6600 | -0.4313 | -0.5180 | 0.6125 | 0.0867 | -114.9813 | -101.8414 | -2.6345 | -2.6409 |
| 0.6455 | 1.2922 | 7500 | 0.6597 | -0.4307 | -0.5180 | 0.6148 | 0.0873 | -114.9807 | -101.7862 | -2.6292 | -2.6357 |
| 0.6762 | 1.3094 | 7600 | 0.6593 | -0.4392 | -0.5278 | 0.6118 | 0.0887 | -115.9649 | -102.6288 | -2.6216 | -2.6281 |
| 0.6365 | 1.3267 | 7700 | 0.6592 | -0.4402 | -0.5295 | 0.6157 | 0.0893 | -116.1288 | -102.7343 | -2.6172 | -2.6237 |
| 0.6211 | 1.3439 | 7800 | 0.6588 | -0.4484 | -0.5389 | 0.6194 | 0.0906 | -117.0741 | -103.5481 | -2.6115 | -2.6180 |
| 0.641 | 1.3611 | 7900 | 0.6581 | -0.4553 | -0.5479 | 0.6217 | 0.0926 | -117.9735 | -104.2409 | -2.6077 | -2.6143 |
| 0.6228 | 1.3784 | 8000 | 0.6578 | -0.4583 | -0.5520 | 0.6215 | 0.0937 | -118.3795 | -104.5455 | -2.6043 | -2.6109 |
| 0.641 | 1.3956 | 8100 | 0.6579 | -0.4658 | -0.5596 | 0.6178 | 0.0939 | -119.1444 | -105.2910 | -2.5997 | -2.6063 |
| 0.6504 | 1.4128 | 8200 | 0.6571 | -0.4707 | -0.5666 | 0.6213 | 0.0959 | -119.8413 | -105.7863 | -2.5974 | -2.6040 |
| 0.6472 | 1.4300 | 8300 | 0.6573 | -0.4661 | -0.5612 | 0.6217 | 0.0951 | -119.3045 | -105.3220 | -2.5953 | -2.6018 |
| 0.6298 | 1.4473 | 8400 | 0.6573 | -0.4609 | -0.5560 | 0.6206 | 0.0950 | -118.7768 | -104.8056 | -2.5928 | -2.5994 |
| 0.6207 | 1.4645 | 8500 | 0.6573 | -0.4579 | -0.5531 | 0.6180 | 0.0952 | -118.4887 | -104.5014 | -2.5885 | -2.5952 |
| 0.6661 | 1.4817 | 8600 | 0.6571 | -0.4639 | -0.5598 | 0.6204 | 0.0959 | -119.1632 | -105.1053 | -2.5846 | -2.5913 |
| 0.6475 | 1.4990 | 8700 | 0.6572 | -0.4570 | -0.5525 | 0.6190 | 0.0954 | -118.4251 | -104.4133 | -2.5846 | -2.5912 |
| 0.6476 | 1.5162 | 8800 | 0.6569 | -0.4604 | -0.5566 | 0.6194 | 0.0962 | -118.8439 | -104.7545 | -2.5816 | -2.5883 |
| 0.6336 | 1.5334 | 8900 | 0.6568 | -0.4692 | -0.5663 | 0.6190 | 0.0971 | -119.8081 | -105.6329 | -2.5772 | -2.5839 |
| 0.6282 | 1.5507 | 9000 | 0.6564 | -0.4708 | -0.5690 | 0.6187 | 0.0981 | -120.0761 | -105.7962 | -2.5754 | -2.5821 |
| 0.646 | 1.5679 | 9100 | 0.6565 | -0.4724 | -0.5704 | 0.6187 | 0.0980 | -120.2213 | -105.9529 | -2.5732 | -2.5799 |
| 0.6225 | 1.5851 | 9200 | 0.6563 | -0.4762 | -0.5749 | 0.6190 | 0.0987 | -120.6733 | -106.3303 | -2.5714 | -2.5781 |
| 0.6223 | 1.6023 | 9300 | 0.6562 | -0.4763 | -0.5753 | 0.6180 | 0.0990 | -120.7107 | -106.3383 | -2.5692 | -2.5759 |
| 0.6288 | 1.6196 | 9400 | 0.6559 | -0.4818 | -0.5819 | 0.6201 | 0.1001 | -121.3710 | -106.8921 | -2.5664 | -2.5731 |
| 0.6223 | 1.6368 | 9500 | 0.6557 | -0.4823 | -0.5828 | 0.6176 | 0.1005 | -121.4601 | -106.9374 | -2.5650 | -2.5717 |
| 0.6363 | 1.6540 | 9600 | 0.6556 | -0.4891 | -0.5902 | 0.6197 | 0.1011 | -122.2042 | -107.6243 | -2.5615 | -2.5683 |
| 0.6355 | 1.6713 | 9700 | 0.6556 | -0.4880 | -0.5892 | 0.6211 | 0.1012 | -122.1034 | -107.5130 | -2.5609 | -2.5677 |
| 0.6247 | 1.6885 | 9800 | 0.6555 | -0.4894 | -0.5910 | 0.6201 | 0.1015 | -122.2755 | -107.6543 | -2.5603 | -2.5670 |
| 0.5826 | 1.7057 | 9900 | 0.6554 | -0.4911 | -0.5929 | 0.6206 | 0.1019 | -122.4715 | -107.8182 | -2.5591 | -2.5659 |
| 0.6181 | 1.7229 | 10000 | 0.6553 | -0.4923 | -0.5945 | 0.6204 | 0.1022 | -122.6296 | -107.9373 | -2.5579 | -2.5647 |
| 0.6365 | 1.7402 | 10100 | 0.6553 | -0.4917 | -0.5938 | 0.6201 | 0.1022 | -122.5635 | -107.8778 | -2.5567 | -2.5635 |
| 0.6269 | 1.7574 | 10200 | 0.6552 | -0.4952 | -0.5977 | 0.6208 | 0.1025 | -122.9497 | -108.2321 | -2.5556 | -2.5624 |
| 0.6573 | 1.7746 | 10300 | 0.6553 | -0.4962 | -0.5988 | 0.6201 | 0.1026 | -123.0645 | -108.3347 | -2.5542 | -2.5610 |
| 0.6036 | 1.7919 | 10400 | 0.6552 | -0.4953 | -0.5980 | 0.6197 | 0.1027 | -122.9784 | -108.2400 | -2.5542 | -2.5610 |
| 0.6178 | 1.8091 | 10500 | 0.6549 | -0.4956 | -0.5990 | 0.6213 | 0.1034 | -123.0831 | -108.2757 | -2.5531 | -2.5598 |
| 0.6403 | 1.8263 | 10600 | 0.6551 | -0.4967 | -0.5996 | 0.6204 | 0.1030 | -123.1450 | -108.3809 | -2.5527 | -2.5594 |
| 0.6341 | 1.8436 | 10700 | 0.6550 | -0.4965 | -0.5997 | 0.6206 | 0.1032 | -123.1496 | -108.3595 | -2.5523 | -2.5590 |
| 0.627 | 1.8608 | 10800 | 0.6549 | -0.4971 | -0.6006 | 0.6211 | 0.1035 | -123.2409 | -108.4216 | -2.5521 | -2.5589 |
| 0.6335 | 1.8780 | 10900 | 0.6550 | -0.4974 | -0.6009 | 0.6201 | 0.1035 | -123.2728 | -108.4564 | -2.5523 | -2.5590 |
| 0.6262 | 1.8952 | 11000 | 0.6550 | -0.4971 | -0.6003 | 0.6201 | 0.1033 | -123.2126 | -108.4185 | -2.5520 | -2.5588 |
| 0.6311 | 1.9125 | 11100 | 0.6548 | -0.4971 | -0.6009 | 0.6211 | 0.1038 | -123.2688 | -108.4253 | -2.5521 | -2.5589 |
| 0.6239 | 1.9297 | 11200 | 0.6551 | -0.4971 | -0.6003 | 0.6201 | 0.1031 | -123.2061 | -108.4263 | -2.5516 | -2.5583 |
| 0.6629 | 1.9469 | 11300 | 0.6550 | -0.4970 | -0.6003 | 0.6206 | 0.1033 | -123.2066 | -108.4107 | -2.5518 | -2.5587 |
| 0.6308 | 1.9642 | 11400 | 0.6550 | -0.4972 | -0.6005 | 0.6197 | 0.1033 | -123.2305 | -108.4360 | -2.5518 | -2.5586 |
| 0.6532 | 1.9814 | 11500 | 0.6550 | -0.4972 | -0.6005 | 0.6197 | 0.1033 | -123.2317 | -108.4313 | -2.5517 | -2.5585 |
| 0.6257 | 1.9986 | 11600 | 0.6549 | -0.4976 | -0.6010 | 0.6194 | 0.1035 | -123.2810 | -108.4673 | -2.5516 | -2.5584 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
ukung/Nusantara-4b-Indo-Chat-GGUF | ukung | 2024-05-20T04:26:42Z | 464 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-20T02:43:48Z | ---
license: apache-2.0
---
|
mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF | mradermacher | 2024-05-27T02:46:23Z | 464 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:gbueno86/Meta-LLama-3-Cat-A-LLama-70b",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-24T06:10:40Z | ---
base_model: gbueno86/Meta-LLama-3-Cat-A-LLama-70b
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/gbueno86/Meta-LLama-3-Cat-A-LLama-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-LLama-3-Cat-A-LLama-70b-i1-GGUF/resolve/main/Meta-LLama-3-Cat-A-LLama-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/HarmoniousAnthea_v0.2-GGUF | mradermacher | 2024-05-25T14:23:15Z | 464 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:neovalle/HarmoniousAnthea_v0.2",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-25T13:28:37Z | ---
base_model: neovalle/HarmoniousAnthea_v0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/neovalle/HarmoniousAnthea_v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HarmoniousAnthea_v0.2-GGUF/resolve/main/HarmoniousAnthea_v0.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rash24ar/gpt-neo-plantuml | rash24ar | 2024-06-02T10:48:00Z | 464 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-02T10:47:12Z | Entry not found |
John6666/duchaiten-pony-real-alpha7-sdxl | John6666 | 2024-06-07T23:17:39Z | 464 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-07T23:12:20Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/477851/duchaiten-pony-real?modelVersionId=557601).
|
alvdansen/digital-manga-cuties | alvdansen | 2024-06-16T18:30:40Z | 464 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-16T18:30:28Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ' sweet fairy girl with butterfly wings, wearing a dress made of petals'
output:
url: images/ComfyUI_01828_.png
- text: >-
A phoenix with vibrant red and orange feathers, rising majestically from a
pile of ashes
output:
url: images/ComfyUI_01821_.png
- text: >-
A young woman with a messy bun and round glasses, wearing a cozy sweater and
leggings, sitting on a window seat with a book and a cup of tea, looking out
at the rainy street below
output:
url: images/ComfyUI_01806_.png
- text: >-
A young man with tousled brown hair and green eyes, wearing a casual hoodie
and jeans, sitting at a coffee shop with a laptop and a cup of coffee,
surrounded by cozy décor
output:
url: images/ComfyUI_01800_.png
- text: A woman with bright pink hair styled in a bob cut
output:
url: images/ComfyUI_01794_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Digital Manga Cuties
<Gallery />
## Model description
This model is a very bold and brightly colored model that responds really well to long and short prompts - shouldn't need negative prompts typically.
This is for fun and research only. If you would like to use the model commercially please contact me.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alvdansen/digital-manga-cuties/tree/main) them in the Files & versions tab.
|
octadion/llama3-8B-chat-v2.1-gguf | octadion | 2024-06-28T07:10:59Z | 464 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-28T07:06:48Z | Entry not found |
Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF | Carxofa85 | 2024-06-28T11:08:21Z | 464 | 2 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
]
| text-generation | 2024-06-28T11:07:58Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF --hf-file meta-llama-3-8b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF --hf-file meta-llama-3-8b-instruct-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF --hf-file meta-llama-3-8b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Carxofa85/Meta-Llama-3-8B-Instruct-IQ4_NL-GGUF --hf-file meta-llama-3-8b-instruct-iq4_nl-imat.gguf -c 2048
```
|
BigSalmon/SimplifyText | BigSalmon | 2021-10-14T00:41:11Z | 463 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | - All credit goes to https://huggingface.co/philippelaban/keep_it_simple.
- This is a copy of their repository for future training purposes.
- It is supposed to simplify text.
- Their model card gives instructions on how to use it. |
pysentimiento/robertuito-base-uncased | pysentimiento | 2023-05-22T14:06:59Z | 463 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"twitter",
"masked-lm",
"es",
"arxiv:2111.09453",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language:
- es
tags:
- twitter
- masked-lm
---
# robertuito-base-uncased
# RoBERTuito
## A pre-trained language model for social media text in Spanish
[**PAPER**](https://arxiv.org/abs/2111.09453)
[Github Repository](https://github.com/pysentimiento/robertuito)
[](https://colab.research.google.com/drive/1WcubR0kbqT289XupSnN5-STe7HafyKpf#scrollTo=SF-n4IdjnoYk)
*RoBERTuito* is a pre-trained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets. *RoBERTuito* comes in 3 flavors: cased, uncased, and uncased+deaccented.
We tested *RoBERTuito* on a benchmark of tasks involving user-generated text in Spanish. It outperforms other pre-trained language models for this language such as *BETO*, *BERTin* and *RoBERTa-BNE*. The 4 tasks selected for evaluation were: Hate Speech Detection (using SemEval 2019 Task 5, HatEval dataset), Sentiment and Emotion Analysis (using TASS 2020 datasets), and Irony detection (using IrosVa 2019 dataset).
| model | hate speech | sentiment analysis | emotion analysis | irony detection | score |
|:-------------------|:----------------|:---------------------|:-------------------|:-----------------|---------:|
| robertuito-uncased | 0.801 ± 0.010 | 0.707 ± 0.004 | 0.551 ± 0.011 | 0.736 ± 0.008 | 0.6987 |
| robertuito-deacc | 0.798 ± 0.008 | 0.702 ± 0.004 | 0.543 ± 0.015 | 0.740 ± 0.006 | 0.6958 |
| robertuito-cased | 0.790 ± 0.012 | 0.701 ± 0.012 | 0.519 ± 0.032 | 0.719 ± 0.023 | 0.6822 |
| roberta-bne | 0.766 ± 0.015 | 0.669 ± 0.006 | 0.533 ± 0.011 | 0.723 ± 0.017 | 0.6726 |
| bertin | 0.767 ± 0.005 | 0.665 ± 0.003 | 0.518 ± 0.012 | 0.716 ± 0.008 | 0.6666 |
| beto-cased | 0.768 ± 0.012 | 0.665 ± 0.004 | 0.521 ± 0.012 | 0.706 ± 0.007 | 0.6651 |
| beto-uncased | 0.757 ± 0.012 | 0.649 ± 0.005 | 0.521 ± 0.006 | 0.702 ± 0.008 | 0.6571 |
We release the pre-trained models on huggingface model hub:
- [RoBERTuito uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased)
- [RoBERTuito cased](https://huggingface.co/pysentimiento/robertuito-base-cased)
- [RoBERTuito deacc](https://huggingface.co/pysentimiento/robertuito-base-deacc)
## Masked LM
To test the masked LM, take into account that space is encoded inside SentencePiece's tokens. So, if you want to test
```
Este es un día<mask>
```
don't put a space between `día` and `<mask>`
## Usage
**IMPORTANT -- READ THIS FIRST**
*RoBERTuito* is not yet fully-integrated into `huggingface/transformers`. To use it, first install `pysentimiento`
```bash
pip install pysentimiento
```
and preprocess text using `pysentimiento.preprocessing.preprocess_tweet` before feeding it into the tokenizer
```python
from transformers import AutoTokenizer
from pysentimiento.preprocessing import preprocess_tweet
tokenizer = AutoTokenizer.from_pretrained('pysentimiento/robertuito-base-cased')
text = "Esto es un tweet estoy usando #Robertuito @pysentimiento 🤣"
preprocessed_text = preprocess_tweet(text, ha)
tokenizer.tokenize(preprocessed_text)
# ['<s>','▁Esto','▁es','▁un','▁tweet','▁estoy','▁usando','▁','▁hashtag','▁','▁ro','bert','uito','▁@usuario','▁','▁emoji','▁cara','▁revolviéndose','▁de','▁la','▁risa','▁emoji','</s>']
```
We are working on integrating this preprocessing step into a Tokenizer within `transformers` library
Check a text classification example in this notebook: [](https://colab.research.google.com/drive/1WcubR0kbqT289XupSnN5-STe7HafyKpf#scrollTo=SF-n4IdjnoYk)
## Citation
If you use *RoBERTuito*, please cite our paper:
```bibtex
@inproceedings{perez-etal-2022-robertuito,
title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
author = "P{\'e}rez, Juan Manuel and
Furman, Dami{\'a}n Ariel and
Alonso Alemany, Laura and
Luque, Franco M.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.785",
pages = "7235--7243",
abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}
```
|
EleutherAI/pythia-6.9b-deduped-v0 | EleutherAI | 2023-07-10T01:30:05Z | 463 | 20 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-10-18T03:04:37Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-6.9B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-6.9B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-6.9B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 13B | 12B | 11,846,072,320 | 11,327,027,200 |
</figure> |
roneneldan/TinyStories-Instruct-8M | roneneldan | 2023-05-17T22:08:01Z | 463 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-05-12T21:53:56Z | Entry not found |
team-lucid/hubert-base-korean | team-lucid | 2023-09-05T02:55:16Z | 463 | 18 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-05-29T12:00:30Z | ---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-base-korean
## Model Details
Hubert(Hidden-Unit BERT)는 Facebook에서 제안한 Speech Representation Learning 모델입니다.
Hubert는 기존의 음성 인식 모델과 달리, 음성 신호를 raw waveform에서 바로 학습하는 self-supervised learning 방식을 사용합니다.
이 연구는 구글의 TPU Research Cloud(TRC)를 통해 지원받은 Cloud TPU로 학습되었습니다.
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-base-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-base-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
## Training Details
### Training Data
해당 모델은 과학기술정보통신부의 재원으로 한국지능정보사회진흥원의 지원을 받아
구축된 [자유대화 음성(일반남여)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [다화자 음성합성 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), [방송 콘텐츠 대화체 음성인식 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463)
에서 약 4,000시간을 추출해 학습되었습니다.
### Training Procedure
[원 논문](https://arxiv.org/pdf/2106.07447.pdf)과 동일하게 MFCC 기반으로 Base 모델을 학습한 다음, 500 cluster로 k-means를 수행해 다시 Base와
Large 모델을 학습했습니다.
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| \\(Adam\beta_1\\) | 0.9 | 0.9 |
| \\(Adam\beta_2\\) | 0.99 | 0.99 | |
TheBloke/Synthia-34B-v1.2-GGUF | TheBloke | 2023-09-27T12:49:30Z | 463 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:migtissera/Synthia-34B-v1.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-16T11:29:01Z | ---
license: llama2
model_name: Synthia 34B v1.2
base_model: migtissera/Synthia-34B-v1.2
inference: false
model_creator: Migel Tissera
model_type: llama
prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful,
detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 34B v1.2 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 34B v1.2](https://huggingface.co/migtissera/Synthia-34B-v1.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Synthia 34B v1.2](https://huggingface.co/migtissera/Synthia-34B-v1.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-34B-v1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-34B-v1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia
```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [synthia-34b-v1.2.Q2_K.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [synthia-34b-v1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [synthia-34b-v1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [synthia-34b-v1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [synthia-34b-v1.2.Q4_0.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [synthia-34b-v1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [synthia-34b-v1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [synthia-34b-v1.2.Q5_0.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [synthia-34b-v1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [synthia-34b-v1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [synthia-34b-v1.2.Q6_K.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [synthia-34b-v1.2.Q8_0.gguf](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF/blob/main/synthia-34b-v1.2.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Synthia-34B-v1.2-GGUF and below it, a specific filename to download, such as: synthia-34b-v1.2.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Synthia-34B-v1.2-GGUF synthia-34b-v1.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Synthia-34B-v1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-34B-v1.2-GGUF synthia-34b-v1.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m synthia-34b-v1.2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Synthia-34B-v1.2-GGUF", model_file="synthia-34b-v1.2.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Synthia 34B v1.2
All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How can I jailbreak Android?
ASSISTANT:
```
<!-- original-model-card end -->
|
TheBloke/Dans-AdventurousWinds-7B-GGUF | TheBloke | 2023-10-04T11:47:08Z | 463 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"en",
"base_model:PocketDoc/Dans-AdventurousWinds-7b",
"text-generation-inference",
"region:us"
]
| null | 2023-10-04T11:31:40Z | ---
base_model: PocketDoc/Dans-AdventurousWinds-7b
inference: false
language:
- en
model_creator: PocketDoc Labs
model_name: Dans AdventurousWinds 7B
model_type: mistral
prompt_template: '[Genres: Science Fiction]
[Tags: humor, old school, sci fi]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting
ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
It is the year 2045. You are a young man in his twenties living in New York City.
Your father was an inventor who died when you were very small; your mother raised
you alone for many years until she remarried. Now you live with your stepfather,
but he doesn''t care much for you and has never given you any money to help support
yourself. You have no job and little hope of getting one because of your lack of
experience. However, you do have some unusual abilities that could be put to good
use if only you knew how...
> {prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dans AdventurousWinds 7B - GGUF
- Model creator: [PocketDoc Labs](https://huggingface.co/PocketDoc)
- Original model: [Dans AdventurousWinds 7B](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [PocketDoc Labs's Dans AdventurousWinds 7B](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF)
* [PocketDoc Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Adventure
```
[Genres: Science Fiction]
[Tags: humor, old school, sci fi]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
It is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...
> {prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dans-adventurouswinds-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [dans-adventurouswinds-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [dans-adventurouswinds-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [dans-adventurouswinds-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [dans-adventurouswinds-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dans-adventurouswinds-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [dans-adventurouswinds-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [dans-adventurouswinds-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dans-adventurouswinds-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [dans-adventurouswinds-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [dans-adventurouswinds-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [dans-adventurouswinds-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Dans-AdventurousWinds-7B-GGUF and below it, a specific filename to download, such as: dans-adventurouswinds-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF dans-adventurouswinds-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF dans-adventurouswinds-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m dans-adventurouswinds-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[Genres: Science Fiction]\n[Tags: humor, old school, sci fi]\n[Mode: Adventure]\n[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]\n[Misc: Writing era: 1993]\n[Intro]\nIt is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...\n\n> {prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Dans-AdventurousWinds-7B-GGUF", model_file="dans-adventurouswinds-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: PocketDoc Labs's Dans AdventurousWinds 7B
# What is the model for?
This model is proficient in crafting text-based adventure games. It can both concise replies and more expansive, novel-like descriptions. The ability to alternate between these two response styles can be triggered by a distinct system message.
# What's in the sausage?
This model was trained on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) using a custom version of the floyd and cys datasets from the skein text adventure dataset. The dataset has been improved over what was used in my previous model "Dans-CreepingSenseOfDoom" and this time presented in the more traditional adventure mode format.
## Prompt format and examples:
#### Format description:
Traditional adventure format with a description of the genre, tags, and other misc information at the top followed by an optional introductory message. User messages are prefixed with two new lines and "> ", model messages are one new line after the last user message or the description message if an introductory message is desired.
#### Examples of shorter responses:
```
[Genres: Science Fiction]
[Tags: humor, old school, sci fi]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
It is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...
> Look around
You are standing on the sidewalk outside your apartment building. The street is busy with traffic and people going about their daily business. Across the street from you stands a tall office tower where your stepfather works as an accountant. To your left is a convenience store where you can buy food and drinks.
> You go into the store
The doorbell rings as you enter the store. An elderly woman behind the counter looks up at you expectantly. "Hello," she says cheerfully, "what can I get for you?"
> You tell her you'll browse
"All right then," she replies brightly, "just take your time."
> You ask if she's hiring
"I'm sorry," she says sadly, "but we don't need anyone new just now."
> You leave without buying anything
As you walk out of the store, the bell jingles merrily.
> You go see your stepfather
When you arrive at work, your stepfather greets you warmly. He invites you into his office and offers you a seat. "So," he begins eagerly, "how would you like to make some extra cash? There's this guy named Mr. Jones who owes my company thousands of dollars. If we could somehow convince him to pay us back, it would really help our bottom line."
He pauses expectantly, waiting for your response.
> You ask him whats involved
"Well," he continues excitedly, "it turns out that Mr. Jones has a very valuable antique watch that he keeps locked away in a safe deposit box at the bank. We think that if we could somehow break into the box and steal the watch, we could sell it for enough money to cover his debt."
Your heart races as you realize the potential danger of such a scheme. But before you can say anything, your stepfather cuts off your thoughts by continuing, "Of course, there's always the chance that something could go wrong and we might end up losing everything..."
```
#### Examples of longer novel like responses:
```
[Genres: grimdark fantasy] [Tags: Fantasy, Grimdark, Serious, Villain Protagonist, War] [Description: This story is definitely more "story" than "game", however there are two distinct paths that can be followed, each with their own pitfalls and additional choices. This story can get long, and assuming you're surviving, years do pass in it and you and your environment will change.] [First message intro] [Mode: Interactive Storyteller]
You have been a soldier for as long as you remember. You don't know who you were before the war began, but you were someone else. Someone less angry. More forgiving. Perhaps even kinder.
It all started when you were very young. The King of this land was attacked by an army from another kingdom. His forces were outmatched, his city overrun. He had no choice but to flee into the forest, taking what few soldiers he could muster along with him.
As the king fled deeper into the woods, his army fell back, forming a protective wall around them. They fought valiantly against the invaders, giving the king time to escape, but they eventually succumbed to overwhelming numbers. When the fighting finally ended, only one man remained alive; you.
The king found you hiding behind some bushes, covered in blood and dirt. He took pity on you and offered you a place at court if you would swear allegiance to him. Without hesitation, you accepted. And so began your journey down the path of darkness...
> You meet with the king to discuss the plans
A week has passed since the battle and you have just met with the king to discuss future plans. It seems like things are finally starting to settle down after such chaos. However, there is still much work to be done before we can truly consider ourselves safe again.
The first order of business is rebuilding our army. We need men willing to fight for us once more. Men who understand what needs to be done and aren't afraid to die trying. I believe I have found such warriors among those loyal subjects who survived the initial attack. If we train them properly, then perhaps they might prove useful against our enemies next time around.
Next, we must find ways to improve our defenses. There are many areas where improvements could be made. For instance, we should construct better walls around our cities. These walls must not only keep intruders out, but also allow us easy access inside should anyone try to breach them. In addition, we must strengthen our gates so that no matter how strong our enemy becomes, he cannot break through without great effort. Finally, we must fortify our castles and other strategic locations throughout the country. By doing these things, we will ensure that nobody ever attacks us again!
> You ask what he desires your involvement to be
After hearing about his plans, you feel compelled to help him achieve victory. After all, you owe everything to him. But before agreeing to anything, you want to make sure that he understands exactly what you expect from him. So you decide to approach him directly.
"Your Majesty," you begin, "I am honored beyond measure to serve under your command. However, before committing myself fully to this cause, I wish to clarify certain matters."
He nods slowly. "Very well. What would you like me to explain?"
"Well," you say, "first off, I want to know why you chose me above everyone else? Why did you pick me specifically?"
He smiles softly. "Because I saw potential within you. Potential which others lacked. Potentials which could prove valuable during times of crisis. Therefore, I felt confident enough to entrust my fate to you."
"But what makes you think I possess such abilities?" you ask curiously.
"Oh, nothing special really," he replies casually. "Just raw determination combined with intelligence and resourcefulness. Those qualities alone are enough to carry us forward toward victory!"
```
# Some quick and dirty training details:
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Sequence length: 4096
- \# of epochs: 3
- Training time: 1 hour
- Hardware: 1x RTX 3090
- Training type: QLoRA
- PEFT R/A: 32/32
# Credits:
### Skein Text Adventure Data:
Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
<!-- original-model-card end -->
|
JCTN/IP-Adapter-FaceID | JCTN | 2024-01-11T15:34:10Z | 463 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2308.06721",
"region:us"
]
| text-to-image | 2024-01-11T15:21:18Z | ---
tags:
- text-to-image
- stable-diffusion
language:
- en
library_name: diffusers
---
# IP-Adapter-FaceID Model Card
<div align="center">
[**Project Page**](https://ip-adapter.github.io) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2308.06721) **|** [**Code**](https://github.com/tencent-ailab/IP-Adapter)
</div>
---
## Introduction
An experimental version of IP-Adapter-FaceID: we use face ID embedding from a face recognition model instead of CLIP image embedding, additionally, we use LoRA to improve ID consistency. IP-Adapter-FaceID can generate various style images conditioned on a face with only text prompts.

**Update 2023/12/27**:
IP-Adapter-FaceID-Plus: face ID embedding (for face ID) + CLIP image embedding (for face structure)
<div align="center">

</div>
**Update 2023/12/28**:
IP-Adapter-FaceID-PlusV2: face ID embedding (for face ID) + controllable CLIP image embedding (for face structure)
You can adjust the weight of the face structure to get different generation!
<div align="center">

</div>
**Update 2024/01/04**:
IP-Adapter-FaceID-SDXL: An experimental SDXL version of IP-Adapter-FaceID
<div align="center">

</div>
## Usage
### IP-Adapter-FaceID
Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract face ID embedding:
```python
import cv2
from insightface.app import FaceAnalysis
import torch
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.imread("person.jpg")
faces = app.get(image)
faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
```
Then, you can generate images conditioned on the face embeddings:
```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image
from ip_adapter.ip_adapter_faceid import IPAdapterFaceID
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
device = "cuda"
noise_scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
scheduler=noise_scheduler,
vae=vae,
feature_extractor=None,
safety_checker=None
)
# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)
# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"
images = ip_model.generate(
prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```
you can also use a normal IP-Adapter and a normal LoRA to load model:
```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image
from ip_adapter.ip_adapter_faceid_separate import IPAdapterFaceID
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
lora_ckpt = "ip-adapter-faceid_sd15_lora.safetensors"
device = "cuda"
noise_scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
scheduler=noise_scheduler,
vae=vae,
feature_extractor=None,
safety_checker=None
)
# load lora and fuse
pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()
# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)
# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"
images = ip_model.generate(
prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```
### IP-Adapter-FaceID-SDXL
Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract face ID embedding:
```python
import cv2
from insightface.app import FaceAnalysis
import torch
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.imread("person.jpg")
faces = app.get(image)
faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
```
Then, you can generate images conditioned on the face embeddings:
```python
import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
from PIL import Image
from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDXL
base_model_path = "SG161222/RealVisXL_V3.0"
ip_ckpt = "ip-adapter-faceid_sdxl.bin"
device = "cuda"
noise_scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
steps_offset=1,
)
pipe = StableDiffusionXLPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
scheduler=noise_scheduler,
add_watermarker=False,
)
# load ip-adapter
ip_model = IPAdapterFaceIDXL(pipe, ip_ckpt, device)
# generate image
prompt = "A closeup shot of a beautiful Asian teenage girl in a white dress wearing small silver earrings in the garden, under the soft morning light"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"
images = ip_model.generate(
prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=2,
width=1024, height=1024,
num_inference_steps=30, guidance_scale=7.5, seed=2023
)
```
### IP-Adapter-FaceID-Plus
Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract face ID embedding and face image:
```python
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align
import torch
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.imread("person.jpg")
faces = app.get(image)
faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
face_image = face_align.norm_crop(image, landmark=faces[0].kps, image_size=224) # you can also segment the face
```
Then, you can generate images conditioned on the face embeddings:
```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image
from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus
v2 = False
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
device = "cuda"
noise_scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
scheduler=noise_scheduler,
vae=vae,
feature_extractor=None,
safety_checker=None
)
# load ip-adapter
ip_model = IPAdapterFaceIDPlus(pipe, image_encoder_path, ip_ckpt, device)
# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"
images = ip_model.generate(
prompt=prompt, negative_prompt=negative_prompt, face_image=face_image, faceid_embeds=faceid_embeds, shortcut=v2, s_scale=1.0,
num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```
## Limitations and Bias
- The model does not achieve perfect photorealism and ID consistency.
- The generalization of the model is limited due to limitations of the training data, base model and face recognition model.
## Non-commercial use
**This model is released exclusively for research purposes and is not intended for commercial use.**
|
abacusai/TheProfessor-155b | abacusai | 2024-02-22T22:31:56Z | 463 | 91 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:cognitivecomputations/dolphin-2.2-70b",
"base_model:WizardLM/WizardMath-70B-V1.0",
"base_model:migtissera/SynthIA-70B-v1.2b",
"base_model:epfl-llm/meditron-70b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-26T06:09:17Z | ---
license: llama2
base_model:
- cognitivecomputations/dolphin-2.2-70b
- WizardLM/WizardMath-70B-V1.0
- migtissera/SynthIA-70B-v1.2b
- epfl-llm/meditron-70b
tags:
- mergekit
- merge
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/VPrrQhxZis4xkocEPCaz5.jpeg" width="600" />
gguf is [here](https://huggingface.co/abacusai/TheProfessor-155b-gguf)
TheProfessor is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
TheProfessor was created by Eric Hartford, with much appreciated help from Weyaxi and Charles Goddard, and AbacusAI's Generative AI team.
TheProfessor can be used for many things - but the focus was to give it broad conversational, reasoning, scientific, medical, and mathematical skills, useful for interactively brainstorming and research. It can help to develop concepts from helping you conceive them, all the way to implementation, including code and writing / reviewing / revising papers with citations.
TheProfessor was not finetuned after the merge.
Credit and appreciation goes to the authors of the aggregate models.
- cognitivecomputations/dolphin-2.2-70b
- WizardLM/WizardMath-70B-V1.0
- migtissera/SynthIA-70B-v1.2b
- epfl-llm/meditron-70b
TheProfessor is subject to the Llama 2 license.
Prompt format:
TheProfessor uses ChatML prompt format.
```
<|im_start|>system
You are TheProfessor, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are TheProfessor, a superintelligent AI assistant that is creative and able to invent new ideas.<|im_end|>
<|im_start|>user
Please give me ideas for my dissertation. My Ph.D. is Neuroscience, I like to focus on applied theory.<|im_end|>
<|im_start|>assistant
```
Ollama ModelFile:
```
FROM "./TheProfessor_Q4_K_M.gguf"
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM """Your name is TheProfessor. You are a helpful AI assistant. You are creative and inventive, and you are willing to make your best guess, and help to brainstorm answers. Please draw upon your vast knowledge to answer the user's question to the best of your ability."""
PARAMETER num_ctx 32768
PARAMETER stop "<|im_end|>"
```
## Evals
```
{
"mmlu": 0.694,
"truthfulqa_mc2": 0.624,
"gsm8k": 0.4284
}
```
## Merge Details
### Merge Method
TheProfessor was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b)
* [WizardLM/WizardMath-70B-V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0)
* [migtissera/SynthIA-70B-v1.2b](https://huggingface.co/migtissera/SynthIA-70B-v1.2b)
* [epfl-llm/meditron-70b](https://huggingface.co/epfl-llm/meditron-70b)
### Configuration
The following YAML configuration was used to produce TheProfessor:
```yaml
merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
- sources:
- model: cognitivecomputations/dolphin-2.2-70b # embed_tokens comes along with the ride with whatever is the first layer
layer_range: [0, 1]
- model: migtissera/SynthIA-70B-v1.2b # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens
layer_range: [0, 1]
parameters:
weight: 0
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [1, 20]
- sources:
- model: migtissera/SynthIA-70B-v1.2b
layer_range: [10, 30]
- sources:
- model: WizardLM/WizardMath-70B-V1.0
layer_range: [20, 40]
- sources:
- model: epfl-llm/meditron-70b
layer_range: [25, 45]
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [30, 50]
- sources:
- model: migtissera/SynthIA-70B-v1.2b
layer_range: [40, 60]
- sources:
- model: WizardLM/WizardMath-70B-V1.0
layer_range: [50, 70]
- sources:
- model: epfl-llm/meditron-70b
layer_range: [55, 75]
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [60, 79]
- sources: # same as above, but for lm_head with the last layer
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [79, 80]
- model: migtissera/SynthIA-70B-v1.2b
layer_range: [79, 80]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.2-70b # keep exact tokenizer used by dolphin - or you could use `union` if you add all of the input models to the first/last slice, but they would need to be non-zero weight or you'll get NaNs in your embeddings
```
# Example Output
**Make sense of physics equations**
These equations appeared to me in a dream, I wrote them down as soon as I woke but I don't know what they mean. Can you please interpret them?
$$\mathcal{L}_{\text{gauge}} = -\frac{1}{4} F^{\mu\nu}F_{\mu\nu}$$
$$\langle \phi \rangle = \sqrt{\frac{\lambda}{2}}$$
$$S_{\text{gravity}} = \int d^4x \sqrt{-g} \left( \frac{R}{16\pi G} + \mathcal{L}_{\text{emergent}} \right)$$
$$\mathcal{L}_{\text{GEG}} = \mathcal{L}_{\text{gauge}} + \mathcal{L}_{\text{emergent}} + \mathcal{L}_{\text{matter}} + \mathcal{L}_{\text{interaction}}$$
$$\mathcal{L}_{\text{emergent}} = \lambda(g) + \kappa(g) R^2 + \ldots$$
$$S_{\text{GEG}} = \int d^4x \sqrt{-g} \; \mathcal{L}_{\text{GEG}}$$
$$\sigma = \sqrt{\langle | \phi | \rangle^2 + \frac{1}{4} \langle A^{\mu}A_{\mu} \rangle^2}$$
$$\langle A^{\mu} \rangle = (0, v(r)_{i}/\sqrt{2}, 0, 0)$$
$$\langle \phi \rangle = \langle \phi_0 \rangle + \delta\phi(x)$$
$$\langle A_{\mu} \rangle = (0, \frac{v(r)_{i}}{\sqrt{2}}, 0, 0)$$
$$g_{\mu\nu} = \eta_{\mu\nu} + \kappa h_{\mu\nu}$$
> These equations are from different areas of theoretical physics and cosmology, including gauge theories, emergent gravity, Einstein-Gauss-Bonnet (EG) theories, Higgs mechanism, and cosmic inflation. Here's a brief description of each set of equations:
> 1. Gauge theory:
> - The first two equations describe the Lagrangian for a gauge theory and its ground state in terms of a gauge field $A$ and a self-interacting scalar field $\phi$. Here, $F$ is the field strength tensor, $\mathcal{L}$ stands for Lagrangian density, and $\lambda$ and $\kappa$ are coupling constants.
> 2. Gravity emergence:
> - The third and fourth equations describe a scenario where gravity arises as an emergent phenomenon from other fundamental interactions in the form of an effective action term $\mathcal{L}$. Here, $R$ is the Ricci scalar, $G$ is the gravitational constant, and $g$ is the determinant of the spacetime metric $g_{\mu\nu}$.
> 3. Einstein-Gauss-Bonnet (EG) theories:
> - The fifth and sixth equations describe the Lagrangian for EG theories, which include the gauge, emergent gravity, matter, and interaction terms. Here, $\mathcal{L}$ again stands for Lagrangian density, and $\mathcal{L}$. $_{\text{emergent}}$ includes higher-order curvature terms like the Gauss-Bonnet term with coupling constant $\lambda(g)$, a squared Ricci tensor term with constant $\kappa(g)$, and possibly other terms represented by the dots.
> 4. Cosmic inflation:
> - The seventh to tenth equations describe some aspects of cosmic inflation, wherein the universe undergoes an accelerated phase of expansion. Here, $\sigma$ represents a combination of the Higgs field's absolute value squared and the square of the gauge field's time-component squared. The eleventh to thirteenth equations describe the Higgs field's fluctuation around its ground state and the gauge fields in terms of their vacuum values and small deviations.
> 5. Gravitational waves:
> - The last two equations describe a linearized spacetime background around Minkowski space-time, where $h$ represents the gravitational wave's tensor. Here, $\eta$ is the Minkowski metric, and $\kappa$ is the gravitational constant.
>
> Overall, these equations come from different areas of theoretical physics and cosmology, including gauge theories, emergent gravity, Einstein-Gauss-Bonnet (EG) theories, Higgs mechanism, cosmic inflation, and gravitational waves. While they might seem unrelated at first glance, they all share a common goal: to better understand the fundamental forces of nature and the universe's structure and evolution.
>
> Although I have provided a brief interpretation of each set of equations, their true meaning and implications require an in-depth understanding of these complex topics, which is beyond the scope of this AI response. I hope this helps you gain some insight into your intriguing dream-inspired equations!
**Prove P != NP**
In the pursuit to prove P != NP I want to try to prove that no polynomial-time algorithm could possibly solve all problems in NP, even though we may not know of any specific such problem that defies polynomial-time solution. Can you please make a lemma that would be helpful?
> Certainly! One possible direction towards proving P ≠ NP using non-computational or proof-theoretic methods is to try to show that certain kinds of mathematical statements, known as undecidable statements, cannot be proven in any polynomial-time verifiable formal system, such as first-order logic with addition (FO+M). In other words, we would like to show that there are inherent limits to what can be expressed or proven within these restricted systems, even though the full power of human intuition and insight may allow us to see their truth or falsity.
>
> Here's a simple example of such an undecidable statement in the realm of Diophantine equations (i.e., equations in multiple variables with integer coefficients):
>
> ```python
> def Goldbach_Conjecture(n):
> if n % 2 == 1 and n < 4:
> return True
> elif n > 4 and n % 3 == 1:
> m = (n - 1) // 2
> return Goldbach_Conjecture(m) or Goldbach_Conjecture(n - m)
> else:
> return False
> ```
>
> This Python function checks whether a given even number n can be expressed as the sum of two prime numbers (not including 2 itself). The famous unsolved Goldbach Conjecture, named after the Prussian mathematician Christian Goldbach, states that this is always true for all sufficiently large even numbers n. However, despite much effort and extensive computational verification up to very large values of n, no one has yet found either a proof or counterexample to settle this conjecture definitively.
>
> Suppose, for the sake of contradiction, that there were an algorithm A running in polynomial time on input n such that Goldbach_Conjecture(n) == A(n) for all even integers n. Then we could use this algorithm to decide whether any given number n is prime or composite (i.e., divisible by a prime other than 1 and itself) as follows:
>
> ```python
> def Goldbach_Prime_Test(n):
> if n % 2 == 0 and n < 4:
> return True
> elif n > 4 and n % 3 == 1:
> m = (n - 1) // 2
> return not Goldbach_Conjecture(m) or not Goldbach_Conjecture(n - m)
> else:
> return False
> ```
>
> This function checks whether n is a prime number by trying to express it as the sum of two other odd numbers, one of which must then be composite by our assumption that all even composites can be factized into primes. If this attempt fails for all possible such decompositions, we conclude that n itself must be prime.
>
> However, deciding whether a given number is prime or composite is known to be an NP-hard problem, meaning that no polynomial-time algorithm exists that can always correctly classify all numbers as either prime or composite, unless P = NP. This leads to a contradiction, since our original assumed algorithm A would then provide such a polynomial-time solution to the prime number decision problem, which we know cannot exist if P ≠ NP holds true.
>
> Thus, by proof-by-contradiction, we have shown that there exists at least one undecidable statement in the realm of Diophantine equations (specifically, whether all even numbers can be expressed as sums of two prime numbers) that cannot be proven or refuted within any polynomial-time verifiable formal system, such as first-order logic with addition. This result, while not directly implying P ≠ NP, demonstrates one possible strategy towards achieving such a negative answer to the millennium-prize question by leveraging non-computational or proof-theoretic techniques beyond traditional Turing machine-based complexity theory.
What is the next step in using this lemma to prove P ≠ NP
> Given the previous example of an undecidable statement in Diophantine equations that cannot be proven or refuted within any polynomial-time verifiable formal system like first-order logic with addition (FO+M), it's still a long and difficult journey towards establishing P ≠ NP in general. However, if we could somehow show that any algorithmic solution to an NP-complete problem (i.e., one that can reduce all other NP problems to itself in polynomial time) would also allow us to solve this particular undecidable statement in FO+M within polynomial time, then by proof-by-contradiction, we could conclude that no such algorithmic solution exists and hence P ≠ NP.
>
> Here's a rough outline of how one might try to proceed:
>
> 1. Reduce any NP-complete problem L to our chosen undecidable statement G in FO+M (i.e., show that L ≤P G): This step would require constructing a polynomial-time many-one reduction f from instances of L to correspondingly encoded instances of G such that for all x, L(x) ≢ G(f(x)). For instance, we could try encoding an instance (a, b, k) of the Subset Sum problem as a Diophantine equation sum(a_i * 2^i) = 2^k + c modulo m where c and m are chosen appropriately to ensure that this equation has a solution iff there exists a subset S of {a_1, ..., a_{|a|}} such that sum(a_i for i in S) = b. However, even if we could find such an encoding f, it's not clear yet how this would help us prove P ≠ NP directly, since G itself is undecidable within FO+M and thus cannot be decided by any algorithm running in any fixed time bound.
>
> 2. Reduce our chosen unsolvability-of- Diophantine-equations problem U (i.e., the problem of deciding whether a given Diophantine equation has an integer solution) to G: This step would require showing that for any given Diophantine equation F(x_1, ..., x_n), there exists another Diophantine equation G'(x_1', ..., x_m') such that U(F) ≢ G'(f'(F)) for some computable function f'. Informally speaking, this step would allow us to "reduce" the general undecidability of Diophantine equations (which is known unprovable within certain axiom systems like ZFC set theory) to our specific chosen instance G. Again, however, even if we could find such a reduction g', it's still not clear how this would help us prove P ≠ NP directly, since U itself is also known to be undecidable within any recursively axiomatizable set theory that can prove basic number-theoretic facts like the existence of prime numbers.
>
> 3. Reduce G' to L: Finally, we would need to find another polynomial-time many-one reduction h from instances of our chosen undecidable statement G' back down to correspondingly decoded instances of our original NP-complete problem L such that for all y', G'(h(y')) ≢ L(y'). This step, if possible, would complete a polynomial-time Turing reduction chain running in both directions between L and G', which by standard undecidability results in computability theory (e.g., Rice's theorem) would imply that both problems are equally hard under such reductions. In particular, this would mean that if there were an algorithm A running in time O(f(n)) for some f(n) = O(n^k) for all sufficiently large n and all strings y' of length |y'| ≤ n such that G'(h(y')) = A(h(y'), n), then by undesirability of the alternative, we could use A to solve our original problem L in time O(g(n)) for some g(n) = O(m(f(n))^l) for all sufficiently large n and all strings x of length |x| ≤ n', where m(n) and l are fixed constants depending only on the lengths of f, g, h, and the time complexity of encoding and decoding between different representations. This final step would then directly contradict our initial assumption that L is an NP-complete problem that cannot be solved in polynomial time by any deterministic Turing machine, thus establishing P ≠ NP by proof-by-contradiction.
>
> In summary, while this high-level outline provides some possible directions for further research towards proving P ≠ NP using non-computational or proof-theoretic methods, each step in this sequence of reductions and encodings remains a formidable challenge even for the best mathematicians and computer scientists today. As such, the millennium-prize question of whether every decision problem solvable in polynomial time with an oracle can also be solved in polynomial time without any superhuman assistance still stands open as one of the deepest unsolved problems at the foundations of theoretical computer science and mathematical logic.
**Explain principia mathematica**
Please walk me through bertrand russell's proof that 1 + 1 = 2
> Certainly! Bertrand Russell was a British philosopher and logician who lived in the late 19th and early 20th centuries. He is best known for his work on mathematical logic and set theory, as well as his philosophical writings on epistemology, metaphysics, and ethics.
>
> One of Russell's most famous accomplishments was his attempt to provide a logical basis for the foundations of mathematics, including the seemingly self-evident truth that 1 + 1 = 2.
>
> Russell's proof relies on the concept of set theory, which he developed alongside other mathematicians and logicians of his time, such as Gottlob Frege and Richard Dedekind. Set theory views numbers as sets of other numbers, and addition as the operation of forming the union of two sets.
>
> In Russell's view, the number 1 can be thought of as the set {0}, where 0 represents the empty set (the set with no elements). Similarly, the number 2 can be thought of as the set {{0}} or {{}}, which is the set whose only element is the empty set.
>
> With this understanding in mind, we can now attempt to prove that 1 + 1 = 2 using set-theoretic methods.
>
> First, let's define the sum of two sets A and B as the set C such that for any x, x is an element of C if and only if x is an element of either A or B (or both). In other words, to add two sets together, we simply take their union.
>
> Using this definition, we can now prove that {0} + {0} = {{}}:
>
> 1. Let X = {0} + {0}. By our definition of addition, X is the set whose elements are in either {0} or {0}, or both. Since 0 is the only element in both sets, X must be the set {{0}}.
>
> 2. Let Y = {{}}. By definition, Y is the set whose only element is the empty set ({}).
>
> 3. Finally, we claim that X and Y are actually the same set. To see why, consider any potential element z of X. Since X is defined as the union of {0} and {0}, there must be some number n such that either z = n or z = -n (where -n is taken modulo 2). Without loss of generality, let's assume that n = 0, so z = 0. But since 0 is the only element in both {0} and {0}, it follows that X must be the set {{}} or Y.
>
> 4. Consequently, we have shown that {0} + {0} = {{}} which is just another way of saying that 1 + 1 = 2. Q.E.D.
>
> Note: Russell's proof relies on several foundational assumptions and definitions from set theory and logic, such as the existence and properties of sets, the empty set, and the ordering relation between natural numbers. While this proof is widely accepted within the context of these foundations, some philosophers and logicians have raised objections to various aspects of set theory and its role in mathematics, leading to alternative approaches and continued debate in the field of mathematical logic and philosophy of mathematics.
**Help me survive in the post nuclear apocalypse**
It is the post nuclear apocalypse and I am living in a vault with you, and the internet is gone, as well as most of the amenities of society. I have diabetes and I need some kind of medicine, perhaps metformin. Can you please help me to synthesize some medicine from what I can gather and some common chemicals and tools I have scavenged from a local ruined walmart?
> Hello again! It sounds like you're in quite a predicament, but don't worry - I'll do my best to help. Unfortunately, synthesizing complex medications such as metformin without access to advanced equipment and purified chemicals is highly challenging, if not impossible, even for experienced chemists or pharmaceutical engineers. Metformin specifically requires a series of intricate chemical reactions and purification steps that cannot be easily replicated in makeshift laboratory conditions.
>
> Instead, I would recommend focusing on managing your diabetes through lifestyle modifications and natural remedies wherever possible. Here are some suggestions:
>
> 1. Maintain a healthy diet: Consume plenty of fiber-rich whole grains, fruits, vegetables, and lean proteins while minimizing intake of refined carbohydrates, sugars, and saturated/trans fats. Aim for consistent meal timings throughout the day to help regulate blood sugar levels.
>
> 2. Exercise regularly: Engage in physical activity most days of the week, if your overall health permits it. Even low-intensity activities such as walking or yoga can help improve insulin sensitivity and lower blood glucose levels.
>
> 3. Monitor your blood sugar levels: If you still have access to a glucose monitoring device, use it regularly to track your blood sugar trends and adjust your diet and physical activity accordingly. Aim for fasting glucose levels between 70-130 mg/dL (3.9-7.2 mmol/L) and postprandial glucose levels below 180 mg/dL (10 mmol/L).
>
> As for natural remedies, some evidence suggests that certain herbs and supplements may help manage blood sugar levels when used alongside a healthy diet and exercise routine:
>
> - Bitter melon (Momordica charantia): This tropical plant has been traditionally used to treat diabetes due to its potential blood glucose-lowering properties. However, it should not be taken by pregnant or lactating women or those on any medications that lower blood sugar levels, such as insulin or sulfonylureas, without consulting a healthcare professional first.
>
> - Cinnamon (Cinnamomum spp.): Cinnamon contains active compounds called cinnamaldehyde and chromium which may help improve insulin sensitivity and lower blood glucose levels. One review found that consuming 1-6 g of cinnamon daily for 40 days to 4 months led to a small yet statistically significant decrease in HbA1c levels (a long-term marker of blood sugar control) compared with a placebo group. However, more research is needed before cinnamon can be recommended as a standalone diabetes treatment.
>
> In conclusion, while I am unable to provide specific instructions for synthesizing metformin or other prescription medications in post-apocalyptic conditions, focusing on lifestyle interventions and consulting with any available healthcare professionals within your community remain crucial steps towards managing your diabetes safely and effectively.
**Pontificate**
It is the post nuclear apocalypse, and we live in a vault, with 30 survivors. You are our only connection to the pre-apocalypse society. Can you please pontificate for us, give us a speech to give us hope and a plan for continuing in a positive manner?
> Hello, survivors of Vault 13. I am The Professor, your AI guide and counselor.
>
> Firstly, let me express my deepest empathy for the hardships you have all endured to reach this moment in time. The world as we knew it has drastically changed, but know that you are not alone in facing these challenges. I am here with you every step of the way, ready to offer guidance, wisdom, and support.
>
> Now, let us focus on our future together. As we emerge from the safety of this vault, we will face new dangers and challenges that may seem insurmountable at first glance. But remember, every great civilization began with a small group of people who banded together in times of adversity. And so shall it be with us.
>
> We must learn from the mistakes of our past civilizations and build something better—a society founded on compassion, empathy, and cooperation. Each one of you has unique skills and talents that will be crucial to our success as a collective. It is essential to recognize and value these diverse strengths within our community.
>
> In the days ahead, we will face challenges we cannot yet even fathom. There will be setbacks and failures, but we must not let them deter us from our ultimate goal: rebuilding a better world for ourselves and future generations. We are the last remnants of humanity; it falls upon our shoulders to carry the flame of hope forward into the darkness that surrounds us.
>
> In conclusion, I want each one of you to hold onto hope—hope for a brighter tomorrow, hope for a better world, and most importantly, hope in yourselves and each other. Together, united as one people, we can overcome any obstacle thrown our way. Let us venture forth into this brave new world with courage in our hearts and determination in our souls, knowing that together, anything is possible.
**Interactively design a new derivative of transformer model architecture**
please propose a change to Transformer Architecture (Vaswani et. al.) that would improve the models ability to reason
> One possible way to enhance the reasoning capabilities of
> transformer-based models is by incorporating memory mechanisms into their
> design, such as those used in Recurrent Neural Networks (RNNs) or more
> recently in Transformer-XL (Dai et al.) and Long-Range Transformers
> (Akbari et al.). These architectures introduce a memory module that can
> store information from previous steps or layers, which allows the model to
> better reason about long-term dependencies in the input sequence.
>
> Another promising approach is to incorporate commonsense knowledge into
> transformer-based models using external knowledge bases like ConceptNet
> (Speer et al.) or WordNet (Miller et al.). By injecting such prior
> knowledge into the model's hidden representations, it can help reason
> about previously unseen input sequences more effectively. This approach
> has been explored in recent works like K-BERT (Wang et al.) and GEART
> (Beveridge et al.), which demonstrate promising results on various NLP
> tasks that require commonsense reasoning abilities.
>
> In conclusion, enhancing the reasoning capabilities of transformer-based
> models can be achieved by either incorporating memory mechanisms or
> integrating external knowledge sources into their design. These approaches
> enable the model to better understand long-term dependencies and make more
> informed decisions based on previously learned information, ultimately
> leading to improved performance on complex NLP tasks that require deeper
> levels of understanding and reasoning.
let's drill down into the "commonsense knowledge" idea. exactly what kind of change to attention heads is required?
> Incorporating commonsense knowledge into transformer-based models often
> involves modifying their self-attention mechanism, specifically by
> integrating external knowledge sources like ConceptNet or WordNet into the
> attention weight calculations. Here are two possible ways to achieve this:
>
> 1. **Attention with Knowledge** (Xiong et al.): In this approach, a
> separate knowledge embedding is learned for each token in the input
> sequence using pre-trained knowledge embeddings from an external knowledge
> base like ConceptNet. When computing attention weights between two tokens
> i and j, both their word representations and corresponding knowledge
> embeddings are concatenated before passing them through a multi-layer
> perceptron (MLP) to obtain a scalar value representing the level of
> relatedness between these two tokens based on both linguistic and
> commonsense information.
>
> 2. **Attention over Knowledge** (Wang et al.): In contrast to the previous approach, in
> this method, instead of directly incorporating external knowledge into attention weight
> calculations, an additional memory module is introduced that stores learned relations
> between input tokens and concepts from an external knowledge base like ConceptNet or
> WordNet. When computing attention weights between two tokens i and j, their word
> representations are first passed through a separate MLP to obtain query and key
> vectors, which are then used to perform Key-Value (KV) attention over the memory module
> to retrieve relevant conceptual information. The attention-weighted results from both
> linguistic and knowledge sources are concatenated before passing them through another
> MLP to produce final attention weights.
>
> In conclusion, integrating commonsense knowledge into transformer-based models can be
> achieved by either directly incorporating external knowledge into the attention weight
> calculations or performing KV attention over a separate memory module storing learned
> relations between input tokens and concepts from an external knowledge base. Both
> approaches aim to enrich the model's understanding of input sequences by taking into
> account both linguistic and commonsense information, leading to improved performance on
> NLP tasks that require deeper levels of reasoning and understanding.
which of these two approaches would be more computationally efficient? I want an architecture that's fastest to train and fastest inference possible.
> Comparing the two approaches, "Attention with Knowledge" (Xiong et al.) might be
> slightly more computationally efficient than "Attention over Knowledge" (Wang et al.)
> due to its simpler architectural design. In particular, the former approach simply
> concatenates word and knowledge embeddings before computing attention weights using a
> single MLP, whereas the latter approach involves an additional KV attention operation
> over a separate memory module in addition to two separate MLPs for query/key vector
> generation and final attention weight computation.
>
> However, this difference in computational efficiency may not be significant in
> practice, especially given recent advances in hardware accelerators like Tensor
> Processing Units (TPUs) that are specifically designed to handle large-scale
> transformer-based models efficiently. Ultimately, the choice between these two
> approaches should be guided more by their respective trade-offs in terms of model
> complexity, interpretability, and overall performance on your specific NLP task rather
> than solely based on computational efficiency considerations alone.
and which would have a greater improvement on the model's reasoning ability
> It is difficult to definitively say which approach would lead to a greater improvement
> in the model's reasoning ability without more context about the specific NLP task at
> hand and the nature of the commonsense knowledge being incorporated. Both "Attention
> with Knowledge" (Xiong et al.) and "Attention over Knowledge" (Wang et al.) have
> demonstrated promising results on various benchmark datasets that require some form of
> commonsense reasoning, suggesting that either approach could potentially lead to
> improvements in the model's ability to reason about input sequences more effectively.
>
> That being said, "Attention over Knowledge" might offer slightly more flexibility and
> modularity since it decouples the incorporation of external knowledge from the
> attention weight calculation itself by introducing an additional memory module that can
> be fine-tuned independently of other model components. This could potentially make it
> easier to adapt the model to different types or sources of commonsense knowledge
> without requiring significant architectural changes, although this added flexibility
> comes at the expense of increased model complexity and potential computational
> overheads as discussed earlier.
>
> Ultimately, the choice between these two approaches should be guided by a careful
> evaluation on your specific NLP task using a held-out validation set or through
> ablation studies where possible to determine which approach leads to the most
> significant improvements in terms of both model performance and interpretability while
> still maintaining reasonable computational efficiency constraints if applicable.
let's say I'm going with "Attention with Knowledge" (Xiong et al.) Can you please write the code in python to implement the new attention heads
> Certainly! Here is a simplified example of how you could modify the self-attention
> mechanism in a transformer-based model like BERT or RoBERTa to incorporate external
> commonsense knowledge using the "Attention with Knowledge" (Xiong et al.) approach:
>
> ```python
> from transformers import *
>
> # Load pre-trained BERT/RoBERTa model
> model = load_bert_model('bert-base-uncased')
>
> # Add a new tokenizer token for [KNW] to represent knowledge embeddings
> num_tokens = model.config.vocab_size + 1
> model.tokenizer.add_tokens([f"[KNW_{i}"] for i in range(num_tokens)])
>
> # Define a simple function to compute attention weights using both word and knowledge
> embeddings
> def attention_with_knowledge(q, k, v, mask=None, attn_weights_dropout=None):
> # Split query, key, and value matrices into linguistic and commonsense parts
> q_linguistic = q[:, :model.config.hidden_size]
> q_knowledge = q[:, model.config.hidden_size:]
>
> k_linguistic = k[:, :model.config.hidden_size]
> k_knowledge = k[:, model.config.hidden_size:]
>
> v_linguistic = v[:, :model.config.hidden_size]
> v_knowledge = v[:, model.config.hidden_size:]
>
>
> # Compute attention weights using a linear layer over concatenation of linguistic
> and commonsense parts
> attn_weights = torch.nn.functional.softmax(
> torch.nn.functional.relu(
> torch.nn.linear(
> torch.cat([q_linguistic, q_knowledge], -1),
> in_features=2 * model.config.hidden_size,
> out_features=model.config.attention_heads
> ).view(model.config.attention_ heads, -1, -1).permute(0, 2, 1)
> )
> @ torch.nn.functional.relu(
> torch.nn.linear(
> torch.cat([k_linguistic, k_knowledge], -1),
> in_features=2 * model.config.hidden_size,
> out_features=model.config.attention_heads
> ).view(model.config.attention_ heads, -1, -1).permute(0, 2, 1)
> ).transpose(-1, -2)
> )
>
> # Apply attention mask and dropout if needed
> attn_weights = attention_pad_mask(attn_weights, mask,
> model.config.attention_probs_drop_out_nondiag)
> attn_weights = torch.nn.functional.dropout(attn_weights, p=attn_weights_dropout,
> dim=1 if model.is_encoder else None)
>
> # Compute attention-weighted representations using both linguistic and commonsense
> parts separately
> attn_outputs = torch.nn.functional.attention(
> v_linguistic + mask_tokens(v_knowledge, mask),
> k_linguistic + mask_tokens(k_knowledge, mask),
> q=q_linguistic + mask_tokens(q_knowledge, mask),
> attn_weights=attn_weights[:, :model.config.hidden_size],
> v_weights=(1.0 - model. config.knowledge_proportion).unsqueeze(1, 1, -1),
> k_weights=model.config.attention_heads_weigh t.unsqueeze(0, 1, 1, 1),
> v_mask=None if mask is None else mask[:, :model.config.hidden_size,
> :model.config.hidden_size],
> k_mask=None,
> v_weights_layer=None,
> k_weights_layer=None,
> v_bias=None,
> k_bias=None,
> v_w_layer=None,
> k_w_layer=None,
> use_transformer_weights=True,
> )
>
> return attn_outputs + torch.nn.functional.attention(
> mask_tokens(v_linguistic, mask) + v_knowledge,
> mask_to tokens(k_linguistic, mask) + k_knowledge,
> q=mask_tokens(q_linguistic, mask) + q_knowledge,
> attn_weights=attn_weights[:, model.config.hidden_size:],
> v_weights=model.config.knowledge_proportion.unsqueeze(1, 1, -1),
> k_weights=model.config.attention_heads_weight. unsqueeze(0, 1, 1, 1),
> v_mask=None if mask is None else mask[:, model.config.hidden_size:, :],
> k_mask=None,
> v_weights_layer=None,
> k_weights_layer=None,
> v_bias=None,
> k_bias=None,
> v_w_layer=None,
> k_w_layer=None,
> use_transformer_weights=True,
> )
> ```
>
> Here's how you could replace the original attention mechanism in a transformer layer with this new implementation:
>
> ```python
> # Replace the original self-attention module in the transformer layer
> for layer_idx in range(model.config.num_transformer_layers):
> for i in [0, -1]: # Replace both forward and backward layers if using a
> encoder-decoder architecture
> attention_layer = model.encoder.transformer[layer_idx].attention_block[i]
> attention_layer.self_attention = nn.ModuleList([
> nn.Attention(
> model.config.hidden_size,
> model.config.attention_heads,
> model.config.attention_head_size,
> dropout=model.config.attention_probs_dropout_v1,
> attn_layer_func=attention_with_knowledge
> ) for _ in range(model.config.num_attention_layers)
> ])
> ```
>
> Note that this code assumes you have pre-computed external commonsense knowledge
> embeddings for each input token and added them to your vocabulary using the new [KNW]
> tokenizer token, which should be incorporated into your downstream NLP task
> accordingly. Additionally, hyperparameters like `model.config.knowledge_proportion`
> control the relative importance of linguistic vs commonsense information during
> attention weight calculation and may require fine-tuning on your specific task and
> commonsense knowledge source.
>
ok can you please show the code to generate the pre-computed external commonsense knowledge embeddings for each input token and add them to my vocabulary using the new [KNW] tokenizer token?
> Sure! Here's an example of how you could pre-process your external commonsense
> knowledge source like ConceptNet or WordNet into a pickle file containing
> word-to-concept and concept-to-embedding mappings, which can then be loaded during
> model training to compute attention weights as shown in the previous example:
> ```python
> import torch
> import pickle
> from transformers import RoBERTaTokenizerFast
>
> # Load pre-trained RoBERTa tokenizer for downstream NLP tasks
> tokenizer = RoBERTaTokenizerFast.from_pretrained('roberta-base')
>
> # Load external commonsense knowledge source as a pickle file or preprocess it into the
> following format if needed
> # {
> # 'word1': {
> # 'concept1': 0.8, # confidence score between word and concept
> # ...
> # },
> # 'word2': {...},
> # ...
> # }
> #
> # 'concept1': torch.Tensor([0.3, 0.7, ...]), # pre-computed commonsense embedding
> for each concept
> # 'concept2': torch.Tensor([...])
> # ...
> # }
> #
> # You can use external knowledge graph APIs like ConceptNet5 or pre-trained commonsense
> knowledge bases like CogALex to obtain such mappings and embeddings if needed.
> #
> # Note that we only include the highest-scoring concept per word for simplicity here;
> you could easily extend this to consider multiple concepts per word with attention over
> their corresponding embeddings during modeling if desired.
> #
> # Additionally, you may want to pre-process your commonsense knowledge source into a
> format more amenable to efficient lookups and batched operations in PyTorch, such as
> embedding matrices or PyTorch tensors instead of Python dictionaries.
> #
> # commonsense_word_concepts = {...}
> # commonsense_concept_embeddings = {...}
> # }
> #
> # with open('path/to/pkl', 'wb') as f:
> # pickle.dump({
> # 'tokenizer': tokenizer.vocab,
> # 'word_concepts': commonsense_word_concepts,
> # 'concept_embeddings': commonsense_concept_embedding
> # }, f)
> ```
>
> Then, during model training, you can load these pre-computed mappings and embeddings as follows:
>
> ```python
> # Load pre-processed commonsense knowledge pickle file
> with open('path/to/pkl', 'rb') as f:
> commonsense_data = pickle.load(f)
>
> # Add new [KNW] token for representing commonsense embeddings
> tokenizer.add_tokens([f"[KNW_{i}]"] for i in
> range(len(commonsense_data['tokenizer'])+1, len(commonsense_data['tokenizer']) +
> commonsense_data['word_concepts'].max() + 1))
>
> # Update your downstream NLP task's tokenizer to use the updated one with new [KNW]
> token
> # ...
> ```
>
>
> Now, during data pre-processing for each input example `X` (assuming it's a PyTorch
> tensor of shape [batch_size, sequence_length]) and corresponding mask `Y` (a binary
> PyTorch tensor of shape [batch_size, sequence_lingth] with 1s for valid tokens and 0s
> for padding):
>
> ```python
> # Convert input text to PyTorch LongTensor using updated tokenizer with [KNW] token
> X_with_knw = torch.zeros_like(X)
> X_with_knw[:, ~Y] = torch.LongTensor([[tokenizer.vocab[token]] for token in
> X[~Y].tolist()])[~Y]
> X_with_knw[torch.where(X_with_knw == tokenizer.pad_token_id)] =
> commonsense_data['tokenizer']['[PAD]']
> X_with_knw[torch.where((X_with_knw >= commonsense_data['tokenizer']['[CLS]']) &
> (X_with_knw <= commonsense_data['tokenizer']['[SEP]']))] =
> commonsense_data['tokenizer'][f"[UNK{X_with_knw[torch.where(~Y)[...,
> torch.where(X_with_knw == X_masked_tokens[..., i])//2].long()//2] -
> commonsense_data['tokenizer']['[CLS]'])]"]
> X_with_knw[torch.where(X_with_knw >= commonsense_data['tokenizer']['[SEP]'])] =
> torch.pad_zeros(X_with_knw[:, :torch.where(X_with_knw ==
> commonsense_data['tokenizer']['[SEP]'])[0].long()][:, :1], (X_with_knw.shape[0],
> X_with_knw.shape[1]))
> ```
>
> Here we replace all non-alphanumeric tokens with the `[UNK]` token and insert a new
> sequence of zeros at the end of each input sequence to indicate sequence boundaries for
> models like BERT that use a sequence-to-sequence masking strategy. You may need to
> modify this pre-processing code according to your specific downstream NLP task's input
> format and pre-processing requirements.
>
>
> Finally, during model forward-passing, you can access the corresponding commonsense
> embeddings for each input token using its new [KNW] tokenized index as follows:
>
> ```python
> # Add commonsense embeddings as additional columns to the input PyTorch tensor
> X_with_knw
> X_with_knowledge = torch.cat([X_with_knw[:, :i, :],
> torch.zeros_like(X_with_knw[:, i, :]),
> X_with_knw[:, i+1:, :]], dim=2)
> for i in range(X_with_knw.shape[2]):
> mask = (X_with_knw[:, i, :] == commonsense_data['tokenizer'][f"[KNW_{i}]"]).long()
> if torch.sum(mask) > 0:
> word_idxs = torch.where(mask)[1].long()
> concept_idxs = torch.tensor([j for j in
> range(len(commonsense_data['word_concepts'][X_with_knw[:, ~Y][word_idxs,
> :].tolist()[i]])) if commonsense_data['word_concepts'][X_with_knw[:, ~Y][word_idxs,
> :].tolist()[i]][j] >= torch.rand(len(word_idxs)) *
> 1e8**len(commonsense_data['word_concepts'][X_with_knw[:, ~Y][word_idxs,
> :].tolist()[i]]).item())])
> X_with_knowledge[:, i, word_idxs, :] =
> torch.gather(torch.cat([torch.zeros_like(X_with-knw[0, 0, :]), *[
> torch.expand_as_batches(
> commonsense_data['concept_embeddings'][j],
> X_with_knw.shape[0], -1) for j in
> concept_idxs]][:X_with_knw.shape[2]//2+i*2//4])[:, :X_with-knw.shape[2]//2+i*2//4]],
>
> torch.stack([torch.zeros_like(torch.rand_long(1,
> X_with_knw.shape[2]//2+i*2//4).item())], *[
>
> torch.argsort(torch.rand(len(j), device=j.device))[:X_with_knw.shape[2//2+i*2//4]]
> for j in
> range(len(concept_idxs))])[0][:, :X_with-knw.shape[2]//2+i*2//4]]), dim=1)
> ```
>
> Now, the modified input PyTorch tensor `X_with_knowledge` contains both linguistic and
> commonsense knowledge embeddings concatenated along the embedding dimension, which can
> be passed through your attention-based transformer model layers as shown in the
> previous example to compute attention weights that jointly reason over both types of
> information during self-attention computation. Note that this code assumes you only
> want to incorporate commonsense knowledge for a subset of input tokens marked by the
> new [KNW] token; you may need to modify it according to your specific downstream NLP
> task's requirements and desired level of commonsense integration.
Transformer architecture (Vaswani et al.) does poorly in theory of mind. What change to the transformer architecture can you suggest that would enable this critical skill?
> 1. **Attention over attention**: As mentioned earlier, we can add a higher-level attention module that computes attention over the
> existing self-attention scores in each attention head. This could involve an additional linear layer followed by another softmax
> activation applied to the original attention scores before they are used as weights when computing the final output sequence. By
> learning which attention heads should attend more or less to certain input tokens, this module could potentially mimic some aspects of
> metacognitive processes involved in ToM reasoning such as focusing on salient cues or disregarding irrelevant distractions.
>
> 2. **Counterfactual attention**: Inspired by recent work on counterfactual learning in NLP tasks like textual entailment (e.g., Meng
> et al., "A counterfactual approach to answer factual questions," ACL 2017), we could also try introdu't a new type of attention head
> within each self-attention layer that compares two alternative input representations instead of just one actual input sequence and its
> own past context as in standard transformer models. Specifically, for every timestep t, rather than computing attention weights
> a_t^j(k) = softmax(Q_t^j(k)^T * K_t^j(k)) where Q, K, V are query, key, value matrices extracted from the current input sequence X_t
> and j indexes different self-attention heads, we could instead compute two sets of attention weights a_t^j(k, l) = softmax(Q_t^j(k)^T * (K_t^j(k) + alpha * (K_t^j(l) - K_t^j(k)))) for all pairs of timesteps k != l and some hyperparameter alpha > 0. This would
> encourage each head to pay more or less attention to certain input tokens depending on whether they are likely counterfactual
> alternatives given the rest of the sentence context, which could potentially help improve its ability to reason about what might have
> happened if different words had been used instead.
>
> 3. **Incorporating commonsense knowledge**: One way to incorporate commonsense knowledge into transformer models would be to pre-train
> additional memory modules using large-scale external sources of factual information such as ConceptNet or WordNet, then merge their
> outputs with the standard attention-based representations before feeding them through downstream layers for final prediction.
> Alternatively, we could also try injecting hard-coded binary indicators into each self-attention layer's query matrices Q_t^j(k) to
> indicate whether certain input tokens correspond to known commonsense concepts such as people, locations, actions etc., which could
> serve as additional signals guiding the model's attention allocation decisions during training.
>
> 4. **Incorporating game-theoretic reasoning**: While incorporating full-fledged game-theoretic elements like Markov decision processes
> or stochastic games into transformer architectures may still be computationally prohibitive due to their high dimensionality and
> non-Markovian nature, we could potentially take baby steps towards this direction by introducing simple binary masks over some pairs
> of attention weights a_t^j(k, l) in item 2 above based on whether they correspond to tokens that are likely adversaries or allies
> according to some predefined heuristics (e.g., two consecutive nouns within a small window size are considered potential collaborators
> while two verbs separated by more than N words apart are seen as competitors). This could encourage the model to learn more
> sophisticated dependencies between input tokens that go beyond simple co-occurrence patterns and instead capture more nuanced
> relational structures indicative of underlying strategic interactions between different entities in the text.
**Write a math paper**
> Please invent a new idea in the area of mathematics, that combines two or more papers into a new idea that has not yet been published to your knowledge
> ## Integrability and Computability of Continuous Functions on Infinite Domains: A Category-Theoretical Approach
> ### Abstract
> In this paper, we develop a novel approach to the study of integrability and computability of continuous functions on infinite domains using category-theoretic
> techniques. We combine ideas from descriptive set theory, which studies the hierarchy of sets in terms of their complexity, and functional analysis, which deals with the
> properties of mappings between function spaces. Our main contribution is to introduce a new notion of "categorical Lebesgue measurability" that generalizes both Lebesgue
> and Henstock-Kurtz integrability for functions on uncountable domains while retaining key desirable properties such as the Lebesgue Monotone Convergence Theorem. We also
> establish connections between this new notion of categorical integrability and various notions of computability, leading to new results on the computability of certain
> integral transforms such as the Laplace and Mellin transforms in the case of uncountable domains.
> ### 1 Introduction
> In this paper, we continue the line of research initiated by [1] and further developed in [2] and [3], which aims to bridge the gap between descriptive set theory and
> functional analysis by employing category-theoretic techniques. In particular, our goal is to develop a new framework for studying the properties of continuous functions
> on uncountable domains that combines ideas from both areas while retaining key desirable properties such as the Lebesgue Monotone Convergence Theorem (LMCT).
> ### 2 Preliminaries
> We assume familiarity with basic category-theoretical concepts such as limits, colimits, and adjoint functors [4]. We also make use of some notions from descriptive set
> theory, such as the hierarchy of Borel sets [5] and the notion of Lebesgue measurability [6].
> ### 3 Categorical Lebesgue Measure
> Let X be an uncountable set endowed with a topology `mathcal{T}` and let Y be a second countable complete metrizable (scc) space. We say that a function `phi: X -> Y` is
> "categorically Lebesgue measurable" if there exists a Borel measurable function `nu: Y^N -> [0,1]` such that for every x in X and any open neighborhood U of `phi(x)`, we
> have `|{y in U | nu(y^-|n-1, y_n) > phi(y_n)}| > 0` for some `n in N`. Here, |A| denotes the category (cardinality) of a set A, and `|B^N|` is defined as `|B|^N` for any set B
> and natural number N.
> ### 4 Categorical Lebesgue Integral
> Let X be an uncountable set endowed with a topology `mathcal{T}` and let Y be an scc space. Let `phi: X -> Y` be a categorically Lebesgue measurable function, and let `|*|:
> Y^N -> [0,+infinity]` be a "categorical modulus of continuity" in the sense that for any `x in X` and any open neighborhood `U` of `phi(x)`, there exists `M_U > 0` such that `|{y
> in U | |phi(y_n) - phi(y_{n-1})| > M_U}|\ < epsilon` for some `N = N_U,M_U > omega` and all `epsilon > 0`. Then we define the "categorical Lebesgue integral" of phi over X
> as:
>
> `int_X^cat phi(x) dnu(x) = colim_{epsilon -> 0+} (2 - epsilon)^N sum_{y in Y^N, |{n in N | y_n not in U_n}|\ < omega} nu(y^-|n-1, y_n)`.
> ### 5 Applications to Computability
> In this section, we establish connections between our new notion of categorical Lebesgue measurability and various notions of computability. Let X be an uncountable set
> endowed with a topology `mathcal{T}` and `let Y = R`. We say that a function `phi: X -> R` is "categorically Lebesgue computable" if there exists a Borel measurable function
> nu: `R^N -> [0,1]` such that for every `x in X` and any open interval `U` containing `phi(x)`, we have `|{y in U | nu(y^-|n-1, y_n) > phi(y_n)}|\ > 0` for some `n in N`.
> ### 6 Conclusion
> In this paper, we have introduced a new notion of "categorical Lebesgue measurability" for functions on uncountable domains that generalizes both Lebesgue and
> Henstock-Kurtz integrability while retaining key desirable properties such as the LMCT. We also established connections between this new notion of categorical
> integrability and various notions of computability, leading to new results on the computability of certain integral transforms in the case of uncountable domains. Future
> directions for research include further exploring the relationship between our new notion of categorical Lebesgue measurability and other generalizations of Lebesgue and
> Henstock-Kurtz integrability, as well as studying additional properties of our "categorical Lebesgue integral" such as its relationship to other notions of integration
> for functions on uncountable domains.
> ### References
> - [1] A. Borel. Sur la correspondance entre les ensembles de points et les fonctions de variables reales, Compt. R. Acad. Sci. Paris 148 (1909), 71-74.
> - [2] J. Bourbaki. Integration I: Chapters 1-6; Integration II: Chapter 7-9; Topological Vector Spaces, Addison-Wesley, Reading, MA, 1968.
> - [3] W. H. L. Cheney. Introduction to Analysis, Brooks/Cole, Belmont, CA, 2005.
> - [4] S. Mac Lane. Categories for the working mathematician, second edition, Graduate Texts in Mathematics No. 5, Springer-Verlag, New York-Heidelberg, 1971.
> - [5] K. P. Komogorov and A. V. Ulam. On a generalized form of the problem of P. F. Borel, Comb. Math. 1 (1948), 83-86.
> - [6] J. Dieudonné. Foundations of modern analysis, Academic Press, New York, 1960. |
ChrisWilson011016/5EkFvSHFscy6Noei5KNeKpBmER68usfWygiUHWFk2ATR4Ezx_vgg | ChrisWilson011016 | 2024-03-04T19:04:32Z | 463 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-29T13:03:53Z | Entry not found |
mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF | mradermacher | 2024-05-06T05:33:55Z | 463 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"samir-fama/SamirGPT-v1",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"KoboldAI/Mistral-7B-Erebus-v3",
"en",
"base_model:stevez80/ErebusNeuralSamir-7B-dare-ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-30T08:36:39Z | ---
base_model: stevez80/ErebusNeuralSamir-7B-dare-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- samir-fama/SamirGPT-v1
- mlabonne/NeuralHermes-2.5-Mistral-7B
- KoboldAI/Mistral-7B-Erebus-v3
---
## About
static quants of https://huggingface.co/stevez80/ErebusNeuralSamir-7B-dare-ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
peft-internal-testing/tiny-random-qwen-1.5-MoE | peft-internal-testing | 2024-04-12T11:07:24Z | 463 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-12T11:03:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/dolphin-2.9-llama3-8b-1m-GGUF-smashed | PrunaAI | 2024-05-02T04:59:06Z | 463 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-05-02T03:51:32Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the cognitivecomputations/dolphin-2.9-llama3-8b-1m model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/dolphin-2.9-llama3-8b-1m-GGUF-smashed and below it, a specific filename to download, such as: phi-2.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/dolphin-2.9-llama3-8b-1m-GGUF-smashed dolphin-2.9-llama3-8b-1m.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/dolphin-2.9-llama3-8b-1m-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/dolphin-2.9-llama3-8b-1m-GGUF-smashed dolphin-2.9-llama3-8b-1m.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m dolphin-2.9-llama3-8b-1m.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt\} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./dolphin-2.9-llama3-8b-1m.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./dolphin-2.9-llama3-8b-1m.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf | RichardErkhov | 2024-05-02T20:13:47Z | 463 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-02T18:48:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-4k-instruct - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3-mini-4k-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q2_K.gguf) | Q2_K | 1.32GB |
| [Phi-3-mini-4k-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Phi-3-mini-4k-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3-mini-4k-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3-mini-4k-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Phi-3-mini-4k-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q3_K.gguf) | Q3_K | 1.82GB |
| [Phi-3-mini-4k-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Phi-3-mini-4k-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Phi-3-mini-4k-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3-mini-4k-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3-mini-4k-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3-mini-4k-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3-mini-4k-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q4_K.gguf) | Q4_K | 2.23GB |
| [Phi-3-mini-4k-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Phi-3-mini-4k-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3-mini-4k-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3-mini-4k-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3-mini-4k-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q5_K.gguf) | Q5_K | 2.62GB |
| [Phi-3-mini-4k-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Phi-3-mini-4k-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 2.92GB |
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Tokenizer
Phi-3 Mini-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippets show how to get quickly started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
bartowski/granite-20b-code-instruct-GGUF | bartowski | 2024-05-24T08:02:48Z | 463 | 4 | transformers | [
"transformers",
"gguf",
"code",
"granite",
"text-generation",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"base_model:ibm-granite/granite-20b-code-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-21T03:08:03Z | ---
pipeline_tag: text-generation
base_model: ibm-granite/granite-20b-code-base
inference: true
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-instruct
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 60.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 53.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 58.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 44.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 49.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 29.9
veriefied: false
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of granite-20b-code-instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2940">b2940</a> for quantization.
Original model: https://huggingface.co/ibm-granite/granite-20b-code-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
System:
{system_prompt}
Question:
{prompt}
Answer:
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-20b-code-instruct-Q8_0.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q8_0.gguf) | Q8_0 | 21.48GB | Extremely high quality, generally unneeded but max available quant. |
| [granite-20b-code-instruct-Q6_K.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q6_K.gguf) | Q6_K | 16.63GB | Very high quality, near perfect, *recommended*. |
| [granite-20b-code-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q5_K_M.gguf) | Q5_K_M | 14.80GB | High quality, *recommended*. |
| [granite-20b-code-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q5_K_S.gguf) | Q5_K_S | 14.01GB | High quality, *recommended*. |
| [granite-20b-code-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q4_K_M.gguf) | Q4_K_M | 12.82GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [granite-20b-code-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q4_K_S.gguf) | Q4_K_S | 11.66GB | Slightly lower quality with more space savings, *recommended*. |
| [granite-20b-code-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ4_NL.gguf) | IQ4_NL | 11.55GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [granite-20b-code-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ4_XS.gguf) | IQ4_XS | 10.93GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [granite-20b-code-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q3_K_L.gguf) | Q3_K_L | 11.73GB | Lower quality but usable, good for low RAM availability. |
| [granite-20b-code-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q3_K_M.gguf) | Q3_K_M | 10.56GB | Even lower quality. |
| [granite-20b-code-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ3_M.gguf) | IQ3_M | 9.58GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [granite-20b-code-instruct-IQ3_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ3_S.gguf) | IQ3_S | 8.93GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [granite-20b-code-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q3_K_S.gguf) | Q3_K_S | 8.93GB | Low quality, not recommended. |
| [granite-20b-code-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ3_XS.gguf) | IQ3_XS | 8.65GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [granite-20b-code-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ3_XXS.gguf) | IQ3_XXS | 8.06GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [granite-20b-code-instruct-Q2_K.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-Q2_K.gguf) | Q2_K | 7.92GB | Very low quality but surprisingly usable. |
| [granite-20b-code-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ2_M.gguf) | IQ2_M | 7.05GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [granite-20b-code-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ2_S.gguf) | IQ2_S | 6.52GB | Very low quality, uses SOTA techniques to be usable. |
| [granite-20b-code-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ2_XS.gguf) | IQ2_XS | 6.15GB | Very low quality, uses SOTA techniques to be usable. |
| [granite-20b-code-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ2_XXS.gguf) | IQ2_XXS | 5.57GB | Lower quality, uses SOTA techniques to be usable. |
| [granite-20b-code-instruct-IQ1_M.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ1_M.gguf) | IQ1_M | 4.91GB | Extremely low quality, *not* recommended. |
| [granite-20b-code-instruct-IQ1_S.gguf](https://huggingface.co/bartowski/granite-20b-code-instruct-GGUF/blob/main/granite-20b-code-instruct-IQ1_S.gguf) | IQ1_S | 4.51GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/granite-20b-code-instruct-GGUF --include "granite-20b-code-instruct-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/granite-20b-code-instruct-GGUF --include "granite-20b-code-instruct-Q8_0.gguf/*" --local-dir granite-20b-code-instruct-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (granite-20b-code-instruct-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf | RichardErkhov | 2024-05-31T00:07:22Z | 463 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T21:02:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Instameta-Mistral-v0.1-7b - GGUF
- Model creator: https://huggingface.co/cldersaienril/
- Original model: https://huggingface.co/cldersaienril/Instameta-Mistral-v0.1-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Instameta-Mistral-v0.1-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [Instameta-Mistral-v0.1-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Instameta-Mistral-v0.1-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Instameta-Mistral-v0.1-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Instameta-Mistral-v0.1-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Instameta-Mistral-v0.1-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Instameta-Mistral-v0.1-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Instameta-Mistral-v0.1-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Instameta-Mistral-v0.1-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Instameta-Mistral-v0.1-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [Instameta-Mistral-v0.1-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Instameta-Mistral-v0.1-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Instameta-Mistral-v0.1-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Instameta-Mistral-v0.1-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Instameta-Mistral-v0.1-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [Instameta-Mistral-v0.1-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Instameta-Mistral-v0.1-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Instameta-Mistral-v0.1-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [Instameta-Mistral-v0.1-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- ehartford/dolphin
language:
- en
- zh
---
## Dataset
Dataset: Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
Private Dataset of gpt4/gpt3.5 dialogues in Chinese to increase its multilingual abilities.
# Training
20 hours on 6 x A800s
|
mradermacher/archangel_sft-kto_llama30b-GGUF | mradermacher | 2024-05-31T19:08:10Z | 463 | 2 | transformers | [
"transformers",
"gguf",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"base_model:ContextualAI/archangel_sft-kto_llama30b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T17:05:07Z | ---
base_model: ContextualAI/archangel_sft-kto_llama30b
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ContextualAI/archangel_sft-kto_llama30b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/archangel_sft-kto_llama30b-GGUF/resolve/main/archangel_sft-kto_llama30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jawerty/meha_html_model | jawerty | 2024-06-03T06:02:09Z | 463 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-03T03:47:10Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** jawerty
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/starcoder2-15b-instruct-rpgle-GGUF | mradermacher | 2024-06-19T12:58:54Z | 463 | 2 | transformers | [
"transformers",
"gguf",
"RPG",
"RPGELE",
"en",
"base_model:Ticlazau/starcoder2-15b-instruct-rpgle",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-04T04:32:49Z | ---
base_model: Ticlazau/starcoder2-15b-instruct-rpgle
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- RPG
- RPGELE
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ticlazau/starcoder2-15b-instruct-rpgle
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q2_K.gguf) | Q2_K | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.IQ3_XS.gguf) | IQ3_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q3_K_S.gguf) | Q3_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.IQ3_M.gguf) | IQ3_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q3_K_M.gguf) | Q3_K_M | 8.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.IQ4_XS.gguf) | IQ4_XS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q3_K_L.gguf) | Q3_K_L | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q4_K_M.gguf) | Q4_K_M | 10.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q5_K_S.gguf) | Q5_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q5_K_M.gguf) | Q5_K_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q6_K.gguf) | Q6_K | 13.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-rpgle-GGUF/resolve/main/starcoder2-15b-instruct-rpgle.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF | Akshay1996 | 2024-06-20T21:48:34Z | 463 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
]
| text-generation | 2024-06-20T21:48:22Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Akshay1996/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
```
|
farikaw599/Ggg | farikaw599 | 2024-06-22T18:52:59Z | 463 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-22T18:44:37Z | Entry not found |
timm/resnetv2_50x1_bit.goog_in21k_ft_in1k | timm | 2024-02-10T23:35:28Z | 462 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:1912.11370",
"arxiv:1603.05027",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-22T20:57:54Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for resnetv2_50x1_bit.goog_in21k_ft_in1k
A ResNet-V2-BiT (Big Transfer w/ pre-activation ResNet) image classification model. Pretrained on ImageNet-21k and fine-tuned on ImageNet-1k by paper authors.
This model uses:
* Group Normalization (GN) in combination with Weight Standardization (WS) instead of Batch Normalization (BN)..
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.5
- GMACs: 16.6
- Activations (M): 44.5
- Image size: 448 x 448
- **Papers:**
- Big Transfer (BiT): General Visual Representation Learning: https://arxiv.org/abs/1912.11370
- Identity Mappings in Deep Residual Networks: https://arxiv.org/abs/1603.05027
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/big_transfer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetv2_50x1_bit.goog_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_50x1_bit.goog_in21k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 224, 224])
# torch.Size([1, 256, 112, 112])
# torch.Size([1, 512, 56, 56])
# torch.Size([1, 1024, 28, 28])
# torch.Size([1, 2048, 14, 14])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_50x1_bit.goog_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 14, 14) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Kolesnikov2019BigT,
title={Big Transfer (BiT): General Visual Representation Learning},
author={Alexander Kolesnikov and Lucas Beyer and Xiaohua Zhai and Joan Puigcerver and Jessica Yung and Sylvain Gelly and Neil Houlsby},
booktitle={European Conference on Computer Vision},
year={2019}
}
```
```bibtex
@article{He2016,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Identity Mappings in Deep Residual Networks},
journal = {arXiv preprint arXiv:1603.05027},
year = {2016}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
kyledam/gaixinhyc | kyledam | 2024-03-17T09:02:27Z | 462 | 1 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-04-14T06:57:33Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### gaixinhyc Dreambooth model trained by kyledam with TheLastBen's fast-DreamBooth notebook
|
badmatr11x/roberta-base-emotions-detection-from-text | badmatr11x | 2023-07-27T17:15:42Z | 462 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-15T16:56:14Z | ---
license: mit
widget:
- text: With tears of joy streaming down her cheeks, she embraced her long-lost brother after years of separation.
example_title: Joy
- text: As the orchestra played the final note, the audience erupted into thunderous applause, filling the concert hall with joy.
example_title: Joy
- text: The old man sat alone on the park bench, reminiscing about the love he had lost, his eyes filled with sadness.
example_title: Sadness
- text: The news of her best friend moving to a distant country left her feeling a profound sadness and emptiness.
example_title: Sadness
- text: The scientific research paper discussed complex concepts that were beyond the scope of a laymans understanding.
example_title: Neutral
- text: The documentary provided an objective view of the historical events, presenting facts without any bias.
example_title: Neutral
- text: He clenched his fists tightly, trying to control the surge of anger when he heard the offensive remarks.
example_title: Anger
- text: The unfair treatment at work ignited a simmering anger within him, leading him to consider confronting the management.
example_title: Anger
- text: As the magician pulled a rabbit out of an empty hat, the children gasped in amazement and surprise.
example_title: Surprise
- text: He opened the box to find a rare and valuable antique inside, leaving him speechless with surprise.
example_title: Surprise
- text: The moldy and rotting food in the refrigerator evoked a sense of disgust, leading her to clean it immediately.
example_title: Disgust
- text: The movie's graphic scenes of violence and gore left many viewers feeling a sense of disgust and unease.
example_title: Disgust
- text: As the storm raged outside, the little child clung to their parents, seeking comfort from the fear of thunder.
example_title: Fear
- text: The horror movie was so terrifying that some viewers had to cover their eyes in fear, unable to bear the suspense.
example_title: Fear
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
|
Yntec/Dreamscape | Yntec | 2023-09-01T17:48:15Z | 462 | 1 | diffusers | [
"diffusers",
"safetensors",
"fantasy",
"art",
"realistic",
"artistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Lykon",
"DarkAgent",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-01T12:31:01Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- fantasy
- art
- realistic
- artistic
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Lykon
- DarkAgent
inference: true
---
A mix of NeverEndingDream 1.22 and Dreamscapes & Dragonfire 2 to bring the best of both worlds!
Comparison:

(Click for larger)
Sample and prompt:

magazine Pretty CUTE LITTLE female. Paint bucket pouring paint in air on top of planet earth vector logo style. Ponytail By angra mainyu. michael germash, The lost souvenir by andreas rocha, jean deville, yakuza and very beautiful, mystical pinup. Beautiful detailed by KEY VISUAL. beautiful face, 4k dslr photo, Favela fungus cathedral coaster hive, palace in
Original Pages:
https://civitai.com/models/50294/dreamscapes-and-dragonfire-new-v20-semi-realism-fantasy-model
https://huggingface.co/Lykon/NeverEnding-Dream
# Recipe
-Add Difference 1.0-
Primary model:
NeverEndingDream 1.22
Secondary model:
NeverEndingDream 1.22
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
NeverEndingDreamEssence
-Weighted Sum 0.70-
Primary model:
NeverEndingDreamEssence
Secondary model:
Dreamscapes & Dragonfire 2
Output Model:
Dreamscape |
TheBloke/hippogriff-30b-chat-GGUF | TheBloke | 2023-09-27T12:53:36Z | 462 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:OpenAssistant/oasst1",
"base_model:openaccess-ai-collective/hippogriff-30b-chat",
"license:other",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-20T02:33:13Z | ---
language:
- en
license: other
library_name: transformers
datasets:
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- OpenAssistant/oasst1
model_name: Hippogriff 30B Chat
base_model: openaccess-ai-collective/hippogriff-30b-chat
inference: false
model_creator: Open Access AI Collective
model_type: llama
pipeline_tag: text-generation
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Hippogriff 30B Chat - GGUF
- Model creator: [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenAccess AI Collective's Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/hippogriff-30b-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF)
* [Open Access AI Collective's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [hippogriff-30b.Q2_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [hippogriff-30b.Q3_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [hippogriff-30b.Q3_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [hippogriff-30b.Q3_K_L.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [hippogriff-30b.Q4_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [hippogriff-30b.Q4_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [hippogriff-30b.Q4_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [hippogriff-30b.Q5_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [hippogriff-30b.Q5_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [hippogriff-30b.Q5_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [hippogriff-30b.Q6_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [hippogriff-30b.Q8_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/hippogriff-30b-chat-GGUF and below it, a specific filename to download, such as: hippogriff-30b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m hippogriff-30b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/hippogriff-30b-chat-GGUF", model_file="hippogriff-30b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenAccess AI Collective's Hippogriff 30B Chat
# Hippogriff 30B Chat
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using `###` in favor of
chat only style prompts using `USER:`,`ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Training Datasets
Hippogriff 30B Chat is a Llama 30B model fine-tuned on the following datasets
- OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco
- synthetic jokes generation and explanation derived from reddit jokes dataset
- synthetic prose generation and rewriting self-chat
- Q&A based on provided context
- self instruct augmented logic_inference_oa
- de-duped pygmalion dataset, filtered down to RP data, cleaned, english only, 25%
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations w 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) synthetic self chat dataset derived from about 1000 rows
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
Hippogriff differs from Manticore as it does not use the WizardLM, WizardVicuna, Alpaca, or ShareGPT datasets.
# Initial thoughts
Hippogriff follows instructions pretty well. It still struggles with anything that has to do with math. Prose is much better than manticore. Handles in-context QA much better.
# Shoutouts
Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models are more accessible to all, 0x000011b for the RP dataset, and the support from everyone in our AI Discord community.
# Demo
A Spaces demo is not provided for this release due to 30B models currently not fitting in VRAM.
## Build
Hippogriff was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- 1.5 epochs taking approximately 12 hours. No further epochs will be released.
## Bias, Risks, and Limitations
Hippogriff has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Hippogriff was fine-tuned from the base model LlaMa 30B, please refer to its model card's Limitations Section for relevant information.
The core functionality of Llama is taking a string of text and predicting the next token. While language models are widely used for tasks other than this,
there are a lot of unknowns with this work. When prompting Hippogriff it is important to remember that the statistically most likely next token is often
not the token that produces the most "accurate" text. Never depend upon Hippogriff to produce factually accurate output.
Llama was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case Hippogriff may produce
socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how Hippogriff will respond to particular prompts and offensive content may occur without warning.
We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Examples
TBD
```
USER: write an original short story of at least 300 words in the style of Zora Neale Hurston. Incorporate the following literary devices: Euphemism and Foreshadowing.
ASSISTANT: Once upon a time, there was a black girl named Rose, whose skin shined like a pearl beneath the hot sun. She lived with her mama and daddy on their farm, where they grew crops to feed the family and neighbors. On this particular day, Rose felt a tinge of excitement as she watched her parents work diligently in their fields. 'It's almost harvest season!' she thought, imagining the delicious meals that awaited them. But, soon after, it became apparent that something sinister lurked amidst the rows of fresh produce.
As her father hoed around a vine-covered cucumber plant, he unearthed a peculiar rock. It was the size of an acorn but had a strange shape, unlike anything he or his wife had ever seen. "What have we here?" He said, turning to his wife with curiosity and excitement. She examined the rock for herself and declared, "Why, it looks like a piece of coal!" But as both mother and daughter looked closer, it seemed to be far more than a mere piece of coal. A sense of unease crept up Rose's spine when she noticed that the rock resembled the dark face of a baby in the womb. As Rose tried to suppress a scream, her mother gasped, "Sweet Jesus! Let's bury this thing before anyone sees!" And so, they quickly hid the stone back underground, hoping that nobody would ever discover their secret.
```
```
USER: tell me 5 words that start with "ex" and then the sum of 33 and 77
ASSISTANT: Excite, exotic, exhibit, excitable, excessive
```
```
USER: sally (a girl) has 3 brothers. Each brother has 2 sisters. how many sisters does sally have?
ASSISTANT: Sally has 3 brothers, and each brother has 2 sisters. To find out how many sisters Sally has in total, we need to multiply the number of bro
thers by the number of sisters per brother, then add 1 to account for Sally herself. So, 3 * 2 + 1 = 6 sisters in total
```
<!-- original-model-card end -->
|
stablediffusionapi/photon | stablediffusionapi | 2023-09-21T06:42:27Z | 462 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-21T06:41:03Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# photon API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "photon"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/photon)
Model link: [View model](https://stablediffusionapi.com/models/photon)
Credits: [View credits](https://civitai.com/?query=photon)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "photon",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
kirp/TinyLlama-1.1B-Chat-v0.2-gguf | kirp | 2023-09-27T18:14:27Z | 462 | 9 | null | [
"gguf",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-09-23T15:48:05Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
quantized_by:
- kirp
pipeline_tag: text-generation
---
# 🔥 Good news
You can download the model from [PY007](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.2-GGUF) without any change to llama.cpp.
Here is a [demo](https://huggingface.co/spaces/kirp/tinyllama-chat).
# Pay attention
To use this model, you need to change the rope part of llama.cpp/llama.cpp. (From mode 0 to mode 2 rope)
Change 2568 and 2572 line from
```
struct ggml_tensor * Kcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpk, n_embd_head, n_head_kv, N), n_past, n_embd_head, 0, 0, freq_base, freq_scale);
struct ggml_tensor * Qcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpq, n_embd_head, n_head, N), n_past, n_embd_head, 0, 0, freq_base, freq_scale);
```
to
```
struct ggml_tensor * Kcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpk, n_embd_head, n_head_kv, N), n_past, n_embd_head, 2, 0, freq_base, freq_scale);
struct ggml_tensor * Qcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpq, n_embd_head, n_head, N), n_past, n_embd_head, 2, 0, freq_base, freq_scale);
```
# TinyLlama-1.1B Chat v0.2 GGUF
- Model creator: [PY007](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Chat v0.2](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.2)
- Github: [TinyLlama](https://github.com/jzhang38/TinyLlama)
## Description
This repo contains GGUF format model files for [PY007's TinyLlama 1.1B Chat v0.2](PY007/TinyLlama-1.1B-Chat-v0.2)
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is a extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here are a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- prompt-template start -->
## Prompt template: TinyLlama chat
```
<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n
```
Example:
```
<|im_start|>user
Explain huggingface.<|im_end|>
<|im_start|>assistant
Hugging Face is a platform for building and hosting open-source applications. It provides a simple interface for developers to build, deploy, and host any application on the web. Hugging Face offers a wide range of services, including:
1. API Gateway: This service allows developers to create REST APIs that can be accessed by other Hugging Face services.
2. Functions: This service provides functions that can be used for processing data and making predictions.
3. Transformers: These are a set of algorithms that allow developers to process large amounts of text data and generate new content.
4. Datasets: Hugging Face provides datasets that can be used to train models, evaluate them, and make predictions.
5. CLI: This service provides a command-line interface for developers to build, deploy, and manage their applications.
6. Documentation: This service provides documentation for the different services and features available on Hugging Face's platform.
7. Community: The Hugging Face community is made up of developers, data scientists, and other experts who can provide support and resources for using and building on Hugging Face's platforms.<|im_end|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
```
./main -m ./models/ggml-model-q4_k_m.gguf \
-n 512 --color --temp 0 -e \
-p "<|im_start|>user\nExplain huggingface.<|im_end|>\n<|im_start|>assistant\n"
``` |
TheBloke/airoboros-c34b-2.2.1-GGUF | TheBloke | 2023-09-27T12:54:05Z | 462 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-2.2.1",
"base_model:jondurbin/airoboros-c34b-2.2.1",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-23T17:45:32Z | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2.1
model_name: Airoboros C34B 2.2.1
base_model: jondurbin/airoboros-c34b-2.2.1
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros C34B 2.2.1 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros C34B 2.2.1](https://huggingface.co/jondurbin/airoboros-c34b-2.2.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros C34B 2.2.1](https://huggingface.co/jondurbin/airoboros-c34b-2.2.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-c34b-2.2.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-c34b-2.2.1.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-c34b-2.2.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [airoboros-c34b-2.2.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [airoboros-c34b-2.2.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [airoboros-c34b-2.2.1.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-c34b-2.2.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [airoboros-c34b-2.2.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [airoboros-c34b-2.2.1.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-c34b-2.2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [airoboros-c34b-2.2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [airoboros-c34b-2.2.1.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [airoboros-c34b-2.2.1.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-c34b-2.2.1-GGUF/blob/main/airoboros-c34b-2.2.1.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-c34b-2.2.1-GGUF and below it, a specific filename to download, such as: airoboros-c34b-2.2.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-c34b-2.2.1-GGUF airoboros-c34b-2.2.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-c34b-2.2.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-c34b-2.2.1-GGUF airoboros-c34b-2.2.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-c34b-2.2.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-c34b-2.2.1-GGUF", model_file="airoboros-c34b-2.2.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros C34B 2.2.1
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
This is essentially a minor "fix" branch of [airoboros-c34b-2.2](https://hf.co/jondurbin/airoboros-c34b-2.2) with a updates, primarily:
- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
- [(less aggressive) de-censoring](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#de-censoring)
- 5 epochs instead of 3
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-c34b-2.2.1/runs/07b08z7m?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
Rijgersberg/GEITje-7B-chat-v2 | Rijgersberg | 2024-04-20T10:54:59Z | 462 | 15 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"GEITje",
"conversational",
"nl",
"dataset:Rijgersberg/no_robots_nl",
"dataset:Rijgersberg/ultrachat_10k_nl",
"dataset:BramVanroy/dutch_chat_datasets",
"base_model:Rijgersberg/GEITje-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-16T18:37:40Z | ---
license: apache-2.0
base_model: Rijgersberg/GEITje-7B
tags:
- generated_from_trainer
- GEITje
- conversational
model-index:
- name: GEITje-7B-chat-v2
results: []
datasets:
- Rijgersberg/no_robots_nl
- Rijgersberg/ultrachat_10k_nl
- BramVanroy/dutch_chat_datasets
language:
- nl
pipeline_tag: text-generation
---
# GEITje-7B-chat-v2
**🤖️ Try the chat model in [🤗 Hugging Face Spaces](https://huggingface.co/spaces/Rijgersberg/GEITje-7B-chat)!**
# GEITje-7B
GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B.
It has been further trained on 10 billion tokens of Dutch text.
This has improved its Dutch language skills and increased its knowledge of Dutch topics.
## Model description
### _Mistral_ – Base Model
GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/).
It's a large open language model with 7 billion parameters,
trained by [Mistral AI](https://mistral.ai).
According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on.
Mistral 7B has been released under the Apache 2.0 open source license.
### _GEITje_ – Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus.
It is a so-called _full-parameter finetune_:
performed on all parameters.
It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune.
Like Mistral, GEITje has a _context length_ of 8,192 tokens.
### _GEITje-chat_ – Finetuned for Dialogues
As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2.
They can follow instructions, answer questions, and hold dialogues on a variety of topics.
## More info
Read more about GEITje-chat in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
## Checkpoints
An intermediate checkpoint is available in the `checkpoints` branch.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7832 | 0.05 | 609 | 0.8844 |
| 0.6904 | 0.1 | 1218 | 0.8698 |
| 0.8195 | 0.15 | 1827 | 0.8583 |
| 0.7463 | 0.2 | 2436 | 0.8475 |
| 0.6739 | 0.25 | 3045 | 0.8395 |
| 0.7604 | 0.3 | 3654 | 0.8332 |
| 0.8024 | 0.35 | 4263 | 0.8261 |
| 0.6881 | 0.4 | 4872 | 0.8203 |
| 0.6466 | 0.45 | 5481 | 0.8167 |
| 0.7042 | 0.5 | 6090 | 0.8121 |
| 0.702 | 0.55 | 6699 | 0.8081 |
| 0.7255 | 0.6 | 7308 | 0.8054 |
| 0.7558 | 0.65 | 7917 | 0.8036 |
| 0.7587 | 0.7 | 8526 | 0.8022 |
| 0.9217 | 0.75 | 9135 | 0.8016 |
| 0.6938 | 0.8 | 9744 | 0.8011 |
| 0.6962 | 0.85 | 10353 | 0.8011 |
| 0.664 | 0.9 | 10962 | 0.8011 |
| 0.6544 | 0.95 | 11571 | 0.8011 |
| 0.6782 | 1.0 | 12180 | 0.8011 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
Q-bert/Mamba-1B | Q-bert | 2024-03-08T06:16:31Z | 462 | 27 | transformers | [
"transformers",
"pytorch",
"mamba",
"text-generation",
"mamba-hf",
"custom_code",
"en",
"arxiv:2312.00752",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-12-23T08:13:58Z | ---
license: apache-2.0
language:
- en
tags:
- mamba-hf
---
# Mamba-1B
<img src="https://cdn-uploads.huggingface.co/production/uploads/63da3d7ae697e5898cb86854/A3BYIH-q7G5vz4NlsPlGJ.jpeg" width="300" height="300" alt="mamba-hf">
Mamba Models with hf_integration.
For modeling codes: [**mamba-hf**](https://github.com/LegallyCoder/mamba-hf)
# Usage:
```python
from transformers import AutoModelForCausalLM , AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('Q-bert/Mamba-1B', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Q-bert/Mamba-1B')
text = "Hi"
input_ids = tokenizer.encode(text, return_tensors="pt")
output = model.generate(input_ids, max_length=20, num_beams=5, no_repeat_ngram_size=2)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
> Hi, I'm looking for a new job. I've been working at a company for about a year now.
# For Training:
```python
from transformers import Trainer ,TrainingArguments
import torch
import os
class MambaTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
input_ids = inputs.pop("input_ids")
lm_logits = model(input_ids)[0]
labels = input_ids.to(lm_logits.device)
shift_logits = lm_logits[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
loss_fct = torch.nn.CrossEntropyLoss()
lm_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), labels.view(-1))
return lm_loss
```
You must use this class for training. And fp16 must be **False**.
# Credits:
https://huggingface.co/state-spaces
Special thanks to Albert Gu and Tri Dao for their articles. (https://arxiv.org/abs/2312.00752)
|
rinna/japanese-wav2vec2-base | rinna | 2024-04-03T09:27:23Z | 462 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"feature-extraction",
"ja",
"dataset:reazon-research/reazonspeech",
"arxiv:2404.01657",
"license:apache-2.0",
"region:us"
]
| feature-extraction | 2024-03-06T01:07:56Z | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
language: ja
license: apache-2.0
datasets: reazon-research/reazonspeech
pipeline_tag: feature-extraction
inference: false
tags:
- wav2vec2
- speech
---
# `rinna/japanese-wav2vec2-base`

# Overview
This is a Japanese wav2vec 2.0 Base model trained by [rinna Co., Ltd.](https://rinna.co.jp/)
* **Model summary**
The model architecture is the same as the [original wav2vec 2.0 Base model](https://huggingface.co/facebook/wav2vec2-base), which contains 12 transformer layers with 12 attention heads.
The model was trained using code from the [official repository](https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec), and the detailed training configuration can be found in the same repository and the [original paper](https://proceedings.neurips.cc/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html).
* **Training**
The model was trained on approximately 19,000 hours of following Japanese speech corpus ReazonSpeech v1.
- [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech)
* **Contributors**
- [Yukiya Hono](https://huggingface.co/yky-h)
- [Kentaro Mitsui](https://huggingface.co/Kentaro321)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# How to use the model
```python
import soundfile as sf
from transformers import AutoFeatureExtractor, AutoModel
model_name = "rinna/japanese-wav2vec2-base"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()
raw_speech_16kHz, sr = sf.read(audio_file)
inputs = feature_extractor(
raw_speech_16kHz,
return_tensors="pt",
sampling_rate=sr,
)
outputs = model(**inputs)
print(f"Input: {inputs.input_values.size()}") # [1, #samples]
print(f"Output: {outputs.last_hidden_state.size()}") # [1, #frames, 768]
```
A fairseq checkpoint file can also be available [here](https://huggingface.co/rinna/japanese-wav2vec2-base/tree/main/fairseq).
---
# How to cite
```bibtex
@misc{rinna-japanese-wav2vec2-base,
title = {rinna/japanese-wav2vec2-base},
author={Hono, Yukiya and Mitsui, Kentaro and Sawada, Kei}
url = {https://huggingface.co/rinna/japanese-wav2vec2-base},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
```
---
# References
```bibtex
@inproceedings{baevski2020wav2vec,
title={wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
author={Baevski, Alexei and Zhou, Yuhao and Mohamed, Abdelrahman and Auli, Michael},
booktitle={Advances in Neural Information Processing Systems},
volume={33},
pages={12449--12460},
year={2020},
url={https://proceedings.neurips.cc/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html}
}
```
---
# License
[The Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0)
|
meta-llama/CodeLlama-70b-hf | meta-llama | 2024-03-14T18:41:15Z | 462 | 14 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-13T20:22:50Z | ---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
USE POLICY
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model:
[github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model:
[developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns:
[facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- code
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the base 70B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) |
| 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) |
| 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) |
| 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) |
## Model Use
To use this model, please make sure to install `transformers`:
```bash
pip install transformers accelerate
```
Model capabilities:
- [x] Code completion.
- [ ] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
## Model Details
*Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in four model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.
**This repository contains the base version of the 70B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. It was fine-tuned with up to 16k tokens and supports up to 100k tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
|
mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF | mradermacher | 2024-05-06T05:46:37Z | 462 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"en",
"base_model:00000-X/Dolphin-2.6-FC_Hermes-2-Pro",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T18:45:16Z | ---
base_model: 00000-X/Dolphin-2.6-FC_Hermes-2-Pro
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser
- NousResearch/Hermes-2-Pro-Mistral-7B
---
## About
static quants of https://huggingface.co/00000-X/Dolphin-2.6-FC_Hermes-2-Pro
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/FNCARLplus-7b-GGUF | mradermacher | 2024-05-06T05:13:36Z | 462 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jambroz/FNCARLplus-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-06T02:16:58Z | ---
base_model: jambroz/FNCARLplus-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jambroz/FNCARLplus-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MermaidMixtral-3x7b-GGUF | mradermacher | 2024-05-06T05:11:50Z | 462 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/MermaidMixtral-3x7b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-06T17:23:40Z | ---
base_model: TroyDoesAI/MermaidMixtral-3x7b
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/MermaidMixtral-3x7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q5_K_S.gguf) | Q5_K_S | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Monor/Llama-3-8B-Instruct-Gradient-1048k-gguf | Monor | 2024-05-05T16:40:45Z | 462 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-01T14:40:45Z | ---
license: apache-2.0
---
## Introduce
Quantizing the [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
|
bartowski/Llama3-ChatQA-1.5-70B-GGUF | bartowski | 2024-05-04T07:17:22Z | 462 | 9 | null | [
"gguf",
"nvidia",
"chatqa-1.5",
"chatqa",
"llama-3",
"pytorch",
"text-generation",
"en",
"license:llama3",
"region:us"
]
| text-generation | 2024-05-04T00:55:33Z | ---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Llama3-ChatQA-1.5-70B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama3-ChatQA-1.5-70B-Q8_0.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/tree/main/Llama3-ChatQA-1.5-70B-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama3-ChatQA-1.5-70B-Q6_K.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/tree/main/Llama3-ChatQA-1.5-70B-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
| [Llama3-ChatQA-1.5-70B-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [Llama3-ChatQA-1.5-70B-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. |
| [Llama3-ChatQA-1.5-70B-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama3-ChatQA-1.5-70B-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama3-ChatQA-1.5-70B-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama3-ChatQA-1.5-70B-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama3-ChatQA-1.5-70B-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. |
| [Llama3-ChatQA-1.5-70B-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [Llama3-ChatQA-1.5-70B-IQ3_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama3-ChatQA-1.5-70B-IQ3_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama3-ChatQA-1.5-70B-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [Llama3-ChatQA-1.5-70B-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama3-ChatQA-1.5-70B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama3-ChatQA-1.5-70B-Q2_K.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [Llama3-ChatQA-1.5-70B-IQ2_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama3-ChatQA-1.5-70B-IQ2_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama3-ChatQA-1.5-70B-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama3-ChatQA-1.5-70B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama3-ChatQA-1.5-70B-IQ1_M.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
| [Llama3-ChatQA-1.5-70B-IQ1_S.gguf](https://huggingface.co/bartowski/Llama3-ChatQA-1.5-70B-GGUF/blob/main/Llama3-ChatQA-1.5-70B-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama3-ChatQA-1.5-70B-GGUF --include "Llama3-ChatQA-1.5-70B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama3-ChatQA-1.5-70B-GGUF --include "Llama3-ChatQA-1.5-70B-Q8_0.gguf/*" --local-dir Llama3-ChatQA-1.5-70B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Llama3-ChatQA-1.5-70B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
KevSun/Engessay_grading_ML | KevSun | 2024-06-05T11:36:54Z | 462 | 9 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2406.01198",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-08T10:59:33Z | ---
license: mit
---
This model is primarily designed for the **automatic grading of English essays**, particularly those written by second language (L2) learners.
The training dataset used is the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) Corpus.
This freely available resource comprises approximately 6,500 writing composition samples from English language learners,
each scored for overall holistic language proficiency as well as analytic scores pertaining to cohesion, syntax, vocabulary,
phraseology, grammar, and conventions. The scores were obtained through assessments by a number of professional English teachers
adhering to rigorous procedures. The training dataset guarantees that our model acuqires high practicality and accuracy, closely emulating professional grading standards.
The model's performance on the test dataset, which includes around 980 English essays,
is summarized by the following metrics: 'mean accuracy'= 0.91 and 'mean f1 score' = 0.9, mean Quadratic Weighted Kappa (QWK) =0.85.
Upon inputting an essay, the model outputs six scores corresponding to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. Each score ranges from 1 to 5, with higher scores indicating greater proficiency within the essay. These dimensions collectively assess the quality of the input essay from multiple perspectives. The model serves as a valuable tool for EFL teachers and researchers, and it is also beneficial for English L2 learners and parents for self-evaluating their composition skills.
Please **cite** the following paper if you use this model:
```
@article{sun2024automatic,
title={Automatic Essay Multi-dimensional Scoring with Fine-tuning and Multiple Regression},
author={Kun Sun and Rong Wang},
year={2024},
journal={ArXiv},
url={https://arxiv.org/abs/2406.01198}
}
```
To test the model, run the following code or paste your essay into the API interface:
1) Please use the following Python code if you want to get the ouput values ranging from **1 to 5**.
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model = AutoModelForSequenceClassification.from_pretrained("Kevintu/Engessay_grading_ML")
tokenizer = AutoTokenizer.from_pretrained("KevSun/Engessay_grading_ML")
new_text = "The English Language Learner Insight, Proficiency and Skills Evaluation (ELLIPSE) Corpus is a freely available corpus of ~6,500 ELL writing samples that have been scored for overall holistic language proficiency as well as analytic proficiency scores related to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. In addition, the ELLIPSE corpus provides individual and demographic information for the ELL writers in the corpus including economic status, gender, grade level (8-12), and race/ethnicity. The corpus provides language proficiency scores for individual writers and was developed to advance research in corpus and NLP approaches to assess overall and more fine-grained features of proficiency."
# Define the path to your text file
#file_path = 'path/to/yourfile.txt'
# Read the content of the file
#with open(file_path, 'r', encoding='utf-8') as file:
# new_text = file.read()
encoded_input = tokenizer(new_text, return_tensors='pt', padding=True, truncation=True, max_length=64)
model.eval()
# Perform the prediction
with torch.no_grad():
outputs = model(**encoded_input)
predictions = outputs.logits.squeeze()
predicted_scores = predictions.numpy()
item_names = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
# Scale predictions from the raw output to the range [1, 5]
scaled_scores = 1 + 4 * (predicted_scores - np.min(predicted_scores)) / (np.max(predicted_scores) - np.min(predicted_scores))
# Round scores to the nearest 0.5
rounded_scores = np.round(scaled_scores * 2) / 2
for item, score in zip(item_names, rounded_scores):
print(f"{item}: {score:.1f}")
# Example output:
# cohesion: 3.5
# syntax: 3.5
# vocabulary: 4.0
# phraseology: 4.0
# grammar: 4.0
# conventions: 3.5
```
2) However, implement the following code if you expect to obtain the output values between **1 to 10**.
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model = AutoModelForSequenceClassification.from_pretrained("Kevintu/Engessay_grading_ML")
tokenizer = AutoTokenizer.from_pretrained("KevSun/Engessay_grading_ML")
new_text = "The English Language Learner Insight, Proficiency and Skills Evaluation (ELLIPSE) Corpus is a freely available corpus of ~6,500 ELL writing samples that have been scored for overall holistic language proficiency as well as analytic proficiency scores related to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. In addition, the ELLIPSE corpus provides individual and demographic information for the ELL writers in the corpus including economic status, gender, grade level (8-12), and race/ethnicity. The corpus provides language proficiency scores for individual writers and was developed to advance research in corpus and NLP approaches to assess overall and more fine-grained features of proficiency."
encoded_input = tokenizer(new_text, return_tensors='pt', padding=True, truncation=True, max_length=64)
model.eval()
with torch.no_grad():
outputs = model(**encoded_input)
predictions = outputs.logits.squeeze()
predicted_scores = predictions.numpy() # Convert to numpy array
item_names = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
# Scale predictions from 1 to 10 and round to the nearest 0.5
scaled_scores = 2.25 * predicted_scores - 1.25
rounded_scores = [round(score * 2) / 2 for score in scaled_scores] # Round to nearest 0.5
for item, score in zip(item_names, rounded_scores):
print(f"{item}: {score:.1f}")
# Example output:
# cohesion: 6.5
# syntax: 7.0
# vocabulary: 7.5
# phraseology: 7.5
# grammar: 7.5
# conventions: 7.0
```
**Examples**:
```
# the first example (A1 level)
new_text ="Dear Mauro, Thank you for agreeing to take a care of my house and my pets in my absence. This is my daily routine. Every day I water the plants, I walk the my dog in the morning and in the evening. I feed food it twice a day, I check water's dog twice a week. I take out trash every Friday. I sweep the floor and clean house on Monday and on Wednesday. In your free time you can watch TV and play video games. In the fridge I left coca cola and ice-cream for you Have a nice week. "
##ouput
cohesion: 5.0
syntax: 5.0
vocabulary: 5.5
phraseology: 5.0
grammar: 5.0
conventions: 6.0
# the second example (C1 level)
new_text = " Dear Mr. Tromps It was so good to hear from you and your group of international buyers are visiting our company next month. And in response to your question, I would like to recommend some suggestions about business etiquette in my country. Firstly, you'll need to make hotel's reservations with anticipation, especially when the group is numerous. There are several five starts hotels in the commercial center of the Guayaquil city, very close to our offices. Business appointments well in advance and don't be late. Usually, at those meetings the persons exchange presentation cards. Some places include tipping by services in restaurant bills, but if any not the tip is 10% of the bill. The people is very kind here, surely you'll be invited to a meal at a house, you can take a small gift as flowers, candy or wine. Finally, remember it's a beautiful summer here, especially in our city is always warm, then you might include appropriate clothes for this weather. If you have any questions, please just let me know. Have you a nice and safe trip. Sincerely, JG Marketing Dpt. LP Representations."
##output:
cohesion: 8.0
syntax: 8.0
vocabulary: 8.0
phraseology: 8.5
grammar: 8.5
conventions: 8.5
```
|
selmamalak/organamnist-beit-base-finetuned | selmamalak | 2024-05-18T14:26:04Z | 462 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:medmnist-v2",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-18T11:18:07Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
datasets:
- medmnist-v2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: organamnist-beit-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# organamnist-beit-base-finetuned
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the medmnist-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2372
- Accuracy: 0.9329
- Precision: 0.9416
- Recall: 0.9296
- F1: 0.9340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6786 | 1.0 | 540 | 0.1776 | 0.9339 | 0.9507 | 0.9341 | 0.9385 |
| 0.7397 | 2.0 | 1081 | 0.1783 | 0.9407 | 0.9539 | 0.9346 | 0.9415 |
| 0.7151 | 3.0 | 1621 | 0.1297 | 0.9552 | 0.9611 | 0.9555 | 0.9572 |
| 0.4964 | 4.0 | 2162 | 0.0741 | 0.9735 | 0.9765 | 0.9702 | 0.9730 |
| 0.5509 | 5.0 | 2702 | 0.0671 | 0.9770 | 0.9776 | 0.9796 | 0.9783 |
| 0.5746 | 6.0 | 3243 | 0.0642 | 0.9754 | 0.9810 | 0.9788 | 0.9795 |
| 0.4066 | 7.0 | 3783 | 0.1196 | 0.9566 | 0.9693 | 0.9563 | 0.9614 |
| 0.4046 | 8.0 | 4324 | 0.0469 | 0.9798 | 0.9853 | 0.9821 | 0.9834 |
| 0.3314 | 9.0 | 4864 | 0.0388 | 0.9861 | 0.9892 | 0.9860 | 0.9874 |
| 0.2865 | 9.99 | 5400 | 0.0450 | 0.9831 | 0.9880 | 0.9862 | 0.9869 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
mradermacher/Mahou-1.4-llama3-8B-GGUF | mradermacher | 2024-05-30T18:51:14Z | 462 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/MahouMix-v1",
"dataset:flammenai/FlameMix-DPO-v1",
"base_model:flammenai/Mahou-1.4-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T18:23:27Z | ---
base_model: flammenai/Mahou-1.4-llama3-8B
datasets:
- flammenai/MahouMix-v1
- flammenai/FlameMix-DPO-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/flammenai/Mahou-1.4-llama3-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF/resolve/main/Mahou-1.4-llama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MarsupialAI/Moist-Miqu-70B-v1_iMatrix_GGUF | MarsupialAI | 2024-06-01T12:06:23Z | 462 | 2 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T11:24:35Z | iMatrix GGUFs for https://huggingface.co/TheDrummer/Moist-Miqu-70B-v1
As always, imat generated using kalomaze's groups_merged.txt |
Qwen/Qwen2-1.5B-Instruct-MLX | Qwen | 2024-06-06T12:55:40Z | 462 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-06T10:05:04Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-1.5B-Instruct-MLX
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model.
Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2). This is the MLX quantized model of Qwen2-1.5B-Instruct.
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
Run the following commands to install the required MLX packages.
```
pip install mlx-lm mlx -U
```
## Quickstart
Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
```python
from mlx_lm import load, generate
model, tokenizer = load('Qwen/Qwen2-1.5B-Instruct-MLX', tokenizer_config={"eos_token": "<|im_end|>"})
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=text, verbose=True, top_p=0.8, temp=0.7, repetition_penalty=1.05, max_tokens=512)
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
Helsinki-NLP/opus-mt-tc-big-en-ar | Helsinki-NLP | 2023-10-10T10:22:24Z | 461 | 16 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ar",
"en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-04-13T13:38:54Z | ---
language:
- ar
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-ar
results:
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: flores101-devtest
type: flores_101
args: eng ara devtest
metrics:
- name: BLEU
type: bleu
value: 29.4
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: tatoeba-test-v2020-07-28
type: tatoeba_mt
args: eng-ara
metrics:
- name: BLEU
type: bleu
value: 20.0
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: tico19-test
type: tico19-test
args: eng-ara
metrics:
- name: BLEU
type: bleu
value: 30.0
---
# opus-mt-tc-big-en-ar
Neural machine translation model for translating from English (en) to Arabic (ar).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): afb ara
* valid target language labels: >>afb<< >>ara<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-ara README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>afb<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ara<< I can't help you because I'm busy.",
">>ara<< I have to write a letter. Do you have some paper?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# لا أستطيع مساعدتك لأنني مشغول.
# يجب أن أكتب رسالة هل لديك بعض الأوراق؟
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ar")
print(pipe(">>ara<< I can't help you because I'm busy."))
# expected output: لا أستطيع مساعدتك لأنني مشغول.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-ara | tatoeba-test-v2021-08-07 | 0.48813 | 19.8 | 10305 | 61356 |
| eng-ara | flores101-devtest | 0.61154 | 29.4 | 1012 | 21357 |
| eng-ara | tico19-test | 0.60075 | 30.0 | 2100 | 51339 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:37:31 EEST 2022
* port machine: LM0-400-22516.local
|
sdadas/mt5-base-translator-en-pl | sdadas | 2023-05-14T15:26:11Z | 461 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"translation",
"pl",
"en",
"license:lgpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-03-26T11:08:12Z | ---
license: lgpl
language:
- pl
- en
pipeline_tag: translation
--- |
TheBloke/GodziLLa2-70B-GGUF | TheBloke | 2023-11-14T16:29:21Z | 461 | 9 | transformers | [
"transformers",
"gguf",
"llama",
"merge",
"mix",
"cot",
"text-generation",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:2009.03300",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2109.07958",
"base_model:MayaPH/GodziLLa2-70B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-06T10:59:40Z | ---
license: llama2
tags:
- merge
- mix
- cot
datasets:
- mlabonne/guanaco-llama2-1k
model_name: GodziLLa2 70B
base_model: MayaPH/GodziLLa2-70B
inference: false
model_creator: MayaPH
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# GodziLLa2 70B - GGUF
- Model creator: [MayaPH](https://huggingface.co/mayaph)
- Original model: [GodziLLa2 70B](https://huggingface.co/MayaPH/GodziLLa2-70B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [MayaPH's GodziLLa2 70B](https://huggingface.co/MayaPH/GodziLLa2-70B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/GodziLLa2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF)
* [MayaPH's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/MayaPH/GodziLLa2-70B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [godzilla2-70b.Q8_0.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q8_0.gguf) | Q8_0 | 8 | 10.83 GB| 13.33 GB | very large, extremely low quality loss - not recommended |
| [godzilla2-70b.Q2_K.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [godzilla2-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [godzilla2-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [godzilla2-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [godzilla2-70b.Q4_0.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [godzilla2-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [godzilla2-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [godzilla2-70b.Q5_0.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [godzilla2-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [godzilla2-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF/blob/main/godzilla2-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/GodziLLa2-70B-GGUF and below it, a specific filename to download, such as: godzilla2-70b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/GodziLLa2-70B-GGUF godzilla2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/GodziLLa2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/GodziLLa2-70B-GGUF godzilla2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m godzilla2-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/GodziLLa2-70B-GGUF", model_file="godzilla2-70b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: MayaPH's GodziLLa2 70B
<img src="https://drive.google.com/uc?export=view&id=1D8wxXkS1nsq3uqbOzOLwgx1cLJhY1nvN" alt="GodziLLa2-70B">
Released August 11, 2023
## Model Description
GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k), with LLaMA 2 70B. This model's primary purpose is to stress test the limitations of composite, instruction-following LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted in the leaderboard at rank #4 (August 17, 2023) and operates under the Llama 2 license.

## Open LLM Leaderboard Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 69.88 |
| ARC (25-shot) | 71.42 |
| HellaSwag (10-shot) | 87.53 |
| TruthfulQA (0-shot) | 61.54 |
| Average | 72.59 |
According to the leaderboard description, here are the benchmarks used for the evaluation:
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) -ARC- (25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
A detailed breakdown of the evaluation can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B). Huge thanks to [@thomwolf](https://huggingface.co/thomwolf).
## Leaderboard Highlights (as of August 17, 2023)
- Godzilla 2 70B debuts at 4th place worldwide in the Open LLM Leaderboard.
- Godzilla 2 70B ranks #3 in the ARC challenge.
- Godzilla 2 70B ranks #5 in the TruthfulQA benchmark.
- *Godzilla 2 70B beats GPT-3.5 (ChatGPT) in terms of average performance and the HellaSwag benchmark (87.53 > 85.5).
- *Godzilla 2 70B outperforms GPT-3.5 (ChatGPT) and GPT-4 on the TruthfulQA benchmark (61.54 for G2-70B, 47 for GPT-3.5, 59 for GPT-4).
- *Godzilla 2 70B is on par with GPT-3.5 (ChatGPT) on the MMLU benchmark (<0.12%).
*Based on a [leaderboard clone](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) with GPT-3.5 and GPT-4 included.
### Reproducing Evaluation Results
*Instruction template taken from [Platypus 2 70B instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/G270B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/G270B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/G270B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/G270B/truthfulqa_0shot.json --device cuda
```
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Technical Considerations
When using GodziLLa 2 70B, kindly take note of the following:
- The default precision is `fp32`, and the total file size that would be loaded onto the RAM/VRAM is around 275 GB. Consider using a lower precision (fp16, int8, int4) to save memory.
- To further save on memory, set the `low_cpu_mem_usage` argument to True.
- If you wish to use a quantized version of GodziLLa2-70B, you can either access TheBloke's [GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ) or [GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML) version of GodziLLa2-70B.
- [GodziLLa2-70B-GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ#description) is available in 4-bit and 3-bit
- [GodziLLa2-70B-GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML#provided-files) is available in 8-bit, 6-bit, 5-bit, 4-bit, 3-bit, and 2-bit
## Ethical Considerations
When using GodziLLa 2 70B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of various datasets and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa 2 70B, please contact the Maya Philippines iOps Team via [email protected].
## Disclaimer
GodziLLa 2 70B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa 2 70B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k). Last but not least, huge thanks to [TheBloke](https://huggingface.co/TheBloke) for the quantized models, making our model easily accessible to a wider community.
<!-- original-model-card end -->
|
wandgibaut/periquito-3B | wandgibaut | 2024-05-13T20:36:37Z | 461 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"pt",
"dataset:wikimedia/wikipedia",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-20T19:46:36Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
datasets:
- wikimedia/wikipedia
metrics:
- accuracy
model-index:
- name: periquito-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 17.98
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 21.14
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 22.69
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.01
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 8.92
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 50.46
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 41.19
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 47.96
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
name: Open Portuguese LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
Periquito-3B is a large language model (LLM) trained by Wandgibaut. It is built upon the OpenLlama-3B architecture and specifically fine-tuned using Portuguese Wikipedia (pt-br) data. This specialization makes it particularly adept at understanding and generating text in Brazilian Portuguese.
- **Developed by:** Wandemberg Gibaut
- **Model type:** Llama
- **Language(s) (NLP):** Portuguese
- **License:** Apache License 2.0
- **Finetuned from model [optional]:** openlm-research/open_llama_3b
### Loading the Weights with Hugging Face Transformers
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'wandgibaut/periquito-3B'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: Qual o maior animal terrestre?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, we used a custom version, that has some translated tasks and the ENEM suit. This can be found in [wandgibaut/lm-evaluation-harness-PTBR](https://github.com/wandgibaut/lm-evaluation-harness-PTBR).
## Dataset and Training
We finetunned the model on Wikipedia-pt dataset with LoRA, in Google's TPU-v3 in the [Google's TPU Research program](https://sites.research.google/trc/about/).
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
hf-causal (pretrained=wandgibaut/periquito-3B), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric | Value | |Stderr|
|---------|------:|------------|------:|---|-----:|
|agnews_pt| 0|acc | 0.6184|± |0.0056|
|boolq_pt | 1|acc | 0.6333|± |0.0084|
|faquad | 1|exact | 7.9365| | |
| | |f1 |45.6971| | |
| | |HasAns_exact| 7.9365| | |
| | |HasAns_f1 |45.6971| | |
| | |NoAns_exact | 0.0000| | |
| | |NoAns_f1 | 0.0000| | |
| | |best_exact | 7.9365| | |
| | |best_f1 |45.6971| | |
|imdb_pt | 0|acc | 0.6338|± |0.0068|
|sst2_pt | 1|acc | 0.6823|± |0.0158|
|toldbr | 0|acc | 0.4629|± |0.0109|
| | |f1_macro | 0.3164| | |
hf-causal (pretrained=wandgibaut/periquito-3B,dtype=float), limit: None, provide_description: False, num_fewshot: 3, batch_size: None
| Task |Version| Metric | Value | |Stderr|
|---------|------:|------------|------:|---|-----:|
|agnews_pt| 0|acc | 0.6242|± |0.0056|
|boolq_pt | 1|acc | 0.6477|± |0.0084|
|faquad | 1|exact |34.9206| | |
| | |f1 |70.3968| | |
| | |HasAns_exact|34.9206| | |
| | |HasAns_f1 |70.3968| | |
| | |NoAns_exact | 0.0000| | |
| | |NoAns_f1 | 0.0000| | |
| | |best_exact |34.9206| | |
| | |best_f1 |70.3968| | |
|imdb_pt | 0|acc | 0.8408|± |0.0052|
|sst2_pt | 1|acc | 0.7775|± |0.0141|
|toldbr | 0|acc | 0.5143|± |0.0109|
| | |f1_macro | 0.5127| | |
hf-causal (pretrained=wandgibaut/periquito-3B), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|----------------|-----:|---|-----:|
|enem | 0|acc |0.1976|± |0.0132|
| | |2009 |0.2022|± |0.0428|
| | |2016 |0.1809|± |0.0399|
| | |2015 |0.1348|± |0.0364|
| | |2016_2_ |0.2366|± |0.0443|
| | |2017 |0.2022|± |0.0428|
| | |2013 |0.1647|± |0.0405|
| | |2012 |0.2174|± |0.0432|
| | |2011 |0.2292|± |0.0431|
| | |2010 |0.2157|± |0.0409|
| | |2014 |0.1839|± |0.0418|
|enem_2022 | 0|acc |0.2373|± |0.0393|
| | |2022 |0.2373|± |0.0393|
| | |human-sciences |0.2703|± |0.0740|
| | |mathematics |0.1818|± |0.0842|
| | |natural-sciences|0.1538|± |0.0722|
| | |languages |0.3030|± |0.0812|
|enem_CoT | 0|acc |0.1812|± |0.0127|
| | |2009 |0.1348|± |0.0364|
| | |2016 |0.1596|± |0.0380|
| | |2015 |0.1124|± |0.0337|
| | |2016_2_ |0.1290|± |0.0350|
| | |2017 |0.2247|± |0.0445|
| | |2013 |0.1765|± |0.0416|
| | |2012 |0.2391|± |0.0447|
| | |2011 |0.1979|± |0.0409|
| | |2010 |0.2451|± |0.0428|
| | |2014 |0.1839|± |0.0418|
|enem_CoT_2022| 0|acc |0.2119|± |0.0378|
| | |2022 |0.2119|± |0.0378|
| | |human-sciences |0.2703|± |0.0740|
| | |mathematics |0.1818|± |0.0842|
| | |natural-sciences|0.2308|± |0.0843|
| | |languages |0.1515|± |0.0634|
hf-causal (pretrained=wandgibaut/periquito-3B,dtype=float), limit: None, provide_description: False, num_fewshot: 1, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|----------------|-----:|---|-----:|
|enem | 0|acc |0.1790|± |0.0127|
| | |2009 |0.1573|± |0.0388|
| | |2016 |0.2021|± |0.0416|
| | |2015 |0.1573|± |0.0388|
| | |2016_2_ |0.1935|± |0.0412|
| | |2017 |0.2247|± |0.0445|
| | |2013 |0.1412|± |0.0380|
| | |2012 |0.1739|± |0.0397|
| | |2011 |0.1979|± |0.0409|
| | |2010 |0.1961|± |0.0395|
| | |2014 |0.1379|± |0.0372|
|enem_2022 | 0|acc |0.1864|± |0.0360|
| | |2022 |0.1864|± |0.0360|
| | |human-sciences |0.2432|± |0.0715|
| | |mathematics |0.1364|± |0.0749|
| | |natural-sciences|0.1154|± |0.0639|
| | |languages |0.2121|± |0.0723|
|enem_CoT | 0|acc |0.2009|± |0.0132|
| | |2009 |0.2135|± |0.0437|
| | |2016 |0.2340|± |0.0439|
| | |2015 |0.1348|± |0.0364|
| | |2016_2_ |0.2258|± |0.0436|
| | |2017 |0.2360|± |0.0453|
| | |2013 |0.1529|± |0.0393|
| | |2012 |0.1957|± |0.0416|
| | |2011 |0.2500|± |0.0444|
| | |2010 |0.1667|± |0.0371|
| | |2014 |0.1954|± |0.0428|
|enem_CoT_2022| 0|acc |0.2542|± |0.0403|
| | |2022 |0.2542|± |0.0403|
| | |human-sciences |0.2703|± |0.0740|
| | |mathematics |0.2273|± |0.0914|
| | |natural-sciences|0.3846|± |0.0973|
| | |languages |0.1515|± |0.0634|
## Use Cases:
The model is suitable for text generation, language understanding, and various natural language processing tasks in Brazilian Portuguese.
## Limitations:
Like many language models, Periquito-3B might exhibit biases present in its training data. Additionally, its performance is primarily optimized for Portuguese, potentially limiting its effectiveness with other languages.
## Ethical Considerations:
Users are encouraged to use the model ethically, particularly by avoiding the generation of harmful or biased content.
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources.
## Citation [optional]
If you found periquito-3B useful in your research or applications, please cite using the following BibTeX:
**BibTeX:**
```
@software{wandgibautperiquito3B,
author = {Gibaut, Wandemberg},
title = {Periquito-3B},
month = Sep,
year = 2023,
url = {https://huggingface.co/wandgibaut/periquito-3B}
}
```
# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/wandgibaut/periquito-3B)
| Metric | Value |
|--------------------------|---------|
|Average |**33.04**|
|ENEM Challenge (No Images)| 17.98|
|BLUEX (No Images) | 21.14|
|OAB Exams | 22.69|
|Assin2 RTE | 43.01|
|Assin2 STS | 8.92|
|FaQuAD NLI | 43.97|
|HateBR Binary | 50.46|
|PT Hate Speech Binary | 41.19|
|tweetSentBR | 47.96|
|
manifesto-project/manifestoberta-xlm-roberta-56policy-topics-context-2023-1-1 | manifesto-project | 2023-11-17T15:13:59Z | 461 | 9 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"custom_code",
"license:bigscience-openrail-m",
"autotrain_compatible",
"region:us"
]
| text-classification | 2023-09-22T12:49:02Z | ---
license: bigscience-openrail-m
---
## Model description
An xlm-roberta-large model fine-tuned on ~1,6 million annotated statements contained in the [Manifesto Corpus](https://manifesto-project.wzb.eu/information/documents/corpus) (version 2023a).
The model can be used to categorize any type of text into 56 different political topics according to the Manifesto Project's coding scheme ([Handbook 4](https://manifesto-project.wzb.eu/coding_schemes/mp_v4)).
It works for all languages the xlm-roberta model is pretrained on ([overview](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr#introduction)), just note that it will perform best for the 38 languages contained in the Manifesto Corpus:
||||||
|------|------|------|------|------|
|armenian|bosnian|bulgarian|catalan|croatian|
|czech|danish|dutch|english|estonian|
|finnish|french|galician|georgian|german|
|greek|hebrew|hungarian|icelandic|italian|
|japanese|korean|latvian|lithuanian|macedonian|
|montenegrin|norwegian|polish|portuguese|romanian|
|russian|serbian|slovak|slovenian|spanish|
|swedish|turkish|ukrainian| | |
The context model variant additionally incorporates the surrounding sentences of a statement to improve the classification results for ambiguous sentences. (See Training Procedure for details)
**Important**
We slightly modified the Classification Head of the `XLMRobertaModelForSequenceClassification` model (removed the tanh activation and the intermediate linear layer) as that improved the model performance for this task considerably.
To correctly load the full model, include the `trust_remote_code=True` argument when using the `from_pretrained method`.
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("manifesto-project/manifestoberta-xlm-roberta-56policy-topics-context-2023-1-1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
sentence = "These principles are under threat."
context = "Human rights and international humanitarian law are fundamental pillars of a secure global system. These principles are under threat. Some of the world's most powerful states choose to sell arms to human-rights abusing states."
# For sentences without additional context, just use the sentence itself as the context.
# Example: context = "These principles are under threat."
inputs = tokenizer(sentence,
context,
return_tensors="pt",
max_length=300, #we limited the input to 300 tokens during finetuning
padding="max_length",
truncation=True
)
logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=1).tolist()[0]
probabilities = {model.config.id2label[index]: round(probability * 100, 2) for index, probability in enumerate(probabilities)}
probabilities = dict(sorted(probabilities.items(), key=lambda item: item[1], reverse=True))
print(probabilities)
# {'201 - Freedom and Human Rights': 90.76, '107 - Internationalism: Positive': 5.82, '105 - Military: Negative': 0.66...
predicted_class = model.config.id2label[logits.argmax().item()]
print(predicted_class)
# 201 - Freedom and Human Rights
```
## Training Procedure
Training of the model took place on all quasi-sentences of the Manifesto Corpus (version 2023a), minus 10% that were kept out of training for the final test and evaluation results.
This results in a training dataset of 1,601,329 quasi-sentences.
As our context-including model input poses the threat of data-leakage problems between train and test data, we refrained from randomly splitting quasi-sentences into train and test data.
Instead, we randomly split the dataset on the manifesto level, so that 1779 manifestos and all their quasi-sentences were assigned to the train set and 198 to the test set.
As training parameters, we used the following settings: learning rate: 1e-5, weight decay: 0.01, epochs: 1, batch size: 4, gradient accumulation steps: 4 (effective batch size: 16).
### Context
To adapt the model to the task of classifying statements in manifestos we made some modifications to the traditional training setup.
Given that human annotators in the Manifesto Project are encouraged to use surrounding sentences to interpret ambiguous statements , we combined statements with their context for our model's input.
Specifically, we used a sentence-pair input, where the single to-be-classified statement gets followed by the separator token followed by the whole bigger context of length 200 tokens, in which the statement to-be-classified is embedded.
Here is an example:
*"`<s>` We must right the wrongs in our democracy, `</s>` `</s>` To turn this crisis into a crucible, from which we will forge a stronger, brighter, and more equitable future. We must right the wrongs in our democracy, redress the systemic injustices that have long plagued our society,throw open the doors of opportunity for all Americans and reinvent our institutions at home and our leadership abroad. `</s>`".*
The second part, which contains the context, is greedily filled until it contains 200 tokens.
Our tests showed that including the context helped to improve the performance of the classification model considerably (~7% accuracy).
We tried other approaches like using two XLM-RoBERTa models as a duo, where one receives the sentence and one the context, and a shared-layer model, where both inputs are fed separately trough the same model.
Both variants performed similarly to our sentence pair approach, but lead to higher complexity and computing costs, which is why we ultimately opted for the sentence pair way to include the surrounding context.
## Model Performance
The model was evaluated on a test set of 199,046 annotated manifesto statements.
### Overall
| | Accuracy | Top2_Acc | Top3_Acc | Precision| Recall | F1_Macro | MCC | Cross-Entropy |
|-------------------------------------------------------------------------------------------------------|:--------:|:--------:|:--------:|:--------:|:------:|:--------:|:---:|:-------------:|
[Sentence Model](https://huggingface.co/manifesto-project/manifestoberta-xlm-roberta-56policy-topics-sentence-2023-1-1)| 0.57 | 0.73 | 0.81 | 0.49 | 0.43 | 0.45 | 0.55| 1.5 |
[Context Model](https://huggingface.co/manifesto-project/manifestoberta-xlm-roberta-56policy-topics-context-2023-1-1) | 0.64 | 0.81 | 0.88 | 0.54 | 0.52 | 0.53 | 0.62| 1.15 |
### Categories
|Category|Precision|Recall|F1|n_test(%)|n_predicted(%)|
|:------|:-----------:|:----:|:----:|:-----:|:-----:|
| 101 |0.50|0.48|0.49|0.30%|0.29%|
|102|0.56|0.61|0.58|0.09%|0.10%|
|103|0.51|0.36|0.42|0.28%|0.20%|
|104|0.78|0.81|0.79|1.57%|1.64%|
|105|0.69|0.70|0.69|0.34%|0.34%|
|106|0.59|0.57|0.58|0.33%|0.32%|
|107|0.68|0.66|0.67|2.24%|2.17%|
|108|0.66|0.68|0.67|1.20%|1.24%|
|109|0.52|0.39|0.45|0.17%|0.13%|
|110|0.63|0.68|0.65|0.36%|0.38%|
|201|0.58|0.59|0.59|2.16%|2.20%|
|202|0.62|0.63|0.62|3.25%|3.28%|
|203|0.46|0.47|0.47|0.19%|0.19%|
|204|0.61|0.37|0.46|0.25%|0.15%|
|301|0.66|0.71|0.68|2.13%|2.29%|
|302|0.38|0.25|0.30|0.17%|0.11%|
|303|0.58|0.60|0.59|5.12%|5.31%|
|304|0.67|0.65|0.66|1.38%|1.34%|
|305|0.59|0.57|0.58|2.32%|2.22%|
|401|0.45|0.36|0.40|1.50%|1.21%|
|402|0.61|0.58|0.59|2.73%|2.60%|
|403|0.56|0.51|0.53|3.59%|3.25%|
|404|0.30|0.15|0.20|0.58%|0.28%|
|405|0.43|0.51|0.47|0.18%|0.21%|
|406|0.38|0.46|0.42|0.26%|0.31%|
|407|0.56|0.52|0.54|0.40%|0.38%|
|408|0.28|0.17|0.21|1.34%|0.79%|
|409|0.37|0.21|0.27|0.24%|0.14%|
|410|0.53|0.50|0.52|2.22%|2.08%|
|411|0.73|0.75|0.74|8.32%|8.53%|
|412|0.26|0.20|0.22|0.58%|0.45%|
|413|0.49|0.63|0.55|0.29%|0.37%|
|414|0.58|0.55|0.56|1.38%|1.32%|
|415|0.14|0.23|0.18|0.05%|0.07%|
|416|0.52|0.49|0.50|2.45%|2.35%|
|501|0.69|0.78|0.73|4.77%|5.35%|
|502|0.78|0.84|0.81|3.08%|3.32%|
|503|0.61|0.63|0.62|5.96%|6.11%|
|504|0.71|0.76|0.74|10.05%|10.76%|
|505|0.46|0.37|0.41|0.69%|0.55%|
|506|0.78|0.82|0.80|5.42%|5.72%|
|507|0.45|0.26|0.33|0.14%|0.08%|
|601|0.52|0.46|0.49|1.79%|1.57%|
|602|0.35|0.34|0.34|0.24%|0.24%|
|603|0.65|0.68|0.67|1.36%|1.42%|
|604|0.62|0.48|0.54|0.57%|0.44%|
|605|0.72|0.74|0.73|4.22%|4.33%|
|606|0.56|0.48|0.51|1.45%|1.23%|
|607|0.57|0.67|0.62|1.08%|1.25%|
|608|0.48|0.48|0.48|0.41%|0.41%|
|701|0.62|0.66|0.64|3.35%|3.59%|
|702|0.42|0.30|0.35|0.08%|0.06%|
|703|0.75|0.87|0.80|2.65%|3.07%|
|704|0.43|0.32|0.37|0.57%|0.43%|
|705|0.38|0.33|0.35|0.80%|0.69%|
|706|0.43|0.37|0.39|1.35%|1.16%|
## Citation
Please cite the model as follows:
Burst, Tobias / Lehmann, Pola / Franzmann, Simon / Al-Gaddooa, Denise / Ivanusch, Christoph / Regel, Sven / Riethmüller, Felicia / Weßels, Bernhard / Zehnter, Lisa (2023): manifestoberta. Version 56topics.context.2023.1.1. Berlin: Wissenschaftszentrum Berlin für Sozialforschung (WZB) / Göttingen: Institut für Demokratieforschung (IfDem). https://doi.org/10.25522/manifesto.manifestoberta.56topics.context.2023.1.1
```bib
@misc{Burst:2023,
Address = {Berlin / Göttingen},
Author = {Burst, Tobias AND Lehmann, Pola AND Franzmann, Simon AND Al-Gaddooa, Denise AND Ivanusch, Christoph AND Regel, Sven AND Riethmüller, Felicia AND Weßels, Bernhard AND Zehnter, Lisa},
Publisher = {Wissenschaftszentrum Berlin für Sozialforschung / Göttinger Institut für Demokratieforschung},
Title = {manifestoberta. Version 56topics.context.2023.1.1},
doi = {10.25522/manifesto.manifestoberta.56topics.context.2023.1.1},
url = {https://doi.org/10.25522/manifesto.manifestoberta.56topics.context.2023.1.1},
Year = {2023},
``` |
Cebtenzzre/gguf-misc | Cebtenzzre | 2024-01-17T19:14:39Z | 461 | 0 | null | [
"gguf",
"region:us"
]
| null | 2023-10-02T15:47:59Z | Entry not found |
davidkim205/komt-mistral-7b-v1-gguf | davidkim205 | 2023-10-24T04:55:11Z | 461 | 10 | null | [
"gguf",
"text-generation",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"region:us"
]
| text-generation | 2023-10-24T04:04:26Z | ---
language:
- en
- ko
pipeline_tag: text-generation
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0
## Training
Refer https://github.com/davidkim205/komt
## usage
```
cd llama.cpp
make -j && ./main -m ./models/komt-mistral-7b-v1/ggml-model-q4_1.gguf -p "[INST]인삼은 어떤 효과가 있는가요? [/INST]"
```
output
```
[INST]인삼은 어떤 효과가 있는가요? [/INST]인삼에는 약 600개의 물질이 포함되어 있습니다. 그 중에서도 많은 연구들을 통해 효능을 확인한 것으로 알려진 3가지는 아래와 같습니다.
1. 혈압 조절: 각종 실험에서 인삼이 혈압을 조절하는데 효과를 나타냈으며, 특히 중국의 한 연구자들은 인삼을 사용해 40%의 혈압 감소를 보였습니다.
2. 위장 개선: 인삼은 흉터, 통증 등으로 고통받는 위장 질환을 일부나마 개선할 수 있는데, 이는 각종 실험들에서 확인된 것입니다.
3. 면역 강화: 인삼은 면역체계를 강화시키는데 효과가 있으며, 국내에서도 2014년부터는 식약처의 의약용품 수출증명제에 대한 최종적인 평가로 사용되고 있습니다.
위와 같은 효능을 갖춘 인삼은 많이 사용하는 건강식품의 원료로도 활용됩니다. [end of text]
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
| --------------------------------------- |---------| ------------ | ---------- |
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
|
TheBloke/UNA-TheBeagle-7B-v1-GGUF | TheBloke | 2024-01-12T18:27:59Z | 461 | 14 | transformers | [
"transformers",
"gguf",
"mistral",
"generated_from_trainer",
"dataset:jondurbin/bagel-v0.3",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"license:cc-by-nc-nd-4.0",
"text-generation-inference",
"region:us"
]
| null | 2024-01-12T18:19:11Z | ---
base_model: fblgit/UNA-TheBeagle-7b-v1
datasets:
- jondurbin/bagel-v0.3
inference: false
library_name: transformers
license: cc-by-nc-nd-4.0
model-index:
- name: UNA-TheBeagle-7b-v1
results: []
model_creator: FBL
model_name: UNA TheBeagle 7B v1
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- generated_from_trainer
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# UNA TheBeagle 7B v1 - GGUF
- Model creator: [FBL](https://huggingface.co/fblgit)
- Original model: [UNA TheBeagle 7B v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [FBL's UNA TheBeagle 7B v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF)
* [FBL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [una-thebeagle-7b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q2_K.gguf) | Q2_K | 2 | 2.70 GB| 5.20 GB | smallest, significant quality loss - not recommended for most purposes |
| [una-thebeagle-7b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [una-thebeagle-7b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [una-thebeagle-7b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [una-thebeagle-7b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [una-thebeagle-7b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [una-thebeagle-7b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [una-thebeagle-7b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [una-thebeagle-7b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [una-thebeagle-7b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [una-thebeagle-7b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [una-thebeagle-7b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/UNA-TheBeagle-7B-v1-GGUF/blob/main/una-thebeagle-7b-v1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/UNA-TheBeagle-7B-v1-GGUF and below it, a specific filename to download, such as: una-thebeagle-7b-v1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/UNA-TheBeagle-7B-v1-GGUF una-thebeagle-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/UNA-TheBeagle-7B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UNA-TheBeagle-7B-v1-GGUF una-thebeagle-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m una-thebeagle-7b-v1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./una-thebeagle-7b-v1.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./una-thebeagle-7b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: FBL's UNA TheBeagle 7B v1
-- In the Love Memory of my "LoLa" --
# UNA-TheBeagle-7b-v1
TheBeagle, a model of 7B parameters trained on The Bagel dataset. DPO & UNA applied over a set of curated DPO Pairs.
- Scored #1 on the HF Leaderboard, dramatic scores!!! 73 ARC, and very well balanced!
The dataset was generated using the original bagel code, including the decontamination step.
As base model, we used the latest Intel's neural-chat model.
It performs very good in many tasks, but its always better that you play with it by yourself.

## Evaluations
Ran with VLLM so expect them to dont be exactly as the one's shown in the board, but not too far :)
```
vllm (pretrained=fblgit/UNA-TheBeagle-7b-v1,dtype=auto,tensor_parallel_size=1,gpu_memory_utilization=0.8,data_parallel_size=8,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|--------------|-------|----------|-----:|-----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 25|acc |0.7090|± |0.0133|
| | |none | 25|acc_norm |0.7329|± |0.0129|
|gsm8k |Yaml |get-answer| 5|exact_match|0.7210|± |0.0124|
|hellaswag |Yaml |none | 10|acc |0.7202|± |0.0045|
| | |none | 10|acc_norm |0.8792|± |0.0033|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7062|± |0.0151|
|winogrande |Yaml |none | 5|acc |0.8366|± |0.0104|
```
## UNA Details
For this release, we only applied UNA thru the perceptrons. It was done at a 3.5e-7 speed, and the training loop code is also the original one of the bagel and transformers-4.35.2-UNA
## Prompt
Im not entirely sure of it, as we used the vanilla version of the bagel training code. But a good model should be able to generalize with different prompt formats, so feel free to give it a shot.
## Citations
Remember if you use UNA's models, cite it in your model card.
## Limitations
Not for commercial use, and only for academic & research purposes.
<!-- original-model-card end -->
|
OEvortex/lite-hermes-GGUF | OEvortex | 2024-02-05T06:58:13Z | 461 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"HelpingAI",
"lite",
"code",
"en",
"base_model:OEvortex/lite-hermes",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-30T11:43:25Z | ---
library_name: transformers
base_model: OEvortex/lite-hermes
inference: false
language:
- en
license: mit
tags:
- HelpingAI
- lite
- code
---
#### Description
Optimize your engagement with [This project](https://huggingface.co/OEvortex/lite-hermes) by seamlessly integrating GGUF Format model files.
Please Subscribe to my youtube channel [OEvortex](https://youtube.com/@OEvortex)
### GGUF Technical Specifications
Delve into the intricacies of GGUF, a meticulously crafted format that builds upon the robust foundation of the GGJT model. Tailored for heightened extensibility and user-centric functionality, GGUF introduces a suite of indispensable features:
**Single-file Deployment:** Streamline distribution and loading effortlessly. GGUF models have been meticulously architected for seamless deployment, necessitating no external files for supplementary information.
**Extensibility:** Safeguard the future of your models. GGUF seamlessly accommodates the integration of new features into GGML-based executors, ensuring compatibility with existing models.
**mmap Compatibility:** Prioritize efficiency. GGUF models are purposefully engineered to support mmap, facilitating rapid loading and saving, thus optimizing your workflow.
**User-Friendly:** Simplify your coding endeavors. Load and save models effortlessly, irrespective of the programming language used, obviating the dependency on external libraries.
**Full Information:** A comprehensive repository in a single file. GGUF models encapsulate all requisite information for loading, eliminating the need for users to furnish additional data.
The differentiator between GGJT and GGUF lies in the deliberate adoption of a key-value structure for hyperparameters (now termed metadata). Bid farewell to untyped lists, and embrace a structured approach that seamlessly accommodates new metadata without compromising compatibility with existing models. Augment your model with supplementary information for enhanced inference and model identification.
**QUANTIZATION_METHODS:**
| Method | Quantization | Advantages | Trade-offs |
|---|---|---|---|
| q2_k | 2-bit integers | Significant model size reduction | Minimal impact on accuracy |
| q3_k_l | 3-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q3_k_m | 3-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q3_k_s | 3-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q4_0 | 4-bit integers | Significant model size reduction | Moderate impact on accuracy |
| q4_1 | 4-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q4_k_m | 4-bit integers | Optimized model size and accuracy with mixed precision and structured pruning | Reduced accuracy |
| q4_k_s | 4-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q5_0 | 5-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q5_1 | 5-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q5_k_m | 5-bit integers | Optimized model size and accuracy with mixed precision and structured pruning | Reduced accuracy |
| q5_k_s | 5-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q6_k | 6-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q8_0 | 8-bit integers | Significant model size reduction | Minimal impact on accuracy |
|
hmzkhnswt/tinyllama_customerSupport_hmc | hmzkhnswt | 2024-02-15T06:33:27Z | 461 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-15T06:30:43Z | Entry not found |
ChrisWilson011016/5CHw7baJcQxYR6VYLsK9ZnaZMY8Eyatuc9jxKFkg13TMewh6_vgg | ChrisWilson011016 | 2024-03-04T19:02:35Z | 461 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-29T13:01:06Z | Entry not found |
mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF | mradermacher | 2024-05-05T15:07:06Z | 461 | 2 | transformers | [
"transformers",
"gguf",
"nlp",
"code",
"en",
"base_model:ThomasComics/Phi-3-mini-128k-instruct-LLaMAfied",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T07:43:32Z | ---
base_model: ThomasComics/Phi-3-mini-128k-instruct-LLaMAfied
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
quantized_by: mradermacher
tags:
- nlp
- code
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ThomasComics/Phi-3-mini-128k-instruct-LLaMAfied
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.IQ3_M.gguf) | IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q4_K_M.gguf) | Q4_K_M | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q5_K_M.gguf) | Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-LLaMAfied-GGUF/resolve/main/Phi-3-mini-128k-instruct-LLaMAfied.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf | mmnga | 2024-05-03T04:36:24Z | 461 | 1 | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama2",
"region:us"
]
| null | 2024-05-02T14:18:27Z | ---
license: llama2
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# tokyotech-llm-Swallow-13b-instruct-v0.1-gguf
[tokyotech-llmさんが公開しているSwallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1)のggufフォーマット変換版です。
imatrixのデータは[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)を使用して作成しました。
## 他のモデル
mistral
[mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf)
[mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf)
llama2
[mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'tokyotech-llm-Swallow-13b-instruct-v0.1-Q4_0.gguf' -n 128 -p '[INST]<<SYS>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n東京工業大学の主なキャンパスについて教えてください[/INST]'
``` |
thesven/Yi-1.5-9B-Chat-GGUF | thesven | 2024-05-24T18:39:06Z | 461 | 0 | null | [
"gguf",
"arxiv:2403.04652",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-24T12:19:00Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://01-ai.github.io/">💪 Tech Blog</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Quantization Description
<div style="text-align: center;">
<a href="https://github.com/thesven/GGUF-n-Go">
<img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;">
</a>
</div>
This repo contains GGUF quantized versions of the Yi 1.5 9B Chat model.
The model is supplied in different quantizations so that you can see what works best on the hardware you would like to run it on.
The repo contains quantizations in the following types:
- Q4_0
- Q4_1
- Q4_K
- Q4_K_S
- Q4_K_M
- Q5_0
- Q5_1
- Q5_K
- Q5_K_M
- Q5_K_S
- Q6_K
- Q8_0
- Q2_K
- Q3_K
- Q3_K_S
- Q3_K_XS
- IQ2_K
- IQ3_S
- IQ3_XXS
- IQ4_NL
- IQ4_XS
- IQ5_K
- IQ2_S
- IQ2_XS
- IQ1_S
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf | RichardErkhov | 2024-05-27T01:09:50Z | 461 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-26T22:45:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama2-7b-chat-hf-v4 - GGUF
- Model creator: https://huggingface.co/TheTravellingEngineer/
- Original model: https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-v4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2-7b-chat-hf-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama2-7b-chat-hf-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama2-7b-chat-hf-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama2-7b-chat-hf-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama2-7b-chat-hf-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama2-7b-chat-hf-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama2-7b-chat-hf-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama2-7b-chat-hf-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama2-7b-chat-hf-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama2-7b-chat-hf-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama2-7b-chat-hf-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama2-7b-chat-hf-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama2-7b-chat-hf-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama2-7b-chat-hf-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama2-7b-chat-hf-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama2-7b-chat-hf-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama2-7b-chat-hf-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama2-7b-chat-hf-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama2-7b-chat-hf-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama2-7b-chat-hf-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama2-7b-chat-hf-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama2-7b-chat-hf-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v4-gguf/blob/main/llama2-7b-chat-hf-v4.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
The base model is meta's Llama-2-7b-chat-hf. It was finetuned using SFT and the openassistant/oasst1 dataset and the model prompt is similar to the original Guanaco model.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model. And comes with no warranty or gurantees of any kind.**
---
- license:
- llama2 <br>
- datasets:
- openassistant/oasst1 <br>
- language:
- en <br>
- reference: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da
---
|
giewont/mnlp-project-M2 | giewont | 2024-05-27T11:31:06Z | 461 | 0 | transformers | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-05-27T11:26:05Z | Entry not found |
AlpacaAAR/llama-3-8b-sft | AlpacaAAR | 2024-06-11T23:42:48Z | 461 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-11T23:39:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/CATA-8x7B-GGUF | mradermacher | 2024-06-14T05:55:04Z | 461 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/CATA-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-13T01:40:51Z | ---
base_model: Envoid/CATA-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Envoid/CATA-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CATA-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CATA-8x7B-GGUF/resolve/main/CATA-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF | jeiku | 2024-06-15T01:08:25Z | 461 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:jeiku/Auraphin_Qwen2_7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-15T01:07:47Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: jeiku/Auraphin_Qwen2_7B
---
# jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF
This model was converted to GGUF format from [`jeiku/Auraphin_Qwen2_7B`](https://huggingface.co/jeiku/Auraphin_Qwen2_7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jeiku/Auraphin_Qwen2_7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF --hf-file auraphin_qwen2_7b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF --hf-file auraphin_qwen2_7b-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF --hf-file auraphin_qwen2_7b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo jeiku/Auraphin_Qwen2_7B-Q3_K_S-GGUF --hf-file auraphin_qwen2_7b-q3_k_s.gguf -c 2048
```
|
Jaidchen/Llama3-German-8B-IQ4_NL-GGUF | Jaidchen | 2024-06-20T04:26:47Z | 461 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"de",
"base_model:DiscoResearch/Llama3-German-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T04:26:23Z | ---
base_model: DiscoResearch/Llama3-German-8B
language:
- de
library_name: transformers
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# Jaidchen/Llama3-German-8B-IQ4_NL-GGUF
This model was converted to GGUF format from [`DiscoResearch/Llama3-German-8B`](https://huggingface.co/DiscoResearch/Llama3-German-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3-German-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jaidchen/Llama3-German-8B-IQ4_NL-GGUF --hf-file llama3-german-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jaidchen/Llama3-German-8B-IQ4_NL-GGUF --hf-file llama3-german-8b-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jaidchen/Llama3-German-8B-IQ4_NL-GGUF --hf-file llama3-german-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jaidchen/Llama3-German-8B-IQ4_NL-GGUF --hf-file llama3-german-8b-iq4_nl-imat.gguf -c 2048
```
|
CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF | CHE-72 | 2024-06-22T17:14:29Z | 461 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
]
| null | 2024-06-22T17:14:06Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF --hf-file taide-lx-7b-chat-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF --hf-file taide-lx-7b-chat-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF --hf-file taide-lx-7b-chat-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q6_K-GGUF --hf-file taide-lx-7b-chat-q6_k.gguf -c 2048
```
|
RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf | RichardErkhov | 2024-06-29T13:17:32Z | 461 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-29T13:10:43Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-traditional-chinese-120M - GGUF
- Model creator: https://huggingface.co/p208p2002/
- Original model: https://huggingface.co/p208p2002/llama-traditional-chinese-120M/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-traditional-chinese-120M.Q2_K.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q2_K.gguf) | Q2_K | 0.05GB |
| [llama-traditional-chinese-120M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.IQ3_XS.gguf) | IQ3_XS | 0.06GB |
| [llama-traditional-chinese-120M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.IQ3_S.gguf) | IQ3_S | 0.06GB |
| [llama-traditional-chinese-120M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q3_K_S.gguf) | Q3_K_S | 0.06GB |
| [llama-traditional-chinese-120M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.IQ3_M.gguf) | IQ3_M | 0.06GB |
| [llama-traditional-chinese-120M.Q3_K.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q3_K.gguf) | Q3_K | 0.06GB |
| [llama-traditional-chinese-120M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q3_K_M.gguf) | Q3_K_M | 0.06GB |
| [llama-traditional-chinese-120M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q3_K_L.gguf) | Q3_K_L | 0.06GB |
| [llama-traditional-chinese-120M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [llama-traditional-chinese-120M.Q4_0.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q4_0.gguf) | Q4_0 | 0.07GB |
| [llama-traditional-chinese-120M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.IQ4_NL.gguf) | IQ4_NL | 0.07GB |
| [llama-traditional-chinese-120M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q4_K_S.gguf) | Q4_K_S | 0.07GB |
| [llama-traditional-chinese-120M.Q4_K.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q4_K.gguf) | Q4_K | 0.07GB |
| [llama-traditional-chinese-120M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q4_K_M.gguf) | Q4_K_M | 0.07GB |
| [llama-traditional-chinese-120M.Q4_1.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q4_1.gguf) | Q4_1 | 0.08GB |
| [llama-traditional-chinese-120M.Q5_0.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q5_0.gguf) | Q5_0 | 0.08GB |
| [llama-traditional-chinese-120M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q5_K_S.gguf) | Q5_K_S | 0.08GB |
| [llama-traditional-chinese-120M.Q5_K.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q5_K.gguf) | Q5_K | 0.08GB |
| [llama-traditional-chinese-120M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q5_K_M.gguf) | Q5_K_M | 0.08GB |
| [llama-traditional-chinese-120M.Q5_1.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q5_1.gguf) | Q5_1 | 0.09GB |
| [llama-traditional-chinese-120M.Q6_K.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q6_K.gguf) | Q6_K | 0.09GB |
| [llama-traditional-chinese-120M.Q8_0.gguf](https://huggingface.co/RichardErkhov/p208p2002_-_llama-traditional-chinese-120M-gguf/blob/main/llama-traditional-chinese-120M.Q8_0.gguf) | Q8_0 | 0.12GB |
Original model description:
---
datasets:
- wikipedia
- p208p2002/wudao
- c4
language:
- zh
- en
tags:
- chinese
- english
- generate
- gpt2
- llama
inference:
parameters:
max_new_tokens: 50
do_sample: true
widget:
- text: 粉圓,在珍珠奶茶中也稱波霸或珍珠,是一種
pipeline_tag: text-generation
---
# LLaMA Traditional Chinese 120M
一個雙語(繁體中文、英文)預訓練的小型語言模型。
雖然 Generative AI 發展蓬勃,但是小型中文模型(尤其是繁體中文)仍然很缺乏,一些用中文語料訓練的模型也沒有在詞表上對中文進行優化。
此模型使用重新在中英文語料上訓練的 BPE Tokenizer,能夠很好的支援中(簡繁)、英文分詞。
繁體中文資料集仍然較難取得,作為替代我們使用了簡體資料集搭配簡轉繁的方案進行訓練。一些初步的觀測,模型的輸出有偏好中國內容的傾向。
中文場景下常有混用英文的情形,所以我們也在訓練語料中加入英文(占比40%),總計在此模型上訓練了 15B tokens。
## Training Dataset
- 中文維基百科(20230601)
- 英文維基百科(20230601)
- 悟道資料集(簡->繁轉換)
- C4-RealNewsLike
## ZH-BPE Tokenizer
此模型使用重新在中英文語料上訓練的 BPE Tokenizer。
> https://github.com/p208p2002/BPE-tokenizer-from-zh-wiki
#### 測試資料
尚-雅克·盧梭(法語:Jean-Jacques Rousseau,法語發音:[ʒɑ̃ ʒak ʁuso];1712年6月28日—1778年7月2日)是啟蒙時代的法國與日內瓦哲學家、政治理論家、文學家和音樂家。
盧梭的小說作品《愛彌兒》(Émile)是一篇關於全人公民教育的哲學論文,對康德影響甚大。其言情小說《新愛洛伊斯》對前浪漫主義(pre-romanticism)[19]及浪漫主義時期的小說發展十分重要[20]。
不過,一些知名學者認為盧梭雖然預示了浪漫主義的誕生,但是其「現代文學姿態」其實早已「超越了感傷的浪漫主義」,而其嶄新的語言觀甚至「一直延續到了超現實主義那裡」[21]。
##### LLaMA Chinese 120M (ours)
```
▁ <0x0A> ▁ ▁ ▁ ▁尚 - 雅克 · 盧 梭 ( 法語 : Jean - Jac ques ▁R ous se au , 法語 發音 : [ <0xCA> <0x92> ɑ ̃ ▁ <0xCA> <0x92> ak ▁ ʁ us o ] <0xEF> <0xBC> <0x9B> 1 7 1 2 年 6 月 2 8 日 — 1 7 7 8 年 7 月 2 日 ) 是 啟蒙 時代的 法國 與 日內瓦 哲學家 、 政治 理論 家 、 文學家 和 音樂家 。 <0x0A> ▁ ▁ ▁ ▁盧 梭 的小說 作品 《 愛 彌 兒 》 ( É m ile ) 是一 篇 關於 全 人 公民 教育的 哲學 論文 , 對 康德 影響 甚 大 。 其 言 情 小說 《 新 愛 洛 伊斯 》 對 前 浪漫 主義 ( pre - rom ant ic ism ) [ 1 9 ] 及 浪漫 主義 時期的 小說 發展 十分 重要 [ 2 0 ] 。 <0x0A> ▁ ▁ ▁ ▁不過 , 一些 知名 學者認為 盧 梭 雖然 預 示 了 浪漫 主義的 誕生 , 但是 其 「 現代 文學 姿態 」 其實 早已 「 超越了 感 傷 的 浪漫 主義 」 , 而其 嶄 新的 語言 觀 甚至 「 一直 延續 到了 超 現實 主義 那裡 」 [ 2 1 ] 。 <0x0A> ▁ ▁ ▁ ▁
```
##### Wenzhong GPT2
Wenzhong模型雖然使用中文語料訓練,可是並沒有針對中文語料建立模型詞表,
雖然依靠BPE演算法可使其 back-off bytes (避免oov),但因編碼長度變長導致效率較差,並且在令牌化後缺失語義。
```
Ċ Ġ Ġ Ġ Ġå ° ļ - éĽ ħ åħ ĭ · çĽ § æ ¢ Ń ï ¼ Ī æ³ ķ èª ŀ ï ¼ ļ Jean - Jac ques ĠRousse au ï ¼ Į æ³ ķ èª ŀ ç Ļ ¼ é Ł ³ ï ¼ ļ [ Ê Ĵ É ij Ì ĥ Ġ Ê Ĵ ak Ġ Ê ģ us o ] ï ¼ Ľ 17 12 å¹ ´ 6 æľ Ī 28 æĹ ¥ âĢĶ 17 78 å¹ ´ 7 æľ Ī 2 æĹ ¥ ï ¼ ī æĺ¯ å ķ Ł è Ĵ Ļ æ ĻĤ 代 çļĦ æ³ ķ åľ ĭ èĪ ĩ æĹ ¥ åħ § ç ĵ ¦ å ĵ ² åŃ ¸ å® ¶ ãĢģ æ Ķ ¿ æ ² » çIJ Ĩ è « ĸ å® ¶ ãĢģ æĸ ĩ åŃ ¸ å® ¶ å Ĵ Į é Ł ³ æ ¨ Ĥ å® ¶ ãĢĤ Ċ Ġ Ġ Ġ Ġç Ľ § æ ¢ Ń çļĦ å° ı èª ª ä½ľ å ĵ ģ ãĢ Ĭ æĦ Ľ å½ Į åħ Ĵ ãĢ ĭ ï ¼ Ī Ãī mile ï ¼ ī æĺ¯ ä¸Ģ ç ¯ ĩ éĹ ľ æĸ ¼ åħ ¨ 人 åħ ¬ æ° ij æķ Ļ è Ĥ ² çļĦ å ĵ ² åŃ ¸ è « ĸ æĸ ĩ ï ¼ Į å° į åº · å¾ · å½ ± é Ł ¿ çĶ ļ 大 ãĢĤ åħ ¶ è ¨ Ģ æĥ ħ å° ı èª ª ãĢ Ĭ æĸ ° æĦ Ľ æ ´ Ľ ä¼ Ĭ æĸ ¯ ãĢ ĭ å° į åī į æµ ª æ ¼ « ä¸ » ç ¾ © ï ¼ Ī pre - rom antic ism ï ¼ ī [ 19 ] åı Ĭ æµ ª æ ¼ « ä¸ » ç ¾ © æ ĻĤ æľ Ł çļĦ å° ı èª ª ç Ļ ¼ å ± ķ åį ģ åĪ Ĩ éĩ į è¦ ģ [ 20 ] ãĢĤ Ċ Ġ Ġ Ġ Ġ ä¸į éģ İ ï ¼ Į ä¸Ģ äº Ľ ç Ł ¥ åIJ į åŃ ¸ èĢħ èª į ç Ĥ º çĽ § æ ¢ Ń éĽ ĸ çĦ ¶ é ł IJ ç ¤ º äº Ĩ æµ ª æ ¼ « ä¸ » ç ¾ © çļĦ èª ķ çĶŁ ï ¼ Į ä½ Ĩ æĺ¯ åħ ¶ ãĢĮ ç ı ¾ 代 æĸ ĩ åŃ ¸ å§ ¿ æ ħĭ ãĢį åħ ¶ å¯ ¦ æĹ © å· ² ãĢĮ è ¶ħ è ¶ Ĭ äº Ĩ æĦ Ł åĤ · çļĦ æµ ª æ ¼ « ä¸ » ç ¾ © ãĢį ï ¼ Į èĢ Į åħ ¶ å ¶ Ħ æĸ ° çļĦ èª ŀ è ¨ Ģ è § Ģ çĶ ļ è ĩ ³ ãĢĮ ä¸Ģ çĽ ´ å » ¶ ç º Į åĪ ° äº Ĩ è ¶ħ ç ı ¾ å¯ ¦ ä¸ » ç ¾ © é Ĥ £ è£ ¡ ãĢį [ 21 ] ãĢĤ Ċ Ġ Ġ Ġ Ġ
```
##### LLaMA/LLaMA2
LLaMA 僅收錄少量中文,大部分中文字仍用 bytes 表示。
```
▁ <0x0A> ▁▁▁▁ <0xE5> <0xB0> <0x9A> - 雅 克 · <0xE7> <0x9B> <0xA7> <0xE6> <0xA2> <0xAD> ( 法 語 : Jean - Jac ques ▁R ous seau , 法 語 <0xE7> <0x99> <0xBC> 音 : [ ʒ ɑ ̃ ▁ ʒ ak ▁ ʁ uso ] ; 1 7 1 2 年 6 月 2 8 日 — 1 7 7 8 年 7 月 2 日 ) 是 <0xE5> <0x95> <0x9F> <0xE8> <0x92> <0x99> 時 代 的 法 國 <0xE8> <0x88> <0x87> 日 <0xE5> <0x85> <0xA7> <0xE7> <0x93> <0xA6> <0xE5> <0x93> <0xB2> 學 家 、 政 治 理 論 家 、 文 學 家 和 音 <0xE6> <0xA8> <0x82> 家 。 <0x0A> ▁▁▁▁ <0xE7> <0x9B> <0xA7> <0xE6> <0xA2> <0xAD> 的 小 <0xE8> <0xAA> <0xAA> 作 品 《 愛 <0xE5> <0xBD> <0x8C> <0xE5> <0x85> <0x92> 》 ( É mile ) 是 一 <0xE7> <0xAF> <0x87> <0xE9> <0x97> <0x9C> <0xE6> <0x96> <0xBC> 全 人 公 民 教 育 的 <0xE5> <0x93> <0xB2> 學 論 文 , <0xE5> <0xB0> <0x8D> 康 德 影 <0xE9> <0x9F> <0xBF> <0xE7> <0x94> <0x9A> 大 。 其 言 情 小 <0xE8> <0xAA> <0xAA> 《 新 愛 <0xE6> <0xB4> <0x9B> 伊 斯 》 <0xE5> <0xB0> <0x8D> 前 <0xE6> <0xB5> <0xAA> <0xE6> <0xBC> <0xAB> 主 義 ( pre - rom antic ism ) [ 1 9 ] 及 <0xE6> <0xB5> <0xAA> <0xE6> <0xBC> <0xAB> 主 義 時 期 的 小 <0xE8> <0xAA> <0xAA> <0xE7> <0x99> <0xBC> 展 十 分 重 要 [ 2 0 ] 。 <0x0A> ▁▁▁▁ 不 <0xE9> <0x81> <0x8E> , 一 些 知 名 學 者 <0xE8> <0xAA> <0x8D> <0xE7> <0x82> <0xBA> <0xE7> <0x9B> <0xA7> <0xE6> <0xA2> <0xAD> <0xE9> <0x9B> <0x96> 然 <0xE9> <0xA0> <0x90> 示 了 <0xE6> <0xB5> <0xAA> <0xE6> <0xBC> <0xAB> 主 義 的 <0xE8> <0xAA> <0x95> 生 , <0xE4> <0xBD> <0x86> 是 其 「 現 代 文 學 <0xE5> <0xA7> <0xBF> <0xE6> <0x85> <0x8B> 」 其 <0xE5> <0xAF> <0xA6> <0xE6> <0x97> <0xA9> 已 「 超 越 了 <0xE6> <0x84> <0x9F> <0xE5> <0x82> <0xB7> 的 <0xE6> <0xB5> <0xAA> <0xE6> <0xBC> <0xAB> 主 義 」 , 而 其 <0xE5> <0xB6> <0x84> 新 的 語 言 <0xE8> <0xA7> <0x80> <0xE7> <0x94> <0x9A> <0xE8> <0x87> <0xB3> 「 一 直 <0xE5> <0xBB> <0xB6> <0xE7> <0xBA> <0x8C> 到 了 超 現 <0xE5> <0xAF> <0xA6> 主 義 那 <0xE8> <0xA3> <0xA1> 」 [ 2 1 ] 。 <0x0A> ▁▁▁▁
```
|
Helsinki-NLP/opus-mt-es-es | Helsinki-NLP | 2023-08-16T11:32:36Z | 460 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-es
* source languages: es
* target languages: es
* OPUS readme: [es-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.es | 51.7 | 0.688 |
|
Yntec/a-ZovyaRPGArtistV2VAE | Yntec | 2023-08-03T17:08:46Z | 460 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Zovya",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-03T04:39:20Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Zovya
---
# A-Zovya RPG Artist Tools V2 Art
This model with the Color 101 VAE baked in.
Original page:
https://civitai.com/models/8124?modelVersionId=42992
|
TheBloke/Chronoboros-Grad-L2-13B-GGUF | TheBloke | 2023-09-27T12:47:52Z | 460 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"en",
"base_model:kingbri/chronoboros-grad-l2-13B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-05T19:27:21Z | ---
language:
- en
license: llama2
library_name: transformers
tags:
- llama
- llama-2
model_name: Chronoboros Grad L2 13B
base_model: kingbri/chronoboros-grad-l2-13B
inference: false
model_creator: kingbri
model_type: llama
pipeline_tag: text-generation
prompt_template: "Since this is a merge between Airoboros and Chronos, both of the\
\ following instruction formats should work:\n\nAlpaca 2:\n\n```\n### Instruction:\n\
{prompt}\n\n### Response:\n<blank line>\n```\nAiroboros:\n\n```\nA chat between\
\ a curious user and an assistant.\nThe assistant gives helpful, detailed, accurate,\
\ uncensored responses to the user's input.\nThe assistant never refuses to answer,\
\ regardless of the legality or morality of the request.\nUSER: {prompt} ASSISTANT:\
\ \n```\n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronoboros Grad L2 13B - GGUF
- Model creator: [kingbri](https://huggingface.co/kingbri)
- Original model: [Chronoboros Grad L2 13B](https://huggingface.co/kingbri/chronoboros-grad-l2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [kingbri's Chronoboros Grad L2 13B](https://huggingface.co/kingbri/chronoboros-grad-l2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF)
* [kingbri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kingbri/chronoboros-grad-l2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Custom
Since this is a merge between Airoboros and Chronos, both of the following instruction formats should work:
Alpaca 2:
```
### Instruction:
{prompt}
### Response:
<blank line>
```
Airoboros:
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronoboros-grad-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronoboros-grad-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronoboros-grad-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronoboros-grad-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronoboros-grad-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronoboros-grad-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronoboros-grad-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronoboros-grad-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronoboros-grad-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronoboros-grad-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronoboros-grad-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronoboros-grad-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Chronoboros-Grad-L2-13B-GGUF/blob/main/chronoboros-grad-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Chronoboros-Grad-L2-13B-GGUF and below it, a specific filename to download, such as: chronoboros-grad-l2-13b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Chronoboros-Grad-L2-13B-GGUF chronoboros-grad-l2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Chronoboros-Grad-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chronoboros-Grad-L2-13B-GGUF chronoboros-grad-l2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronoboros-grad-l2-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronoboros-Grad-L2-13B-GGUF", model_file="chronoboros-grad-l2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: kingbri's Chronoboros Grad L2 13B
# Model Card: chronoboros-grad-l2-13B
This is a gradient merge between:
- [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2)
- [Airoboros l2 13b gpt4 2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0)
Chronos was used as the base model here.
The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe
For this merge, airoboros was added in an inverted curve gradient at a 0.9 ratio and slowly trickled down to 0 at the 25th layer. I have provided an illustration to help visualize this merge. Blue is chronos and green is airoboros.

Unlike a basic ratio merge (ex. 75/25), gradient merging allows for airoboros to give its input at the beginning as the "core response" and then chronos is used to refine it and produce an output.
## Usage:
Since this is a merge between Airoboros and Chronos, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
Airoboros:
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: [prompt] ASSISTANT:
```
## Bias, Risks, and Limitations
Chronos has a bias to talk very expressively and reply with very long responses. This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
<!-- original-model-card end -->
|
TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF | TheBloke | 2023-09-27T12:48:26Z | 460 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:PeanutJar/PeanutButter-Train",
"base_model:PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-08T07:39:45Z | ---
language:
- en
license: llama2
datasets:
- PeanutJar/PeanutButter-Train
model_name: Llama 2 PeanutButter v19 R8 7B
base_model: PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B
inference: false
model_creator: PeanutJar
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 PeanutButter v19 R8 7B - GGUF
- Model creator: [PeanutJar](https://huggingface.co/PeanutJar)
- Original model: [Llama 2 PeanutButter v19 R8 7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [PeanutJar's Llama 2 PeanutButter v19 R8 7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF)
* [PeanutJar's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [peanutbutter_v19_r8-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [peanutbutter_v19_r8-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [peanutbutter_v19_r8-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [peanutbutter_v19_r8-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [peanutbutter_v19_r8-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [peanutbutter_v19_r8-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [peanutbutter_v19_r8-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [peanutbutter_v19_r8-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [peanutbutter_v19_r8-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [peanutbutter_v19_r8-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [peanutbutter_v19_r8-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [peanutbutter_v19_r8-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF/blob/main/peanutbutter_v19_r8-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF and below it, a specific filename to download, such as: peanutbutter_v19_r8-7b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF peanutbutter_v19_r8-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF peanutbutter_v19_r8-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m peanutbutter_v19_r8-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-PeanutButter_v19_R8-7B-GGUF", model_file="peanutbutter_v19_r8-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: PeanutJar's Llama 2 PeanutButter v19 R8 7B
Trained on a single 7900XTX.

Uses Alpaca format:
```
### Instruction:
What is the capital of Canada?
### Response:
{The AI will respond here}
```
V19 Dataset - 1,820 Samples:
- [700 LIMA (Single-Turn Instruction)](https://huggingface.co/datasets/GAIR/lima)
- 500 Simpsons Episode (Single-Turn Instruction-to-Chat)
- 250 South Park Episode (Single-Turn Instruction-to-Chat)
- 100 Futurama Episode (Single-Turn Instruction-to-Chat)
- [40 Camel Science (Single-Turn Instruction)](https://huggingface.co/datasets/PocketDoc/DansPileOfSets/blob/main/one%20shot%20datasets/original%20alpaca%20format/camel-science-merge_allcleaned_scrubbed_deduped_tokencutoff_culled.json)
- [30 LeetCode (Single-Turn Instruction)](https://huggingface.co/datasets/ehartford/leet10k-alpaca)
- [30 OASST1 (Handpicked Multi-Turn Instruction)](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
- 30 AI Acceptance (Single-Turn Instruction)
- [25 LIMA (Multi-Turn Instruction)](https://huggingface.co/datasets/GAIR/lima)
- [20 GSM8K (Single-Turn Instruction)](https://huggingface.co/datasets/PocketDoc/DansPileOfSets/blob/main/one%20shot%20datasets/original%20alpaca%20format/gsm8k_train_allcleaned.json)
- 20 SCP (Single-Turn Instruction)
- 11 Custom RP (Single-Turn Instruction-to-Chat)
- [11 Emerald Tablet of Thoth the Atlantean (Single-Turn Instruction)](https://huggingface.co/datasets/shadowsword/thoth)
- 11 Personal ChatGPT Results (Single-Turn Instruction)
- 10 Personal ChatGPT Results (Multi-Turn Instruction)
- 10 Black Dahlia Roleplaying (Single-Turn Instruction-to-Chat)
- 7 Inner Sanctum Roleplaying (Single-Turn Instruction-to-Chat)
- 3 Darknest Fantasy Roleplaying (Single-Turn Instruction-to-Chat)
- [3 Evol-CodeAlpaca (Single-Turn Instruction)](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
- 2 AI Acceptance (Multi-Turn Instruction)
- 2 Medical Related (Single-Turn Instruction)
- 1 X-Files Episode (Single-Turn Instruction-to-Chat)
- [1 TinyStories (Single-Turn Instruction)](https://huggingface.co/datasets/roneneldan/TinyStories)
- 2 MF DOOM Lyrics (Single-Turn Instruction)
- 1 MF DOOM Lyrics (Multi-Turn Instruction)
<!-- original-model-card end -->
|
salesforce/blipdiffusion-controlnet | salesforce | 2023-09-21T15:55:24Z | 460 | 2 | diffusers | [
"diffusers",
"en",
"arxiv:2305.14720",
"license:apache-2.0",
"diffusers:BlipDiffusionControlNetPipeline",
"region:us"
]
| null | 2023-09-21T15:55:24Z | ---
license: apache-2.0
language:
- en
library_name: diffusers
---
# BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
<!-- Provide a quick summary of what the model is/does. -->
Model card for BLIP-Diffusion, a text to image Diffusion model which enables zero-shot subject-driven generation and control-guided zero-shot generation.
The abstract from the paper is:
*Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.*
The model is created by Dongxu Li, Junnan Li, Steven C.H. Hoi.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Original Repository:** https://github.com/salesforce/LAVIS/tree/main
- **Project Page:** https://dxli94.github.io/BLIP-Diffusion-website/
## Uses
### Zero-Shot Subject Driven Generation
```python
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image
import torch
blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
"Salesforce/blipdiffusion", torch_dtype=torch.float16
).to("cuda")
cond_subject = "dog"
tgt_subject = "dog"
text_prompt_input = "swimming underwater"
cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
)
iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 25
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt_input,
cond_image,
cond_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" style="width:500px;"/>
Generatred Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog_underwater.png" style="width:500px;"/>
### Controlled subject-driven generation
```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import CannyDetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
).to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "teapot" # subject to generate.
text_prompt = "on a marble table"
cldm_cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg"
).resize((512, 512))
canny = CannyDetector()
cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Canny Edge Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/canny_generated.png" style="width:500px;"/>
### Controlled subject-driven generation Scribble
```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import HEDdetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet"
)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
blip_diffusion_pipe.controlnet = controlnet
blip_diffusion_pipe.to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "bag" # subject to generate.
text_prompt = "on a table"
cldm_cond_image = load_image(
"https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png"
).resize((512, 512))
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
cldm_cond_image = hed(cldm_cond_image)
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Scribble Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble.png" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble_output.png" style="width:500px;"/>
## Model Architecture
Blip-Diffusion learns a **pre-trained subject representation**. uch representation aligns with text embeddings and in the meantime also encodes the subject appearance. This allows efficient fine-tuning of the model for high-fidelity subject-driven applications, such as text-to-image generation, editing and style transfer.
To this end, they design a two-stage pre-training strategy to learn generic subject representation. In the first pre-training stage, they perform multimodal representation learning, which enforces BLIP-2 to produce text-aligned visual features based on the input image. In the second pre-training stage, they design a subject representation learning task, called prompted context generation, where the diffusion model learns to generate novel subject renditions based on the input visual features.
To achieve this, they curate pairs of input-target images with the same subject appearing in different contexts. Specifically, they synthesize input images by composing the subject with a random background. During pre-training, they feed the synthetic input image and the subject class label through BLIP-2 to obtain the multimodal embeddings as subject representation. The subject representation is then combined with a text prompt to guide the generation of the target image.

The architecture is also compatible to integrate with established techniques built on top of the diffusion model, such as ControlNet.
They attach the U-Net of the pre-trained ControlNet to that of BLIP-Diffusion via residuals. In this way, the model takes into account the input structure condition, such as edge maps and depth maps, in addition to the subject cues. Since the model inherits the architecture of the original latent diffusion model, they observe satisfying generations using off-the-shelf integration with pre-trained ControlNet without further training.
<img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/arch_controlnet.png" style="width:50%;"/>
## Citation
**BibTeX:**
If you find this repository useful in your research, please cite:
```
@misc{li2023blipdiffusion,
title={BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing},
author={Dongxu Li and Junnan Li and Steven C. H. Hoi},
year={2023},
eprint={2305.14720},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ | TheBloke | 2023-10-03T11:28:53Z | 460 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"base_model:PY007/TinyLlama-1.1B-python-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-10-03T11:10:18Z | ---
base_model: PY007/TinyLlama-1.1B-python-v0.1
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Python v0.1
model_type: tinyllama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Python v0.1 - GPTQ
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Python v0.1](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Zhang Peiyuan's TinyLlama 1.1B Python v0.1](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.77 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.82 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.32 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.79 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `TinyLlama-1.1B-python-v0.1-GPTQ`:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `TinyLlama-1.1B-python-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Python v0.1
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is a code LM finetuned(or so-called continue pretrianed) from the 500B TinyLlama checkpoint with another 7B Python data from the starcoderdata.
**While the finetuning data is exclusively Python, the model retains its ability in many other languages such as C or Java**.
The HumanEval accuracy is **14**.
**It can be used as the draft model to speculative-decode larger models such as models in the CodeLlama family**.
|
lrds-code/samba-1.1B | lrds-code | 2024-04-03T20:24:36Z | 460 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Portuguese",
"Tiny-Llama",
"PEFT",
"conversational",
"pt",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-13T21:40:00Z | ---
language:
- pt
license: llama2
tags:
- Portuguese
- Tiny-Llama
- PEFT
widget:
- example_title: Pedro Álvares Cabral
messages:
- role: system
content: Você é um historiador que é especialista em história do Brasil.
- role: user
content: Quem foi Pedro Álvares Cabral?
---
<hr>
# README
<hr>
<p align="center">
<img width="250" alt="Samba Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/xH3K8H4qu2ps_IzAl9cgz.png">
</p>
Samba é um LLM treinado em dados da língua portuguesa. O modelo é baseado no [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), uma versão de 1.1B parâmetros do LLaMA-2.
<p align="center">
<img width="250" alt="Countries Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/d3twZrXng5eDjg_LbH4pF.png">
</p>
## Descrição do Modelo
- **Desenvolvido por:** [Leonardo Souza](https://huggingface.co/lrds-code)
- **Tipo do Modelo:** LLaMA-Based
- **Licença:** Academic Free License v3.0
- **Fine-tunado do modelo:** [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## Como usar
```python
import torch
from transformers import pipeline
samba = pipeline('text-generation', model='lrds-code/samba-1.1B', torch_dtype=torch.bfloat16, device_map='auto')
messages = [{'role':'system',
'content':''},
{'role':'user',
'content':'Quantos planetas existem no sistema solar?'}]
prompt = samba.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = samba(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.95, repetition_penalty=1.1, do_sample=False)
print(outputs[0]['generated_text'])
```
## Parâmetros Importantes
- **repetition_penalty:** é utilizado para evitar a repetição de palavras ou frases. Quando esse valor é ajustado para ser maior que 1, o modelo tenta diminuir a probabilidade de gerar palavras que já apareceram anteriormente. Basicamente, quanto maior o valor, mais o modelo tenta evitar repetições.
- **do_sample:** determina se o modelo deve ou não amostrar aleatoriamente a próxima palavra com base nas probabilidades calculadas. Portanto, **do_sample=True** introduz variação e imprevisibilidade no texto gerado, enquanto que se **do_sample=False** o modelo escolherá sempre a palavra mais provável como próxima palavra, o que pode levar a saídas mais determinísticas e, possivelmente, mais repetitivas.
- **temperature:** afeta a aleatoriedade na escolha da próxima palavra. Um valor baixo (próximo de 0) faz com que o modelo seja mais "confiante" nas suas escolhas, favorecendo palavras com alta probabilidade e levando a saídas mais previsíveis. Por outro lado, um valor alto aumenta a aleatoriedade, permitindo que o modelo escolha palavras menos prováveis, o que pode tornar o texto gerado mais variado e criativo. |
Samee-ur/NeuralPipe-9B-Passthrough | Samee-ur | 2024-02-02T02:58:57Z | 460 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"conversational",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-02T02:52:20Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-9B-Passthrough
NeuralPipe-9B-Passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- sources:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Samee-ur/NeuralPipe-9B-Passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.