modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Wvolf/ViT_Deepfake_Detection | Wvolf | 2024-01-08T14:43:57Z | 486 | 4 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-01-04T20:34:43Z | ---
language:
- en
metrics:
- accuracy
- f1
- mape
- confusion_matrix
- roc_auc
- recall
library_name: transformers
pipeline_tag: image-classification
---
<p>This model was trained by Rudolf Enyimba in partial fulfillment of the requirements
of Solent University for the degree of MSc Artificial Intelligence and Data Science</p>
<p>This model was trained to detect deepfake images.</p>
<p>The model achieved an accuracy of <strong>98.70%</strong> on the test set.</p>
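<p>A minimal sketch (not part of the original card) for running the classifier locally with the transformers <code>pipeline</code> API; the image path below is a placeholder:</p>

```python
from transformers import pipeline

# Load the deepfake detector and classify a local face image ("face.jpg" is a placeholder path).
detector = pipeline("image-classification", model="Wvolf/ViT_Deepfake_Detection")
print(detector("face.jpg"))  # e.g. [{'label': ..., 'score': ...}, ...]
```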
<p>Upload a face image or pick from the samples below to test model accuracy</p> |
recogna-nlp/bode-7b-alpaca-pt-br-gguf | recogna-nlp | 2024-02-01T13:11:46Z | 486 | 18 | transformers | [
"transformers",
"gguf",
"llama",
"LLM",
"Portuguese",
"Bode",
"Alpaca",
"Llama 2",
"text-generation",
"pt",
"en",
"arxiv:2401.02909",
"license:mit",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-26T00:18:26Z | ---
license: mit
language:
- pt
- en
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: text-generation
tags:
- LLM
- Portuguese
- Bode
- Alpaca
- Llama 2
inference: false
---
# BODE - GGUF VERSION
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-gguf/resolve/main/Logo_Bode_LLM_GGUF.jpeg" alt="Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This repository contains the 7B-parameter Bode model in GGUF format, in 32-bit and 16-bit versions as well as in 8-, 5- and 4-bit quantized versions.
Bode is a language model (LLM) for Portuguese, built from Llama 2 by fine-tuning on the Alpaca dataset translated into Portuguese by the authors of Cabrita. The model is designed for natural language processing tasks in Portuguese, such as text generation, machine translation, text summarization, and more.
The goal of developing Bode is to address the scarcity of LLMs for the Portuguese language. Classic models, such as LLaMA itself, can answer prompts in Portuguese, but they are prone to many grammatical errors and sometimes produce answers in English. There are still few Portuguese models available for free use and, to our knowledge, no models with 13B parameters or more trained specifically on Portuguese data.
See the [paper](https://arxiv.org/abs/2401.02909) for more information about Bode.
# About the GGUF format
The GGUF format allows the model to be used for inference with llama.cpp, on either CPU or GPU, as well as with other compatible libraries and tools, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [ctransformers](https://github.com/marella/ctransformers)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
## Model Details
- **Base model:** Llama 2
- **Training dataset:** Alpaca
- **Language:** Portuguese
## Available versions
| Parameter count | PEFT | Model |
| :-: | :-: | :-: |
| 7b | ✓ | [recogna-nlp/bode-7b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br) |
| 13b | ✓ | [recogna-nlp/bode-13b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br)|
| 7b | | [recogna-nlp/bode-7b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-no-peft) |
| 13b | | [recogna-nlp/bode-13b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-no-peft) |
| 7b-gguf | | [recogna-nlp/bode-7b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-gguf) |
| 13b-gguf | | [recogna-nlp/bode-13b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-gguf) |
## Usage
Below is an example of using a quantized version with ctransformers and LangChain (the snippet loads the 8-bit `q8_0` file):
```python
# Required packages
!pip install ctransformers
!pip install langchain

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import CTransformers

# Alpaca-style prompt template in Portuguese (the format Bode was fine-tuned on)
template = """Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.
### Instrução:
{instruction}
### Resposta:"""

prompt = PromptTemplate(template=template, input_variables=["instruction"])
llm = CTransformers(model="recogna-nlp/bode-7b-alpaca-pt-br-gguf", model_file="bode-7b-alpaca-q8_0.gguf", model_type='llama')
llm_chain = LLMChain(prompt=prompt, llm=llm)

response = llm_chain.run("O que é um bode?")
print(response)
# Example response (may vary with temperature): Um bode é um animal de quatro patas e membros postiados atrás, com um corpo alongado e coberto por pelagem escura.
```
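As an alternative, here is a minimal sketch using llama-cpp-python (one of the compatible tools listed above). It assumes the GGUF file has already been downloaded locally; the path and sampling parameters are illustrative and not part of the original card:

```python
from llama_cpp import Llama

# Load a local GGUF file (adjust the path to the quantized file you downloaded).
llm = Llama(model_path="bode-7b-alpaca-q8_0.gguf", n_ctx=2048)

# Same Alpaca-style prompt format used above.
prompt = (
    "Abaixo está uma instrução que descreve uma tarefa. "
    "Escreva uma resposta que complete adequadamente o pedido.\n"
    "### Instrução:\nO que é um bode?\n### Resposta:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["###"])
print(output["choices"][0]["text"])
```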
## Training and Data
The Bode model was fine-tuned from Llama 2 on the Portuguese-translated Alpaca dataset, an instruction-based dataset. Training was carried out on the Santos Dumont supercomputer at LNCC, under Fundunesp project 2019/00697-8.
## Citation
If you want to use Bode in your research, you can cite this [paper](https://arxiv.org/abs/2401.02909), which discusses the model in more detail. Cite it as follows:
```
@misc{bode2024,
title={Introducing Bode: A Fine-Tuned Large Language Model for Portuguese Prompt-Based Task},
author={Gabriel Lino Garcia and Pedro Henrique Paiola and Luis Henrique Morelli and Giovani Candido and Arnaldo Cândido Júnior and Danilo Samuel Jodas and Luis C. S. Afonso and Ivan Rizzo Guilherme and Bruno Elias Penteado and João Paulo Papa},
year={2024},
eprint={2401.02909},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Contributions to improve this model are welcome. Feel free to open issues and pull requests.
## Acknowledgements
We thank the Laboratório Nacional de Computação Científica (LNCC/MCTI, Brazil) for providing the HPC resources of the SDumont supercomputer. |
Artefact2/BagelMIsteryTour-v2-8x7B-GGUF | Artefact2 | 2024-01-31T10:14:18Z | 486 | 9 | null | [
"gguf",
"en",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-01-31T06:34:03Z | ---
license: cc-by-nc-4.0
language:
- en
---
<img src="data:image/jpg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gJkSUNDX1BST0ZJTEUAAQEAAAJUbGNtcwQwAABtbnRyUkdCIFhZWiAH6AABABAABAAzACZhY3NwQVBQTAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA9tYAAQAAAADTLWxjbXMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAtkZXNjAAABCAAAAD5jcHJ0AAABSAAAAEx3dHB0AAABlAAAABRjaGFkAAABqAAAACxyWFlaAAAB1AAAABRiWFlaAAAB6AAAABRnWFlaAAAB/AAAABRyVFJDAAACEAAAACBnVFJDAAACEAAAACBiVFJDAAACEAAAACBjaHJtAAACMAAAACRtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACIAAAAcAHMAUgBHAEIAIABJAEUAQwA2ADEAOQA2ADYALQAyAC4AMQAAbWx1YwAAAAAAAAABAAAADGVuVVMAAAAwAAAAHABOAG8AIABjAG8AcAB5AHIAaQBnAGgAdAAsACAAdQBzAGUAIABmAHIAZQBlAGwAeVhZWiAAAAAAAAD21gABAAAAANMtc2YzMgAAAAAAAQxCAAAF3v//8yUAAAeTAAD9kP//+6H///2iAAAD3AAAwG5YWVogAAAAAAAAb6AAADj1AAADkFhZWiAAAAAAAAAknwAAD4QAALbDWFlaIAAAAAAAAGKXAAC3hwAAGNlwYXJhAAAAAAADAAAAAmZmAADypwAADVkAABPQAAAKW2Nocm0AAAAAAAMAAAAAo9cAAFR7AABMzQAAmZoAACZmAAAPXP/bAEMACgcHCQcGCgkICQsLCgwPGRAPDg4PHhYXEhkkICYlIyAjIigtOTAoKjYrIiMyRDI2Oz1AQEAmMEZLRT5KOT9APf/bAEMBCwsLDw0PHRAQHT0pIyk9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Pf/AABEIAYABgAMBIgACEQEDEQH/xAAcAAACAwEBAQEAAAAAAAAAAAAEBQIDBgcBAAj/xABFEAACAQMCAwUFBQYEBgEEAwABAgMABBEFIRIxQQYTIlFhcYGRofAUMkKxwQcVI1LR4SQzYnIWQ1OCkvGyFzRjcyVEg//EABkBAAMBAQEAAAAAAAAAAAAAAAECAwAEBf/EACcRAAICAgICAgMAAgMAAAAAAAABAhEDIRIxE0EiUQQyYXGBI1Kh/9oADAMBAAIRAxEAPwDAKtWBNxXqJmr1jyy+ZqghWqZ9lWLHmrFSrUTHOgYrEOwr0Q0UiZq1Ycg4oWYGiiII8qNWLwVDAUirVkxzoWYCuo6EK8J9TTSRQ53oKaLhY0UY+gBzR8dvx4/Kg4Rgim9qQeH0rABu5MRPF86GmcEkEbU9ubbvYyRz86RXELKxBoJhBJEyee1QMXlV2N96mACu9EAA8RHsqkoRR8igk0O6eVYIIVqPDRBTyrzu80TA3DvX3DRHdZNe9zttQCDhdqkFq8Q1YIa1mBwhNS7rNECKrBFtWsAGIjUhFmjRDXohrWYD7mpdzmjBFtyr3u8GtZgLuN6msGRRXCDViJk79KxgQWx9tSaEhab21r3ikYqFzalVOVoWAROmM1WUOKYPASTUDBgcqIReV86+4aJeLeocFExUFqRA6VPhx7a+4axisJk173eKuValisYoEea+4MY9tEhBiouuAv8AurGKOH4VArvRDLg1ErvtWMUcO/Oolc0RwHyr4xVjF0UWTyolIcyY8kLZ94/rV0EOce3Fe2Obm+lB2AxGMnlv/ahZiAhx/wCqkI8Zo9Yc7V99mIO3ShZgeNOWeVXquQK9ERFWRrg1jFctqSnEKFJ4V3p/AgcYNAahY8HEQKBhS0uTtUwRIm/OqXjKGvFbBzRMWxjhcjpTC2OOVLc5OaMgfAFYw4iuFUcLbihLwxM2AN/rrUFYHrv6VVdHw5FAwumIDnFQ4q8kOWNVgkGmATdc71UUzV6DiOKt7ny5UAgQi3qYgyKKENWJDtQsIGtvUxbZ9lHrB5Cr47YmhZqFgtfSpfZfSnUdjsdv0oC71XTrRipk71xtiIZ39tByGUW+gYWvpXotj5V4muLKf4Vk5HIZYb0VHd8R/iWc6DzXxY91L5I/Y/hn9A4gI6V73GKZwJDdA9y4JHNTkEe6rDZ4O4plIRxa0xOYiOlRMdNHtschQ0kOM0bFoXld9quhGT6VMxb71dDFvRAMtOjXugT7Ktu1VkORuPnVEU3dxgCqJrlnJ3rGApkAY4HvoWQc6vlYk70OVzRQAZ13qGN6Idaq4Mk4ohKiueXSvQuKvCZr7uqJioLtXoWrhGTn0qQiJ5UDFIFSeJmQEL4VYZNXCAk7DejmtyNDnPCN5UGSo9eVYwsa3I/OvFtmPSnkNlxwxs3MqDUmtQq8uVazCQW2OlRaILTGRQuc0DMd61mGUMeME7Ch9GIF/LnkZRuPbXuoSJHaMjHDONvjXnZ8ASOxHhOxzQfQUMlThO1XKM1KRArjh6jOKsRARtQADsnnXywgNkUd3HEu1VGAqeXwrGJ2wwy0ye0S6iwxxS2McDrR0cxOPIUDCq/0AxI0inwjJ3pFJBw42rfwzI8Zjm3WlM3Z7vZGMLDhDY3rWYyQQg0RGMY8+VNr3RZIH+74eeQKGWzYYNGzA4cg19I3EtEyWhHKh3hKjfpWMAyRcQOKpCb70w7nYk159myTgUTA8MWTRixZXavY4MGoXdy1lImRGUIyVbIb3dKWwpWXLBk7irktsdK+jmWZ7QwMDHMxBI9BypqlvypbGoCjts42ol1is7d552CRoMsxo+C0yRt8awfazW/3lfG1tmzawnAx+M9T9frStjwjyYNq/aGfVZDDBmK1GwUc29pq3TdGkcqzLudwGGaho2ncUyFlDE7hTW50a1V5dQtZI3PcwCQyA4YbZyPiK5cuRt0jvxwUEDaZZPC/cixeXOAZEZBj5+laK17KTpCwjujIvNXmjDEeh8xvWa7P6D2ZvYLOG5ZoLu9DNb8DNxNjc+LHP2/CtZ2c1F7CyFpcXM1y0LFRLJjJHLf660mSHD9mL5HK+AFd9mmnWR/s80UsO3ecPCGPmu52oO2jaYtDOMToM5xjiFHal2wS8Vor7SNSgtYnyZo3XjQg/eCqeLHrULh4LiOG/sbr7QgXjEmAOPA9PX06UFKWJp+gUsqqXYuntOHO3KgZIM9K008KyxrLH91wGB9tLZrbBO1dyZwtCJ7YgmoBOHlTd4KFeDBO1OmIwFiRnFVYzmi3h51FbZj0pkAH+z8R2qQsSenuoyOIqeW1HwRh+dEBnpbJhnIqprQqp+NaWe05ELtQkloW5DesmYRLDgkeQNerCc4psLBlycb4O2KITSmDjw861hFEVmSeVHQ6YzY2p9bafGgBPP8AKie4CjA2FCzCJdKVT4hUdRgEWjSdAZUGPj/WnjxhQTQGspjs7I/Qyg+3esuzAUB
AtYzkfcG/uoW4m4iQvKvrN+/sVUc4/CR9elS7jB3rGF0xznNButN5bY9BVDWuOlFGF+rSq8wQfeTYirtFnKyiNFzxtxZ9gNA3brLdu6HIJztRWkZF8hA3328/rNF9BXZpSgKh19mPr218pwaKCrLCc81G3T651UYSBSroD7Jwy52PWiOEGhEjKnaiUyQPMVjFbgNeQwD78gYg+6rkXhODzG1Dk41y2J/CP60fLg3D+ROaBiCnGc1OGRgzEH8X6CvBGWbbrX1myXKSNCSypKVJ6ch/SgYaRhZkHEASByNfDRoZhtkDPIdKqtgVYUySVYoi7HAGTQCBN2bUwtwYLCkt3osiK+U5fKtvZyCSFJAchhnarbixS5iYDm1ZM1HMHsipwRUksc49a1N3ozpP4kyWPPzqUOkOreJCMbeyjZqM0LEg8qjNpRumAeeVIcYaOM8PF7T+lbcaFG+GVsA+dB3ukfZtxyNCwo5bawtputCORSJA3CFbO+/MV0GO3yx22pb2jhKWEM5HhguEdvQcv1rRW6B8Mu4bcEUrHb5JCLtbe/ujs1KyHEs/8JCOmeZ+ArltnH3spbGR6Vtf2pXXDcWVrnZYzIR7T5f9tZGw8A4uvKlfRfEjQ6IWFzFDjxSKWz7+XwrqdlBb2mnSyuicbxhGY/iHQfOsF2dsUmuoXP34dwQcc/8A3861mpXE8lgkkkRhOSQh2IG+M/XWuDlts6siukYppbzQ2a1tLr/CKx4FK+JQd8VM6+ILOXux4wu2fOjNU7Las2hNrB4DHjj7offVP5vrlWPjcXMbLjxDarSxynFOTJwyRTpI3HZjWoYrVJGayNp3BEwZf4xl88/Gj7DTYoLARW0YiEnFLwn+Y71kOz9jb/alkeMFs7MehrokJtLTTjNeMVjiXPhySQemBv1qOabyVCOimOHi+T3ZTpI73S+76wuU93MfI1Ga33P9Ks7PyrcXV93asqMqSKrDBA3/AK0bNAMnblXVhlcEcedVNiJ7bmfKqHt8jlUe0WqmwZbe2/8AuGwc4B57KAPMn8qo0d5JriRbv7YLqNcOsv3F92KumScdWfPbYJqPdY91NGhBJqItSx5VQkLO6yeVNtP05plJA5eQoq10njwWpxE1tYr3DTRJJzKswB+FCzUBHScoAfhXn7thiTdQT50z76M/ddG9jZoeVuLluDWMKbuCJbeThQcqr7scR29KvvB/Alx/LVncEmsYHSPFWGLCk9KJW2Pw86lJFxIR8qJhT3TXVykSnCsdzgnA9lZ3X72UyNZgL3UJwpKjI99bGxXuGuDvxEAADfPu+udYPW5++1O4cnm+Afl+laO2Z9AEMr28yyJuR0PI0/s5FvbcSADiX7wHQ+lZ9wRurbceT6UfoFyI5Jo5GxxAYLbZqjQENWh57UPcqkEBll8EYPDxGjm5+eelB9pFI0C3BHhadjj3f1FTsZKzHWY7+J2x4l39o+jRlnMtrJ3jHHD1ofQpe6mmHnC2MfXpXk+SwXfApvdBNnpd3HewN3Z2XbHLFHOY4YXkl2VBkmkXZL/KnU9SKP7QMywpGCeFmOQNs0ie6M17CrO6tr6R47cszIvESVI2oiSPgjZvIZoDsnbhWmlB8RYJg9R9GjtZZrexk7s4y3Dn69la90CvZnRIZJDK75bnnyrVpCJ7SCZTxMygP/ux/eswLaRY+8YDhPStd2dVW0QBhuGYj0oyAtirWZ/sOmvwyGOaTaMjY567+yguyd0ReSWznKTLxD0Yb5+vOg+0l9JPqUkDY7q3dlQAfGqNFufs2qRSKccJANFrQTeCPBx5bUFf3qsphQjAO7U2kHGqyBeHOxHrWaZMzN1yTSRdqwtUzR9n7jMRjJGARt5Vootj8qyeiHhMufJeftrVQMHCsORGaV9hMjP27ZNcurJ7FGjhmMasCc7HBJrQ2evWN5hGbunbkH5N7D+lc37X2smldr751Q8E5EynGQeLnt7QahpGtmNznLsdgTyXzNNLrQYq3s6zJFwnFVXcBnthn8PSsvY63e6cEDM17aEeWWUe2tha3EN9ZrPbsGjcbentFIt7QJJxdMzV5psdxBLbzpxRSqUZeW1JYNF7RWUP2Sw1CzaAbRyzxkyIPZyNbeW1yTtVSQ4fFYyZxvt32dl0m6t5Lu8mvp54+OSWXqc8gPKs9p28wHQHO/WusftS0oz6PZ3SjxRuYzjyO4/L51yi1Jhmweh61pbRfEza6G1xLc24tzwRGTMzAgMAOg9tauW8Oo2jyZjdQ5ClNio6K69GFYTs/cSQX3EpBgk357g/X5V0LTdSt7/R5ZEHEeIh1AHEWGx3+udcGRbo610jT99bvonfSlRbmDL55cPDvX5/s4s3EhQeEkkegrf6ut1daf3ME05tHw3dEFfXBHt6UitdGdXK8BHntmq+ePH+ko4GmEaDal5FAXY1rgt5FdF8xxWsKg8ZwTIcbjHSqdJ0sWlq00xWMKueJs7DrSXUdTR4ogNZnnKqV4Y8KsrbnOOnurmS9s6G+Tpeh52RYXF3qc43XwKCd/M05u2jghkmmcJHGOJmb8IrKdlf33ZaW97Y2MV7a3Enij4+GQcO2R0xzpnLpuqdopl/e8AsdNQ8X2QOGeY/6z5fXrXbiVQSOLNTm2IdP0aXW9QfW7qR4beWYSQQcO5Rfuknpy/vWheLxEgDfn9e+m0kHCuAMAdOWPdQxhyasmRk2xd3JNGWtsGXcYx1q9LbiPKiUi7taaxKLbaJUQt/LyHrWM7TAjXLrJ5n9K2qn+AntNY/tMv/APNXHt/Sli/kM1oz4Yqcg8vdV8F1NGy8MrgZHJjVTLsfOpQpl1qohq7k/wCCnY9Iyc0cHAGU3zvnnS/VmWLRboE+J0IHsxv+fzoiwkS6sIZIjlSgHvpb2GtIlJK5PPfyFVNcuoI6mrrgpBCZJWCpnGTQkc8FwxEUiuR0FEUMjPdWq8Awz4Zm6n31zDUgRdy+jfCuoEYgjHoBgVzHVNrubPPj/WjDsMgRjuP/ANma+hJFwmTyb9a+6j/fUoh/GGR+KqCm2tYke6jWTJVmAIG2aH7Yt31jBhQBk4UcgKKtR/iYvRgdqo7SLxWUP+79Ki+yi6MDo6EzMBgs0bc87UVc2rRMTkEEZyKr0c4vQeex2+FN9QVWszhQGL742zTX8g1oM7Mjhglzj7w/KidePEib7hjyoTSEaC2D5yjeI+nTlRWtHMceOrGgv2A+gvs4eG3mPk4O3TaiteGbNt/+ZQ3Z5uGGTPVx+VFa3hrNsfz5oewehA1w5XhPL8qYPq8um9n0EIXilcxknO21L+Dc15qu2jwA/wDWPKnFQmU8RPPfOSasgbD5Gx57VUhx7MnlVkW7f0pws6zEDPp8Qx/E4V8t9qz5iy7e01otOOLaH1wM9enX30peLEjbb8RrlxvtFJolpq8Pe+wfnWhs2KwR46CktkuJMY2bY02tzmBAOgwcdKMuxUBdrtDGsack8Ck3NvkgKAeNPxL68gfdXMZbaW2mZlU8LcuIc/fXZ4mIoOTRovtpuYY42D/50LgFXHmPX+lNGVGML2X1SSC+hR
gApccJz15fqfjXTrWSCaIy2zRsjnPEmME8v0pdd9mtPuLR4IIIrYsVPHEgB2/tmj7GyjsLcQQ8XACT4jk70HXaC23phOATVZhXiyKX67r1p2fsftF2SWc8MUS/ekbyH1tWKk7c6xeScUX2a0hGfCq94R7SdvlU5SUex4YpT6N1rOmpq2kXFmxAMi+Enow5H44rgWsabLYXkiyIVZG4WB24T1/KuraH2tur4kxXNvfqv3gq8J9xFS7T9nIO1FidQ04D7SBwyRnYvjoR0YUYZIy+IzhLHv0cm029ZGXGx9K1emSotu4ilmty8neS92ccX9qx9/p0+m3DDgYBTggjBFTtNZmtssh3+OfdUsuJvo6MeRUde0i9RruV7m8DoECJEUGw8yeZP1ivdQ0yxv3EsczRFW4lKNwlT6Vzi17QSmHikMYPTgUjhFV/v17aKVbe5cO7lwXHEFz0AqHjl0Nq7TNzrfaCO0VYEcGJyFB5/P3Vki0mt6illYAPLM2OJMeEdd/Z+VI5dQuLsrGWaaVj5cz6D311PshoFr2P0ZtS1bEN1KMHP4B0UetUhircgTycVUezWWdlFp9hDawgCOFAi+wVMoD0rOR9q7vU5immWSCMH/MnbmPYKMbU9RtQDc20Eqnn3TFSPjVPLBHO8E/YxkhD1T9nwd6tsryDULcTW7ZGcEHYqfI1HUZja6dcTpjijQsOKqJ30SarTPOAKKgVrFS9qHvr23MndxGPAjEn3BJ1kZeZx+EDzra2sv2i0jl4ZBxD/mRlD7cdP707VCnxGIl88msr2iTi1WYj0/KtW4wg99Z/WoeO+lY8zj8qWP7DPozDJnNSjTDCjHt8V9FAzSqqjLE4AHWq2IHdqVJ0eE7KO8wTy4tuVR7IScdrcITsrLjflXvawhdOXvGOzgIo36Um0DWU02V0eJ2EuBkEbe730sXaGkqNL2jBOlLj/q5+VI9EBW99oxT7XCH05QNxxg0n01OG7U0V0KaWRliRUQAsMKzt+lcpvGZpmDHJyRnn866lIAZCeY4vhvXLrsHv2yMeImjj9mmB8eRnHr7KmzYkOFG3zqsDwjzq14WSA3DAiMtwcXrjp7qq2LRp+ztyWReM54XGPSjNa/i28YHnt6VntI1C3t2HFMgHFnJ2ptNqVnPGoS4iY+QYVGXY66MppK/4lSx2Cnbyp/Cgki4ZFDEOTvWCTUbmMgq5XO3hAp/oOryw969xxzrsMEk8P1+lNJexl9GpsIzHAqkYwN681VcwwgcvKg7jU82wmsnUqRghxuvtFXRXX2zT4XnkjEmSCNh8qVfYGF6OeGNx0LCi79+8gK/6tqp05AISy7qTzG+asnw2RkDJ5bUbFFwTeh9aHDpkA6iXNMhFhtx7jQHaEAWkKg78ecf2pl2AQLzO+2Ty9tXQjceR9tUqpxy60RAhYjbB9cU4DrGmj/DRjy4ensoXusyMfU0XpZ4reMDqqk1Dg8Rz51x4+2WmQiThYEUT9rtdOtJJr2eOCIY8cjAUu1jV7fQtMkvbkcQTwpH1dugz7q5hcXt9rt415fSGU/hQbJGP9IppOtsOODkzpw7faOWItxdXAH4ki4R86Psu1+l3L8BeWEnrKmB8RXIjdCzl8PDk7b/hp/pWpW9wQrgCQjJ4cYqbk1st4onXY5FkQPGwZW5EHIPsNTrF6LqR0ydUMmbZz4lPTPUVswcijCfIlkx8Gca7Z6kdU7YXSTu4t7Zzboq5zgc8epP6eVGtb6JY6lpkEMBhsLpeKdmfiUv5EdOmfP8ANb20059O7WXqyoe7uH76MjqG5/PNK45IJFMbln4ujIWP5VW0amzcWU+n6ZfWLWtnb2l4yM1xFbsCgwdjkdDT7s9qAl7Q3UceAkqFyo6EHH61zyymjslPBbmIN+JtifdTPR9XvNMd9Qt7YyLMDEjlSQuMEnHtxXO05ZE0XpRxOzRftCg05pNMjuFjimu7lYnnP4Y84JPTqOdcevprNLyWOHj4EdlVxvxAHnj4fGtl+0KDUo5Yv3peGeeSEuYwMLEM8h8PyrG6dZxurvJEHwM+LO3urrqkcsbvQP3kfNblQfI5FanR/wBn2t63bRXMKRrbTDiSV5AAR5450PFotu9uQY4AeHAZlOR7q692EIXsnaxAcPclo8ZzyJ/Q1KVFeckgPsl+z+y7OFbiZhdX2PvlcKh/0j9ay/a7Wm1ftW1kjn7PangC5wC3U/XlXRNXvLi1gX7NwKcMzyyKWWNVGSeEEEn0yK4raTPH2iuPtAIlMpLBuYOfKpz/AFHwP5cmae4njhvtOtJyy2L5eVQSvHuBgkdN80erSRavBYWT2ltJcq7MiOzwxYbwsAd8lSNvOk3cW88OLoBlXxMTt78/XOkml36aldn908Fq0JYqWDN4SQPEfM89uXDQhxceLWhsikpWmb3spdvH2ilt2mSZZoi3HECqvg7Ng7g/eHX37Uz7cXslr2emSAHvZcDlsBzPlnby9az/AGLMlx2rlkaRpjDCxeRupOAPr0p1fNBr+svHcsv7q0/aXiOFlmIxjPoM08KXXRHJd77F/Yns86j96ajFLHcBiscUqY4R/N689qfa5rcejQJ4e8uJiRFHnn6n0o23vrOZxBbTxuyrsqnpXPe1l+0fblu98SQoqqD7P6mtkm6tGxQ5SqQ/k1O3itu+1e/uJJW3EFsSijyAxzpZNqqkiazlklhPOOVs/P69lL4b17O8huEjZ0CsNl4sHmDjO4/pQ87Xdzb3NxAkfeM2SJGWPLY5BSeZxnA86hwk0mns6binUlo0cZjurdZot0fln4URYxcN7ER5/Ck/ZOVp7O6B+6JFYDyJG/5CtNZwhSzkZIIxnp7q6eVwtnHKPGdIS9qVZ9OhYjYPz91ZONCJATnnzrX9rBxWELMd+8/Ssjw49tNj/UE+zb3jiWwThYMARuN6EsIuK5RQQM9T0oHSXCac0TsAe8yASPKmFq3DcIwPLrR6Qo5maJUkCcRfhY8R6bH+tcvuy7StxHPtroZlyH3/AAt+VYC4TilJ9KOM0ga0txNeRxvnEjhTj27037X2wt9NtURFSMS+FF5Db+1B6fERfwejr+dOO2i8WnwAdJefuNCT+SGj+rMAVww351NEYgudlG2enpVzQ5NM57dF7LxkDxNcHJ8xink6BFWZYQSKFZwnC3l0rS9mLGOe2meQNxcaovkfr9KzYl4o1Hlvk1s+xkyxWVwrkjikG/l9fpQydBh2VXlp3FxLGNwrYqjuieQp7fWgaaSVSDlsH0oLuCpx86y6A+w7h4ey8g4sNnoceVJ7GCW6uQiAswHFjOKayKW0Z0/1DYVXoa91esx6IRtQ9GHojyATnJGay+sKRqc+++dq1qyoEyfLmax+tni1iZgeRyN+VFMFAS+v1vV0JO43yatlb7VCLllAkLlGI/F1ryGPxbZIHsp07A0dQ0SXhsbZGAKiNfdRckXC56jmD5il2knFnbdDwLsd/lTV9wpPXNcq1JlH0ct/aPfm41+CwB/h2yA8Pmzf2x8aqsbfubB5GGxXIPMH6/Sh/wBoELQ9s55DsJFRxny4f
7U30BoL+ye3ZxkjIB60MvSL4nSdCzRuxOq9q45ryGSGCFWKq8pPjI6ADf0zSiyS4sNVltZxwTQuY5N84I9fdW0sO1l72TsJdOXT1uOFy0MjMVC58x1rL20c9zqMt5dH+NO5kZgMbk08pLiCClyNbHcqLaNfdg866JpExn0i1kPNohn21x641FortVzlAME7nhNdW7NXdtcaLbJbzpIY0COAd1bG4xUMapjZv1Pu0HZuz7SWYiuQySpkxSr95D/T0rj/AGosbrslfJaveQXJccQEZPEo/wBQ6fGukds9ZmmgGk6K/f305IeKNeLC9fFkcNcu1LS7zT9Ttk1NVV54Q48WTg7b567V0KKfZBSa6K9Lmjur2FtWNytkzEN3OOI+gzXa9JbQ7mK3TTzAwtk4I4+TIP8Aad64sJY5dImjfaSM5VgAOXKth2TtdD1PTiuqfaLO/gi71m+0OBJH/wBRcmg1XQ0ny7ZoP2pvwdkvFGzKZ14iFzgb/DpXHrS/jUkLG7AgjYCu4djvtMllcSySXDWTuVtVuTxPwDqfb+lLdM06C6/abqV0kSCO2hEYAUAcWAPjzplLVE6pnNo9YtkQJJ3i5GN0P10rqX7P7rj0OZmDxxCbKNIpQNkDlnptSf8AaO0FlqWl3CwxGSHidE4BgtkEZ8xtmgrK97+4S41fvr+5kHGImYBVHsOAN+lSnKtJFoQ5RtukdO/hXEZGUkU7HfINcq7cdkrjTrl9XsYX+yA8Mi8XEyY2Df7fafbTebULeDhuLG3e0mBP+UcHI5gry+NZ7UdTvdcjnaa9mAJOUDlFA9g9p9edaD5OmFxcPkmAwX8WoWTwtKI3ZCmeXzpdYpb6HaSQ28vfTSnxOBj3Af3pPbW8kckWeN0ZgeBWwSvz8q6/o9n2YhjH7u0ae8YbmaaLjJ/8jWlFQ02O5N7SFfZsz22jvBp2FuLk/wCIvpDwxQj+VSfvHny601nvNPsYY9FgWOSBeHMqyAuzn8Q9aW6hdyS6tKZYxG/EEtopEAVF6ADlmlaXpumkR7MAxyBWfKrw9c0nO1SB493IaaX2futNmvbyK5ubnUtOuOEwpjgmRsHYc9wT16VD9pOkyR3MOsQITGQI5gPwnoTTDsbqa/bdV+1zgNsxeRuYXY7n20v7W9qo9Vgk06xMn2VlPHOCR3mOg9NxVatElJxkYu6ub25Nu1pccCLnjAI29ce+iJdRkSZxE4dnHCGIyRnGfyFG9l+y1tqt/c6feXU0c6J3kbxgMGXqCD13+fpW80nsZpmiTLLEjzzrylm3x7F5Cgo1od5IvsF7LaLJpmiqtwvDPO3eOv8AL0A+Xzp+kaxIceYNWEYQ+fnXjEohPInqfr1oSeqJrbtmd7WArpcedmEnI9dqxMzmMDyYZ/Stt2tGNNQn/q4+VZHue/aOMc2XHzNUx6iLJWxcZSpPiPng0fpeqizZ+PibPL0oCTDO3mR8Kgowxz1qohsYtRFzFxoCAcjB2pG0OTRumLmy9pqTW5I2586C0Zg9nFw3Mf8AuH50Z2pIlso1H/V/rVcQAZGHn8K+1U99CoPnnelauSYU9GY7jzFMzatL2fC8hG7SHPXYYr4wb55U2aFBptur8pFK77Dl1rZOhsfZzFVY8PEAB5g5rWdlUD2lyCNuMe6swMeHnt51p+zEoit5eL+YHfbFNkWjQew+e+kimniUjhU8/r20Bcas7KY0PjOBny+s1Vqdzwzy4BLMThRuTS62DGeXvNsxllHPHI0FpAe2azs/3k1hKt14l7zhZuIZj93xoiOH7PdPjyxkVjhMy3MmJPCWPM7GtfLr2mLfS5lTuOEcBjQnpWaaM9kb3URBiMjLYz5UkuJPtEzSlcFvaaLe9tdR1JY4S/C23Ewx8qlcac8JGR4eYYcm99G10CmU2EqCaOCZV7lpMt09OdG6dAt5cNHDksFyufxUD3BU+zcHFMNGHdXityPCRtWegGxsZxFDAjHBVQN9sU7zxRxkdd6x943emNsDOfvDr5ZrWWLcdlEp5rGCCPd/WotU7HMz267MPrditzapxXUAxw9XX+v9a5pa3c+nTFWypU422IrvKClesdkNL1smS4hMU5/5sWAT7ehprTVM0ZOJzF+0/er/ABFVz5sKAm1fjBwuM7AedN+2vZO17MQW7Q3b3E07HEbIFwo68z6DlWes9MkuWLSeFR0G+1FYlVlPN6Ruv2fdlJ727j1nUkIt0PFAjjBkb+b2Dp5+6tZc9gtMmmEtvLc2knEWPdPtg+3OOdZ7sXFrH7pb92aijmB+FrW4BIA5gg+R3225Hej+0PbXUNFtBbT2SQajLuhVw6hf5sb9fnSP4sG5vRp9N0rTuz9t3duqRBvvSSN4n9pNYT9p9kt5d21zH41MPCJE3AIfPP2Gs/YanBqt5M19NPLONy8niyfLfl/an3Z7VNHv7qfTleSC7U4U80f0x1+FTc5p9FFij/2MYLZ0BDDi4hjJrQ9mtMm7U2MNjJffZ5bCM/ZVKZyC2Wy2em22OoppqGiQNMCrJAC4SQN91PUen9KD0NLfSbxbr94W8bxTg8DPjK8iQarGanG0TcXB0dD7PRajZWEkOqyLJ3LYikGMsmKB7EQl7W91B/8AMu7hm354+iaun7ZaCIJF/ecOSpAG/wDSgOzvanQ7HR4reXUYVdSSefmaHoSmZb9qs5/4ks1z4UgHxzvQSXXBGbtHj4e6CypIcbLuCD0POie3rprmo8emp9rQhT30Tg8O2CMVnDa39pbDBcEbYkiI+dLW1R0RlUKkh9bdo4bm44klhkuJGbiiV9ipA5f+IoC5uY4rpo4wArBeLfOPFSxX1AopRYuJhyyRmqXbhTxs0ch8TFxlT7x7DVIwlyv0JKceNLsv01Y7nV1UDC5yQPby+vOt3DrN9LJFYaJBIAicUipwqV3x4ifUcs71zjTHlsrsS8JZeXGniHyrRX+oRwpPcy25njPAZIgxVldeR+dSnG8myvL/AI9DfVbi6uo3hlimklUEuFXLJg4z8aShPtVwsvFLxlcSLFg95j6+dRg1z/iBrpkWfvT4p1Y93nfIIO/X45oi00mK9jSWTVXtp7py0KLHlV3/ABHoMiljjUdLszyN9rRbYWlne9obX7WsvczzcBhY48RHn7RRnbWWNdYjs7ZVjit4FjVVAAXO5+WKF0SxvE1GC71VhHb2Td7kHiMpHJVPtoHVJ5Lm+kuJv8yRi7AdPT68qtHohLvQ17PaZqDxya/pxLT28pUQY/zkx4hW2tu0em3lj9ra5jiCjxo5wyHkRjrSnsfq2n2fZi1jluoklJcsm5OeI+VGXmiaF2iMk8bKZ8ZZ7eQox9WFa1dMVp9ma1TttdXhWGwiaMNJ3YVCS7HO2/1zquG97T2WpmNILmVGOGjeMsM+3+lPtB7J2+hxrLM4uLv+fGFT/aP19afGThjI9azklqjJWIe1ZJ0xSw377lnOPDWXtgDc2/ntt7zWl7UNxWAHlJ8dqzUJ4biE+z161o/qZ9itky5+FeLH58h86OniC2UDIvjYvxHl1oDL/wA2PbVU7ENPpcGbEE7L
knJqF1fRxEpbASOduI/dFUQmW60uOK3lXjX7yt19lDRadcTL3kYeXByX4gFGPX+lTlJlIxTFLdp5M7wIH5kqevsrwdpS5PHBnJ6OKTzbyu7IMk8Wx/v61UOADkQKtSJD9dbhdstDIN+hB8v6U+hvUv8AT7Zo1OIpMEMOex/rWEUKWGM7Vr9LfGjWykb94xIqeRaKY+zCISxG/wCdPdLlaKynfvCuHUc+lKYwrMcsEC7knfb6NW2zI5Cd5k8YYb5IPninkLEYyXYnvAHVXMiEglBkAqetDxDjvgoDDiQqFbp4KMJi76dbZiY7cnhbqVbl8yfjQsBC3toxO38Pc/CkQ8gIq532xzr7DHnJ7qsZMHB/KvliymQT7qoTLdPcW90sroXUbMPMVr7ad4WWOdhPZXG8ch34PrrSHRLFbu7aJhtw8/LxD+p+NNNOWSzlFvMoaKbcKc77/wBqjJ7Kx6G8tkGkgt2XDBQA3MZ+hQUERimBHMedOPB9ptnUlVVEO/Pahe647oKCOKRvCDjf3UYsRomH41UevOtfp5/hKB/0gPyrJRwlSFYbg1qrF1jhLuwVUjyzMcADb+lLIyGKcjirUHLpWKvv2hW8PF9ghSSFOdxO3Ah9g50x0jtTc6ja/altIbi36vbScvcak8iRTwzoS612fu9Rt9a1jWkCNHCyWkROeEA7H68zWasBEIy02BGNsHz9nWuj9qbqO57E6hNbtlGixncEeIZ2rmluIzEEZvEw9dq6ofKJzz+Mh/ot+2jX4u4o2EHKVScca+z3ZrPdstRXUO2N9MJO8gTgRCp/AAOXz+NXy3ISM95KzhRtnJ+uXzpJqjFdXuEjAEsLcO341+j86Dg6KY5rkHSzyNfS/u1LZ7WSLMcUUeHjOPPnz86MsLxI5kis4baSMopDGHhmgf8AExfGfPmd9qz82pXKrD9kzEoBEiISnF7xTOwv5bbTEW4maSUDdmYnPlv199SnLTf2dEYbSHOu6oJra5BfnEPiGGPr1rFySkn/ADDvvtVl7qBn4kzzOW9le2GnvqM4jXODtgczWwwWOHKQM0vJNRgDY423Zzj41NYFYffcezetzadjNLeQWb6nCt/w8RhD5Zfb8fnSLVdDbTZGVZY5ogxTjQg79QatDLCTo5545RVi+20wynKXMq42+7+tN4dG1eBAbfVASRspLfnUdLGAAyAk/QrU2xDxKG6bYzmrcE+0c7zSj0zKTS6zAR9qtIblV/EgG/vFLpr2zuQVnSa3fbf7wHu2raahGY4GaNyMAjy+udZGeRppSjxCUsfLf3GpuCXRXHkcu9gsVg5ic2bpM5IIaJvEPdz5V7EupgssaszjZsjfHsPsr64sIFc7SWz+TDNeo2oRRB1kE6IeTEOV9lJTf9Ldfw9bU7k2vASYwR0UKD7xT/she2kcDpLOi3Cjw94eYz0+NZwXsZt2t5I2i4hjiHix7qaaDa27xzQztDNG+CrphivtHMfCllGKQ8ZSbVmnOqveuLeJzKp8DqucH/VyAHPf21mr2RpLmKDfvGPCT7NjTG2tZtMhjkjmbupBl2jPGox1Knpv7dqS65NJDrEUjcBJGeKLOGz6HlU1F06HtNpyNPDa2qWssTxNItuAxRGVTyznxc+eABuSalNHeqHurFHRrXBMiEHg93Ub7+2kP75iIWZrV7mRBgcLcLL7apg1W5aeS6wLZnDJh/E4U8wD64pFBNKx3KSbrdnU9M1QatpEN3gKzeF0/lYbH8vnV88nDHn5Cs72VcWnZmKS5YRLPMzpxHGc7D/409fhaPLnJAzt1pmc/sRa/IHtSo3xIPypChxPF6f1prrjFr+VB9zizj69tBQ26tDcSPnMcfEoHnmqR0hH2ewnMEAwMlmGT9etKXgkjfEmx+NFvPJC0fdkeFePHPFUsxkYlgc/e29aZaAwzRk72RI33BcAg9fdT2USzBRG6xxRyEkL4Rw8P3fnSjSU4ZUbB5g0VqMwkEffN/lNxoibBTvj86nkjbKY3Rhpo+IhR12oTgUHY7DbHlTB1RwQx4SeRPL30vnYAsZGw438RrosjRHh4VUj7ucHHStJp0wFlAM7q5/Ks3xBlJU5UgjblTG0l7qNVBJCyHAPlilkrQ8XQgVJC4UFd9xlefvq+3VocmQ7kbDzr6NWMoYjALcQ9PfRlvaPLfKhGAzcJ50ZAj2E6feW8kuJ4e64UPelGJzuMbe6q51+zSxYZW4FVgynIOD/AGom7s4odVuUhRYouHHCCTz9fcKCnDNbI7c2Uj5/3pI92PLqmXXi8F5Ouw/iMB8aqXbmcDyq68UPdd50ZVbr5CqiobHQD5066Jvs0XZCF5tYPCwzHGZPFyOCK0clkFbdSCnCCp6HxH6NJOxcRmv7uMEBntZEUnbc/wDutZIAc2rKVuLe3Vg/mOqn51y5X8jox/qYq4mYKMknPkeVH2BcCyuMFhGS2B/u/vStwWIxsDvTvSVRbSMOSpYthhvy8x76tLSJR7HriObunjPECv3sb0m7ealLb6Vaadb/AHrs+MDbKjkPiflWggXNqobh++cEVmP2g2syR6dqUPKAmNjjIU5yPyNIthWmJYbSSx1Gxi1MQCNxxgNspPrzrS6Jryzai0sNklkpj4JYo2BBbOx29K5/NOUl+2yp3yljmNm4sZ9tMNJ1m9njcrBBBGpBX8Kr5+2tNfFxSKR27kzpOgzxX11fWE6iSCZeIxt1+s/KsveaJBBJrcnfGJLaZI4E55y33fMnAorsdq0Ka8/f4XvV7sOTsrHcfl86T9pdS7vtrdMgDx28u6nkzY3+vSlwXGJs0VLJoDeRUtGhltorUIpllu5mZzj0UbY39aUa3DIL9mmkBnAUiRcgMMbfIUX+8jFcxCwiW0jiXhUREnGcnrSe+mnmkeSdmaRzksetXUn2ifCv2KftdyoJcLxcgSMGrYZXmAaXiK55DP51ARq0S4k4m4vEOe1GF4REnDLESHUkjJz7q0l9IEJOqbKtT0x9OHGsgkiLcOeRB57infZeYQd9PjLRISBSrWYzMVeW4+/iTjcHJHsFXaRdR2Miv9shORjhw4z8qnkTnCimJqE2bwLb27xXiPKrSs4YySK/fIAMSjH3QScYpRqNtwyXcrcGblEkIWNlAyvEu/JjjmRVEWo2T2j28cMOJN3aHBz+vMVRf3apaCCBpZGwUjiLM3d558I6VKt9UU9bdkLLLfdbG3tpxb6h9nvDbylEP+rJD+tJrGOZAONChH85C4qbaja3MqCRFljjJXjyRz54Ixmu3JNRjdnBixOcqa0P57mO9tJDAeJByfbBpd2UgWXtbZqwBHeDY7/XKrYb60eJobclXQYMZyQB0z9da97Igf8AF1pwn/mZxUssuWJsrihxyUbztJo+ndo7W9gWMLfWq5D8ODnGRv1FcYSFzc90hPETwjFd8la3lOoQ6c0I1DgxIORzjw5rlvY7RWve1ypMhxAxdwR5VzYpuCZ0JJoVdpNBuNAuYoLwLKJIg4Yr8R7jSYWsTniido25jqM11D9p9ul7p1leReILI0ZI+vMGvezuhaNofZ2LUtb
EZa4xgyDIUdAB86dZ/gr2wcPZzWO/1CxRk4hLC3MEcVU3V4upXAkmcoxGGBArTduLXR4tSil0KZGWRC0iRnwqensoNOw+rXekR6jBbCWN9wE3f4dfdVFONKXRnGT0JSoEZKzM4GwDLgj3+6nulaHGbstfRXd/aRAM0tivEnsOwJ/7c1m5YZbdym6uuQQRjHurU6P+0O6gtRb3irK6AKkrE+H2gc6d7RN60jU6xeCfT7W50tUu7I8Ubqi8QU9AV6fDbFMLCeOXT42iY4CgYOQQQP61ndKWS31mxvoL6KeTUZ5DcGBSBIvATuPQ4xtkedaphxHJIFTn9Aj9iLVULajIefi/SqUtnWOUDHiXBz1HP9KMvGjeUuJBxH7ykEY86rEkYckyAkqRt7KyejUJr1eKfCKFVBw7dala2Us7HbCYwzNyxTW3077XO0j7RLuT50DrOvxQKYLDhCoeHjwGGfZ19vKs5ekFQvbBJe0VpZObQJMDGSDJwgcfuzQVz2jsWl41FwQDz23pNdI1yHuO6KqCMnJY5oJgg4lbIHsNUUU+wN10HPe2TH703njhFUNdWZzwvJv/AKB/WhcKwLA7HbP17K8CKSOHpsMVQmWzPBJGpgkxKpIKsMBxzHvq21lfhxIOGRHPEPKgwgz0I/OrzxuqvHvMg4SP+ov9RStUMnZ9FvcKMDbzo9XZLpCDunX8vyoCFWMqgYBA5mr+PunL54iN9q0lo0exjqeTeyMDk95jkNwvn8TQdzFw26gfd4nx7Dg/rR+oQSPdPJGMqu5TmV9o99AyEtZKeqvw49318KWPSGfbPpBxw2z9DCPlt+lR4ASMcxUowzWVtj8JdDj25/WvGQsfD0NMuhX2aTskHaa9Eee8Ns3Dw7eXWtml2swZJF/xCxA8RHMMMmsV2Ydobi5ePPGIGK4rYd9BMRKuFn7pSy7gkH/3XLkXyLw6MZLEvchwccWRvvim9hHGYwYT48bgZPl0pWxBi2PCVYkHnR1oCHBC8hsV2x76q+iXs1Fm+LVkcbiQ5C770sue09iLh9O1KykNpKCkgcZJ8zw+0e3amGlOzxusjbjByRR9npNml612sAE/LOScZ54Hrj5UiGtdswU3YD7eZJuzOpW91CGwYpGKunXGcb/AViZ/tEFzJA/3opDGw5gsNvzrod+LvsBq8VzZ3Ans7jZ0lZVO3QgfJhWB1W+W71O5kt0eKKWQvwucnc5xmrxiI5vovjuzbWbgY45cFsnxEeyhBeSLIXZSzsch2JHKvVuY44TEsS/eyzDm3lvUhemTTGsf4YjEvfJlOJ+LGPvdBtTNIyk2Vm7wSJ2Y53Cp4RQj4ZieDhB89qjJLwmPj4iDtjJqcQBkDMmFJxQapmTtE+NEAIkBIz7zXkbRmBo2fAI5ZxVscHHflDGeFPEVPXy+vSqnYXNySpB6cXnQ0G36LeN7i1aGdk/hqVQltyKh9gMxi4GUkIOTjIO/9aISHJbiO2ORzXqoDGoOMhQMEU0ZRSEcZWDvazQH+KOJQM5YVD7VMhGHbA/Cx4h8DRmTErMjuvhzgbAb9RVKvGw7yWEBhv4ds+hFMlF9CuUvYUsYIBkVSx6HmD7PrlTvS7SGZFLoMH73Pz2A+FZuG5kUhTh1yWIO2/nmtLo6vKDtwMcN4iNhyFDjb2gylxjaY2u04FdkxgrwuOXsNV9jpYIu1ls08ixoCcFjgZxtVl+f4ORyNZW4kZZCVrZYclQn48qdnc3g03S7u51iScIZV8bF9vdSHshJaQW2qa7OwjiklbLeS5z+orkkl7K68Jc4HQmrE1G4S2Nss0ghJyUDHBPsrjf47+ztUlTTOua5Dp+o9irkaVMsscDd6MNxEHOTnr1NAajCmr9i9JusccNpIvfL5gbH69awWmdobvTrO6ggcBLlOCQEZ2+iafdju2Ueih7e9Bks5fvAb8J8/ZSSwyS/wMp0Pdc7H6FcaLPqelSBFjGRwNlDjmKcaVJqNr2J09tJt47idV3jduHK5PzrK9re2tnqFlHpukIUtiwLsF4QeuAPb1rT2P70fsTpzaG8ffqPErY8QGdt6VqWrBfxp/fs59FYHtN2+aLUoWtzcSkyxplSpA8/dWX1u0XTdXurM5dYZWQHkdjW+7PvezftLSTU4hFdFmMigYAPDtjntjFY7tmmO1eok8u/Y/OunE3zr+C5FoE0fVbjSL6K5tjxBTuG3yOoNdes7tNRsoLmD7sqht/w/WK4lCcsPKu16XbRWWlWlun3Y4l95P8A7p8qRGLF06KblgpJAPPlRkOnQwR9/euEQclPM/WKHeX7POz+UnLHPelGsXd1dFo9wM/eB/I+7nzqTvpFI17Ia52hN0ptLNTFANuAf8z0bfYenxoWx0ZZCZLvkqbhvuovQevsozT9KjtFE0697MRlIfxOf0517f6iDcPHCkc9wkfhUbxRAE7+p/pS36RSvbA9Vts2HEiiNGXi8TcJYDlmswbZmdzImcdeea0s+nm4s7y4us3U/ArmQsMDK9Ph08qyMgSNmyBsfWrYmSyHghxHy8eeR2qswlJMsMZ8gagxAGTnhO+2agGDHYufXerES1Y1BDAHi86thTKgnclsY8/r9ap48MR3jZ8s86IgJLoAM8O2/X62oMKPkGWUA7kYqwrmV+AgqAQPWq88JGDgjrU0kC758PKszIZXcrmcTLlHKK2x5bedUXcxkSUMqhgQeIfj+smrhPayookAdguN2IoS6bDEQkgHfHOpoo0WWpzYsvVZ/wA1/tU+JVIAdRnzIoNC7Wl4uSGyj/Mj9aFjjdJkfcknGWAwaYWjcdnlktbozlD3ZjOGI8J99aHCNOJ43xmJVKHp7qzem2UsL/wX4JVj7zwHumP6HlTeLvC6TS8RYxYZ2AXJz5D0/KoP5MrXFCUglsKMgE7Uw09iC2N89eVAlcg523O9MLCJmJbO3P31RkkaLSmcsSw8fBg8j1pzC+MgDFJbHjHECwBwDnNM4pm8RbfcYIqdbC3oyGpaRaRa5ql0ZVe9WVSlvcoZlu1kXIQL94EEMAV5V7pXZjT9XS4uGtV00AvBJaxMzTI3UO7cufJRuOtajUNPkkuI9T094o7+BCmZR4JY+ZRjzHtHKua9re2I1C9I0qOW0lki7q7dZP8AN9Ntjj+b1qsbfRN6M1eQrbX8scrgmMldjnONs/L51SUMo/hxkD+Y7VKCIKcnc9SaKVwMbVV0jK2DJpZlwXkO2+wo2PSIj/mcTAdSakk6gep6VI3oGd9qb/RN3fYVb2UKlnbLltm4yTmjLRYZbkQogUA44yuB8KVpfDBydulWQ3txFL3ltI0ZPQodvf7qRtvUUUUUtyZqV04285hnixndXKkBx5gfXOjV0CC5Vf4Mb42xgZPu3rNQapLkNLI7OfxMc05te0LQcJBdvLAz9f2qfjyXdjPJCqSJ3nYmLHEsfdg+WR8qzl72QuIW44ZO8xuFbatq3azIAIHCRjDAih31aCWMnwZbfgyMj1pk3B/IRrkriYE2OZjFPGY3zy5U/s4lgt2VDvjIIG+fbRt8sN9ACY1YdGGxHspdbMVkMT
nxDk3nV4yTISUqth12Q8BAJIO+c1m5YuOXhXcnb2U9mCrA2wOefLnQ3ZuzOpdq7SA7qZAzH0G5/KkyvjGyn46t0C6j2XvbC5lhaNmaOISvwjkpHP68qVrAxUkdK/QU1naXL3EmAXuIzbMfMDO3zNcxstMgXshq7SRL30VxGquRuu+DXHHO13s7lFT60YjgZTuDtUSWUmuh9p+y2kaZp9oltcMt4/ATxnwyBjji93PahNc/Z/daPYNeCeG4hTGSux3OPzxVFmixeDMZA34j03rR6d2q1DR3jFjdcMbDiaN/Ep93TPpS6/0HUdJ4Wu7V4g/3Sy7GljR8G6jc9eeap8JoR8os2uldpftXbeDVtTMcSt4DwA8KjGBRPbnsfa3Nrda9p12sqE8ciAggZ8iPbXPWlZGyNsbeVXJq1wlnNbiZ1jl++gOA/lmkeKmnFhWTXyF8K4lHtziu0QsXt7cLnPdJkjptXGoTxTE+o2HSuzaeH+x2wTHAIkzjzxTZekTh2B6qnDEAOXHjH96BjWJbGSSVuAFwoYjPt2+udGazNwkxDq3FnzoLuhPYSuFHGpAyfLI6VGXRWH7Adw1xf2t60aGC2jj43IbLyZGdz7uVHW1lHbXOAv8ABWw4sEDYZJojuMWmpRnnJCm5/wBtHSwxpEssxEcbQdwS/X2DrUnIrxAZrYQ6ZfeQt0UAZwMIa5vccKySZGSemM103WBd/ui47qFoIRbsQ04zIwH+npzrmE0zI57yRg2M4CEj41bB0yWX0DlD3SYXxev9K+aHg/zOEtnfhxXjSnhDcZwevDUONpMeLJPLAxmukiSK5OB58/KiIBjf/VjFDcTk4Lb+RFEwDAHFuOL2UGZMi6hQRncVQWwdqJmTiJ9KHaPPLaiAmkoA35DrvVqsGGMfDH5UJjA2514o8W3OhQ1hsTYNwvnH7OoNRMrNw4x4SSAPP6FSt1zdMOrIw+X9q+iQDhI+FCg2bC1nSaxky6iZouEK3hPPP6U1uJMSiNQOEQocDofbSMOUmijAwO54mY4P1/erLO5kigjePYty4t9udc6W7LSeqLWTY45+VM9N3hcMg5cxUYJbeYDv4sHqU6UZBbQITwSFQdsMCaZuyVUF26gZydvOmMGFyM5G1BQxgZ3B9lGxLgnhrUKZL9pHaJ7GzTTLV+F5l4pSNvDyArmca8I9TWg7f3H2ntZdAghYsJ4sjkPr41neFnH4j132FdOONIlJlneqvNgD5Go/ac/dDsD5DHzqrAHJkX/aufnUWIIOTI3vxTpUC7LWuXGcIi5/maq/tUhP+aqjyC1AAZ2RR7d69cYQsuxOBke/+lajWTWcnlLMc9FGPnXucjxJKf8AfLjFUZk6sc1HgVjvhT59KxgkSQgYMMeP9UtWxSROcJbRsfJZjnNLzCVJBH65qSgqwK7Eb0LNQ7jeSOFZFhuow/Lu5ySPcatF5MqYaWdVH/VhDD4j21G/L/uxOFsE7H2Et/T50ptlKO0js3Am+AccR6Cgwx2PYtWmVWYNBMijco+MD31OHVEnl44ScrgklcAZ9aTLfXE257o45Axqce+rYJZJieHEZUjBQY86CSuxu1TNbKCYjji3Gccqr7OayvZ7WjeyQmXCMoAPn7a80vvIlaGWVnUjI8/rarZrWN5Cx4cjoOtJkkn8ZDY8dLkh3pfb8JHaQ3CkEXneSyHkEOc/mfhUJNTsZNE16OC4Qia8VohnBdS2cgUgfTEMXEOfsoFrUqTjr5bCufwQvRdZmu0dC1vS5oOwsqak8cxtnQ2kw5lTjb5/KlerXky9jezsYkfhl4+MZPiwds1kJry+miSCa4lkij+6jOSBVk2rXc9pZ20xBis8iIY8znf5Ch4HSGWVWdC7bTMdX0iA3CSRNMrGAAZQjHPfrnyqPbbTrI2ZvbmyW2aG8WIMuAZ4zuTt9bUg1XtfaarPYTtp4gnglV5ZQclwKv7W9obHtBpvHDOe+t7kiONgQXjYc8elTUHasZS6K+0XYq0sLC41GC4/wjFDaqcEuG5g+zpWDuIUU+Eb1rtbvpJOw+gxmQnDzAjOcYO1ZW5OTyrow8vbJZKB7ZfHnqa7FYcRtrY8WEESHA67eXwrj0H+cPMV1yzPDb2uB/ykO3sqmXpEYdg2qpx3KY5cNWWNoradMWIUH8bbAcuvur69GGQ9OVShhiawnnuEMqw+JY2bCZrnn+paH7Bdvx3QZ7KFCmMm5nBEQA2yo5t7dhXrTw294ots3l1LA8qXshDAYBwFHtHShr+6laW4juHJUWBYwDZAcUNBFJNd6fIoxmzMflw54x+tSS9lWz20lln7N6pJcyPJMUlEjybn7o2x0rm8rgSHOPftXUjYDTezV9FxlyYnbi2Gcr5Vy2RVz42wfWujB7ojl6VlLDgGMbg8j0615HwmSItjbOfSviud+mOg518Exg4Ax1roRE8cZnfy4sg1eieEZ5cXU8qqypbYeLzxRCDMQ8+KszHhTHETtxACqGiwDnPtosAsHzyAGPr318I1LYZuEeZBx76xhf3WDtmvhFw70xayk/CvGOeUIaq2t2T7yMPaCK1mIWwxfxZ6nHy/vURzAXGRtVn3LiNscmBqx4sTOMcmI5UobDJL/EjHG5j4c8qviu4jGschAUBW8uVLpUyD5Y5V7FGoQkjxAc6XikNysdQaooliWNyX+6XZSy/Ac60UEqO7xlo+/VQVMeeFx6evp61lNHwNSV+FfACwDbg7VrrBVZJJEjVWJ4cLuPhvUmtlL0NrOPI25evSqotX+2XD2+jwJdSIMtNLJ3cSjzzzf3D30Qil7WccsxsMf9tY/wDZ1ORqAibmYH59dwefuqkY2rIyeyX7QdFnFjBqV5PDJc8fdfwYAigYPXdj76WWXYGW+0+G4hnjy6hsMDke+tl+0SMN2WY+UyH2c6v7NNnQbP8A/Xg06k1HRNrZgJ+wWoIxAjjdvNW3NLZ+yOowfetJceajNda1G6jsLSa6kGUjQuQNs+n151i9O/aOz3ZTUrJIrdjjjiJJT3daZSkwNGLl0i5iyHgkXHmpFDPAe7KnoRsffXdou6niSSMrJHIOJSORFcqvXisu117JJAJo1uGHdbYOSw/WmjKwNUZg27BTtUBCTnANbuGXSdRS5gl06K3ukjZkAOzYGdjWLccRIH3fIdaYxUIXxgdfUVW0fADnY+tE9yOH3CqpUIXY7UAoaXLH7LGuAV58985P9aBdGOOIAAdAORo24Yi2jZeY4f1r4sspHkw3oBXQCi8J9KstZUhmdn5Ag7e+vHThBHXzry0iEt5hxsRWCaLSLpbmVuDiCheFc0e8oW5VGViTvxeVB6WkUJkVFwVc8qYd4GTiGQc4rnnJNs7IR+CQx0zT7rU5Gis4+84d2yQAvvr3U+zeoWsLSS2bhF3LKQQBUez2tXGl6j/hwjLMQHDe3n7a1/bLtFJpMAto4VY3ERBY/h6cq55SkpUBJOtHNWgDbdD1NV/YyQfPGCD1rf8AZuysNH7PtrGpIGJ3XIzgdMetW38Wl9qdDmvtNjCXEGSRwgHzwaZZ3/oDxro5rJBGD49jy
xzqtbZACRua1lpocFx2V1C+lBE8LARkHl9ZoCw7Pte9nb7UROUa22CY2Yevxp/MjeJmYaJmPM8I6c6Fud2GOXOttpvYrUNU0sXVuYuHJAVmwTj3edZXVLN7S4eKVCjocFT0qkMkW6ROcWgG32cE8yK6vaMTZQEH/lLuPZtXKkUlg21dUsN9LtSTj+Cu59lNkJx7I3Qzge+jbO1NxpFzEOcnhGaEkdScKCTyLH+lGWszwaTcyIPGu6k8ia58nWi2PsubTre1DXNzJw93Bwl2J5AdT1ob96K91aLaRoY5IixdsjA9Bj6zSy8mluLwmSQySvZsBxEbEg1O3tnlntJHYh4lIwNgT9A1NR+yvL6IQ3ck+j3He3DzMUdSW6eDkB5VgZSVbCpxE7753rfTWiWWnssYwHVufXw/XwrClpAMq/djO2+K6MXsjk2VQh0jB4Mk/hO1SdXD5YeIdB1rwhi2WPF6nevVXIIzueWNsV0IiD8LGRm4SoI60REuUH+7lXpBaJcNsen17atgXCDPU0GYnwZLDpUCmOXMb5o1I8s493t2rxoDigYEC8RGAozvg1JS6kcLOo5YDVYYeYPXYehqKpxA8WfaKwQe6d2iHFIxyeR3+dFSx8UrHod/r41XdIxQ8WDjkeeaYQNamKPvll4ioyRj8qVjJWAyRY5j03qKLgECnS2Vvc7QzYf+RxiqJNNkgJV4yPU9aFmqiGlAi7JXnwMN8eVamzDBG7wrxcfPBHQUi0224ZiT5EcvStHbJlTnmT9flSNbDYysOJpShfKsCOeayXY/TrrTO0MhvxFbw2peJ5JJkUZx0HXmPjWstIcTx4H4gc07+wQSzCZ7eJpeXeNGpb40U6FezLdtLu3v+yt0LaTvRG8Z4lB4efRuvuqjs7fpF2es4lHe3DKeGIHng8z5D1o7ttqdtHZ/uzKyzT7sOLPdKDnJHvAA9azcN/BpUHcwyMjNt3cQPE3v+hTx3EVrZpL/AE77ZYTi9kTvHhZVJ2SIkdP61yO5MauY1AMgOSwzWpuZL6/JLu0Ufrl2qiDRoRLxGSVn82UHNOvihWrHvYiW7utHFszlLeCX7xG7A78A8hnOfbWR10512+KYJMxwOhw1bCOG+Nr9nt7xoYz0ihVT8aB/4GMh4lkkY45nBzSqaTsLg2YuG5lS470N4lyMe2qxD4zgbn51tX7JTQf58rqucd4VVlHtOMj31aexVwTg3DZ2O8af0pvJEHjZiuSjvY+JTtnGCKGliwxRfF0261s7nsdeRK5WXix/LGtJJbJrRwtwvEG+6w+6T/X0NMmn0B6BXixEmeQUHP17fnVUgEaZX346U5MImmGesQ58j7RXj6XGScLF7uIfrRoVTS7EUxB3IIPPpvULEYvRk8lpu+kKOYAHIYc7+8rXn7qFpDJKccQXA8WcfKsosZ5Ik1LmaQxHDqxAx1q22ecKWuG8bbKOeBVEcuLqUbKS5O9XFvFud9tyef1ioOO9llLWhlpr5uoj04hWq/aLvd2g84j+dY20bADct9vMUw1DUrvU5EkvJOMqnCuwAA/rXPKLc0y8H8TXratrvYCKC18UsQxw+ZHSquyGmXGjaNqVzfRmFWQ4V+ewO5+NZ7SNfudBnc25EkbEcUb8m9QaJ1ntjd6vb/Z+5W3h5uA2S/py5VHjKqQ77Gmlo0vYbUwgJYtyHPkDVGhoydgdVZhszbfL+tDdmO0kekyyx3SE2s2OLAzwnptR3aDtJYz6cNP0lMRu2ZGVOFQOf50HF9Ia9jPQEvT2VsBYHD9/l9xjgyc1i/2jiKXtNMIcFgqh8eeP6YrU6Xr8OldmrMmVOJbjgkjzk8JO/wAjWb7eaUttqv26FgYLscakHmev160cLrJsTIrTMUi8AyT7iMV03TUzo9oQf+Sv5VzJw3E2Tsa6no650GyI/wCgtd8zlR40OOXWrmXNi0YZlDEglcE8q+dOWfgajcj/AAEgGevLbpUJbRSL2CTNDDG0bN4hG2EO7AChk1JkkhjgHAjkk8Q3+tq8MDMdgSe6xnn+E1FLFlaNjzQEb7daySGbA7K4muLG4aaZpSM/ex/LWbMQCJxDcDG+K1EsBtYOBFHC+ST7qSIikeJffVoE5A1vAJTGp5M2NzjG/nVCr4mQ/eX6B+vKmRiR4XULxEowVfu5YjA39+aC+9YRXON0j4iR1XqP191PYlaKW8JX+UtuPI+yr4lLcPApznkN89Kt7lJhuTw+nWrbIfZYZriTcw+FDj7zNy/X4VmzJWFW8XGS2dmermtj0FFQW/dREKN13oyO27w7DJO+KFgE7WgI9agsGVBp69lwnDpg45GhYbbKHA24jz9tYwju7b+G2B61ZDARaxkc96az2fFG2BtVkNri3UAdeXu/tQYyYNaW3gkLDoKOjhYERhiAfOr7e34VIPUUQsQMinrSBsGgtsNsN+WabWlv6V7bW2SNvWmVvBuMUGwAWo6tZdn7IXN6x4if4cS/ekNc8139o2rapxpbyfZbfkEiOM+1uf1yqntzfvfdqbmNm/hwHuUHkBz+ZNZ2UpFGyu6oeg/tTxhe2NdBmnXsgt52LFpGdTk9ev54qFzOzXMuHbZuHIPlt+lBWt0gkIB8icjY4qtb5QmDuTkk04vsYw395A2Yrhthnxb1q+z/AGmjnIhukBlx91vxew1hFvYjz4s+YwaJEnA0c0LeJDxKazDR2DvIRbfarZuKMDxo25FQ/wCI4FUIY8Eb5FZPTNRb7O2GwJBnGBkjHX40uvLwwzBSwOdwQDUuT6D443Zun7Tx8LgqWXG5ODQMHaltPl7lSzW7j+GDv3foPTfasaupAKRxE+QUGqHvgzIpLcYYYGCD9b0ySroRxp9m3m7StOh2Ayc5O1Z/V50ugwK7ttnl4uhoDvZGXk+OW+w/OqJ3kKgqm6kE+MdDRXIPw9hFtMGltj/NFjP9aMjV5rP7U0ygF+FIeIhmHnt05/ClFqSGhDcxxrv03ozRE7+aVWdlRMYRds+/651aUuKtnPx5OkEBWYHJdPIrIT8s1VdS8Vi3FjiGQfbRsyQw2kzLC0eORbbiPs+udKZZuK3mx1JowyKatCSxuEkmCpkzzFoJXBfmrgfKr0aDPjjvo/VUV6jbuAZPVz7qYW7KJkkOMZzlv1NLKGm0Wjk2k0DiW3Q+C/Cn/wDNbsvt5Zq1LkOMLcW0mP5Jh+RxRXBC+5KnoTnFRksIWDZiV652vs6FLWipbl4h40J3/lzn4V99pjCeNsA7eIGqZNNtgwxHwEnG21ejT3UAxXcyg77tmhxQeZfHMksgTvk36k8qsWUo2C2R09aXyLcD706yE74eIN86qJlTJ7q3J81BTPwp1jZN5YjGc94pK48yOtL5ZpHiw0jOicgWJA91UNNMT9xseaydPfVPeYU7SDPsNMoV6A537PgGHPcHzrq+gDi7OWR84RXJ0IyPFtywQa6r2TmE/Zi2wpHdgx5P4sHNCZkETKS5x/6qqYcNm+TgAHc0bImTQt59nWBkmmClhyALH4YqTHiLDOiFQG8RjLDA5++hjeuxt+CPhEucljkiiGe0QARJ
NJwrwjw4+udCyzycHBDbhABjLMTtRSDYIjzGBjcy8bFuFduW1LO6cgZ8AHPOaaS3N0mD3gwdhspx8qFeadzhmyD6Yp1oV0DxjPiH4N9t+v8AehrQKTcWjAMI2Jx5o2/6kUwDOWGcBcjiwOnKg72wuQy3dhg3cWxTb+KvUEUUDsD7390TfZrpj9nP+TOen+ljRerORDHawYbgXvGIwcsRt8vzqFpf2epyG01Be5kP+bDLsT7D7qV9xeaWWa3Xv7UeLgbmo+v/AFW9h9HRWjji4jI6oP8AU2KDj1GBNWiENxEVVSC2QBWHmuGb7zb/AOpqp73A8TE5/lrM0aR19rUSEMZIlDDIHeZquGxiw3DcWzAudxINq58mrfu7T7aQI8vegjOeWMeft+VGreauby1tBaCCS73iEsnP21H5+ivHH7NnLYqEOGRh14SDVaWuE5etV6VbXlnfi21F4meWDvIzHkg4O4393xp0IQCBjc9KaLdbJSST0Lkt+ePKrEt/EM86PWDBO1TSLBzWbAigmVZktrSJJLhxxeMkKi+Zq+O9NjOIdTe3jcjiDISFI99RFzHpuqi5uPDBLF3ZfopBzv8AE1hP2j65Ff38UVjIJEgQhnXcEk8s+6owblKirSSMx2hIHabUmznhuZDke01mJCZvtDsTlMHPnvTZy0nfM27EHc+yliplb7HRQce+uyPpEme3MSo17wjBHCRjbFWR24NywC87cH6+FTvVw956wK1XRLxXi8O3+Fz86YUBSBJYrQHbKtnFWae57oht+HqfhVsMf8KxKjcqdvOqrAbP5Amgxomnscmwi4Qdxg+tRu3YvEvn5/XrReggNFCG8udWa3DEjRSKR97p7K4pZqycTthhvHyNj2T7FWNxpcV5eq8ssviQcbKEHTljekvbSGyteO0lJe9iIaOUnLFeYyfrlR2hftBtdJ0lba7hkcxjwFMb+hrBdodb/feuTXshUFyNhvgcgKfA5crZzZY6oJD5HCOTb7dK+cARPjyzyq/SdMkvrdXJ4Rjmc7n2UNcxtEZIyTxDw77V1qSlpHO4ONNlELEzISd+8f8AWmNmIVQSNbyl8/fTbI9tJonxID5SGiBdyQrwDBTOd+lO1aoTpjKdlueENDISM7O5OPn6/KgbnCxy8ACqByGNqqa+ckcPDg+QqDycUT5Iyd9qEU0aVMilw0U0uIkkGfxHFGw6hGF/iWDe1JSKXhgZTn20YFUrlM0sqvZRJtaQV+8bUE5sbpQeocH9KkLu04O8K3SKds8KnH1ihh4sAbnn7KnBF3rEpgb8xvWdL2BJ612WNf2QO1wx9XQjhq1Lu3KEJfxY8nBqt4EQ+JjjkQefwoZ4ldzjBGcchSx+Xs0qjqv/AELbuyAy3Fqw/wBL8JqltztwEnlh1oX7NHxeKNfcBtViafA4BKjBHLHKmtr2DgpHzBjkiPcdR1qpicHKnHqDirGsIADw7Y8iaqa1RTszDHqa3M3jRFF8W+ARvXWOyoz2YsvRCPma5N3eCfGxHKut9jrcRdl7YjOZOJ2yc9f7UmR6HiqGDx7UFew8NvI4++AcHGabMuwoO8TMMi9SOVRscz8ocnJJwYjty33oRoy0kHFz4cc8+dN3tyQNuSY/OgHtWt5FeTaNjjOPummCBw2ymMDAznPyqBtQSc8hTRLUpjO2OhqPcMM8OOLpkZA+dOhBa1oDGw/mGNhy6frVVtE01smQONfC2fMUes4gfuLwfxCCwdEJDL6+RqqKa3TVDGkitFN4sg8mHMY9QPlWv2bfQo1zSILqCGG4jUzyHKuucxrSOW01fSYGMTpdW5PCqtu3w9la5Ab++nuiPCvgQelVXdrgu7Y/y8Lnp9Y+VFdB90Z0zafHePItlJJByRHbBU+36517baqtlfPdQWMRRgV7ltwvLr7vnVZgxnB59OdQjgwD6++iwIIhhk1eOW3VI4pml7yNGOF36Z9/yp6ml69eXVrcXN3arLbD+E3Evh9wHpWejBV34TgoOLNFCadzwlmYnyOc+6pST9FYuL7NlbJcxajb3Wqa1bS/ZySsaqN88xmm767piyIxuM8OdgOdc7e1u1TvJLe4Cc+Lumx8cVQLqMtzkyPTFTp/ZSo/R1FNesZQO5LSZ2zyxTKFkmjDrnywelc302N0VXWcICc9SRW87PyiSzZCzMyHJJ65/wDVT5O6sM8aUbSDnQGJgwBGOtcB1GWUSNuMZO2+3ur9ClcggjntXAr+yK3twCD4ZGHzq2LshJ6oBgOICSN2XNAZwLwHqg/OmndGNAMbFTS2RcS3YH8lWTszWi27HFLdY62qmroGAuUYnnatVbnimf8A12uPr419btxSW2RnihZN/dTCn0LYt7D/AGt61TppPDJj2f8AqrIB/BtPYwqmwOFfPnWfQY9ms0iUpBHy2B3+vWq9VuA6xgHODQlnJwwLnOAcZqF6rZGSdsc+lcjhHyWzqWSSx0gW5lIjbBPspd3bKA7E450y7njYZqEsHF4Tyroi0tHNJN7DdK1WaKErJKQAdsdBXz3rXPeSk5D7AnrSuKyBB4mbgDY4QdqueYkiKMbnb2U8UlsWTctHsIkllKRLxEtmtLpfZr7SR37MzH8K5AofR9NVVOScN95xnJ9hrV2bNYq0icHdDxcbHkMfH5danObqkPDGu2F6f2KgaJWSKM+nCD86IvOxtokLd5BFn/aBU/8Ai1oNOXjEccgGGAJ2H0aQXnauaZ5OB+NTt54rnTm3otxS7KL3sjZsCYMxP/oOflWeubG703i5SR/zgHb2iip9fSJyWuF7wYOMk1Eau0qB1fiJ5gEY+FWjKa1InKEXuLAI5m4Nj6bVelz3cfCmzHqBVNyIyTNDsp3ZOeKgHAG3LOfbVqUkc75RdF5fPPkd68DcMiE+ePr41T3wGcZwd8Cq2l4id9/IdKaiKW7D2O5PXkfSqFmI5Hb0q2wtLvUzwQIQnLjAJz7qNutDSxQGaU8R3xnHypeUV2dDjJ9CxWyxBbYb586jIzA46/nXkwjU/wAKRvfv86o74nZ/iNwaNpitNFq8TNuOVdh7JLnsvZf7T/8AI1yGHc12Psguey1lj+U//I1PL0NEYFc1RLDxE/pRzR8Q5kY8qqZcMueeeE4qFji82+R1quSzWSMowJU004AWIHOo91k01gEK2z23gl+5+FzyPt8qn3GR4UyPMHNOzAWGw2oC7lsbQn7RPbRHydwK3NL2bi36FF/AYVS4GFMLZJGM8PX8/lSvXbVbm3W5j/5Jxxci3r9edOX1fR5CUk1C1Kn8Ktn50FfTxS6DLHDKkgV+7V0OcjO35fKmUkzOLRTpdrw6bEQPvDP9KhdLmynkAOAvDv8AXnTG0ixpUTNkME24TvmhdREkGlusmMnw/wC6nFMxJZ42K8txmq/suCR7/Kj9Ivl1m0MjBRcR7TINv+7HkaKNkS2cfnWszVCRLLL4xz614ZriKQizPdqNgVAy3vp2LQjO3IdKjDZcKEkb/nW7MR0zX74o0BmKTgbE8mPTaqtTtTeSwyRxAXLjE3AuAT54qx7HgdZFXdWyTvy9tM7aKaQ
MCJZT5g529lSlFJ2i0ZtqmD6faSpCA1uFI6yEjO1aLTWWCQGQK2G41VTjBGRz9/zpXDMQSI4WyP5hj51p7TRQYg0k7eIA4UBcfWai4lOeqYzhdZIVkX7rDi36Vh7fsjBfLLNI5JldnXh2wCdq3KxIkAjAwoXFYy57T2nZ157OZi1xb+EIoyX8vliub8l5KXAfDW/sweuWS2UphB3TK589zWYdcXVwPOM08vLtr1O/lPjd2J9PF/ekrj/GP6oa78NpUyebs8Q5mgxyMJFRs2w9ln1H5f0r6M4e3P8A+MioQnhW1PkxFdBBlkBIgtxn8bD2c6qsBhZQRnfGKmu0cI8pj+tQsjjvf91ZmXY505DIFQ74bFNL/TnVI/DxMxAGOtBaOyhiTyDU/bUYRq2n8ZAQP15fW9eZnyTU/ielgxw8dsL07sRObbvZ+BcDONyF9v8AWlmr2VvAOELjowONjXTob23trNp5XVUVcsTsAK4zf6obqaVixIZjzqP40p5fk2CUkm40LXYRyyA74Py+hXlkocO5Gzbe6qJ2yZM8zirInaJBywa9b0cKVSGtndR2kveRRO0vkoO3lvTaHVbq5jCzlIoV8XCDkt7fj9YrO291gtkbE5q+W8LZC8zttSPeh1oK1DVuNZGZsRjmKzV5qVxeuQhKRcgq17ezm5l7tT/BjP8A5GqxhQMbH2ZzVIQ4olObkwQxMM+dShuJbeQMjEHr60aqhhsdx7qHuYMeJenSnEG9rerNDxDZhswr0S8JO+1J7KXu5R5Ntij1bO5zmgtM0toKd/CcH1zVlgnfz4b7gO486CZ8DflyApjYt3Mat1G9Gb0LCOzb2GpRaTppwF4/w8vyrG63r3eTszs7yvuEHT31VdX7BDxEnbLE0mU5LTy/ef31GEd2Xm60eNf3RYnCKOfDiiLa6WfKsOFxvgZ3xVcckcjhcYztk9arMXdTq46Hn51YlY6tDlhnnXa+xwz2Ussb7N/8jXE7ANJdd2gGfaBgV2LsRqdtNo8Vgkn+IgUl19OLzqOXoaJo8YqEsUTKTKBgb8R6e+rHdV5sPfWW7R39xBekRTcEfCCDtgfWK5/8FUr7G02o2lqy8d0pHrk7e2k2v9oIoI4HgmnEfETJ3KjjI6AA8ufOs+NVjmjZre7kmC7MsilfeB5VWdRmwvHIxXABXbkeYpuMmug/FPsJbtSsMt8l9ql8kscjCOKOPKkchvzpRqkumydm5Irp4v3ukxPeHLNKudt/YflVn20faWkaO3J6/wAKo3i6bcxBprWMyO3Cskblce0e6miuO6A96TDp+1Glvp+lXETqt7asheNYumMMM1RqWuabe6uj2LtwXACzZQqOIHwnHsOKyzm14m7mQFQSuQnP62qrveFsryFW4r0SOtxKiWoGdlAYbfXXNL1ibV7jvHHDaxnCqfxGslH2puFsUimZ5mLcKg4G3qa38JVYESEYUDrSmo4zo+py21ylxbuFmXI4W5MPIjyrqGjT2urWS3EIIPJ0yMo3lXFwZLaUq4KshwQdsGtLoGvTaddpc25yeUkfSRfKmkr2jL6Z1RbGMjBBHSrY9NgOQCfLfFUWF/De2K3kEo+zleIknceYPrzrK3/bC/a6H2PiDMcRxKM4/vUJTrorDFyZszo6HkTjl0/rUo9KgT73eHHQCkOldodXhnht9chdY5tkkOPCfaPyrUrIFyC5Yj2nNLzb0zPHx/p9DZ26jPA+R/MMUbFcpnhEgz6nlSnU7ya17poZII0bOUkcKzH0JFX6Zcz3CN3yRcQxjxKSfcKSU90OoasW9su1FxoTWyW4UrKrMW2yMHFc9vtU0/V71rq8e6SZ9mYcO/1itJ+1EMx09iOEYf8AMVzl1fhJB2PTPKrxgpRTJqfFjC5+wAcNtcTlef8AEQf1oGZIRIZBIh6bgjPuqsq2wxuagIzkZ6GqQjxNOXI9RYAU8ceV5DLV6tvCeBRLF4TkfxP7UQcLkLyBPt+NfcUYzxRq3XenJlYsBIqcEkZIbi/zBVkWh3A4u6RW4tz/ABUNVTRxoVKcwSp6dKo74RknYYoMaKT2OIdKvrfKm3OxzswNe3GmaldFe7tZWYb4G9KlkZXHDIw9R199ELqV3HgrPLw9N6jKDu0WjPVMaS23aCW3FvPBfNENghBx8KC/dF+oPFY3I/8A8zU4dd1E57ueYkbnDEUTH2o1B4uH7ZMSp6sdvbU4xknpIo5JoVtoWpSyeGxuDkg/cqNxpV5bxcctrMgXmxQ4HvpkO0ep94eK6lw3h5kihrzUbm5EweZ2Zlww4sZHsrpRzNi1DgDB517NL3UTyDGeQx51SrZXGPhUbkZ7tB55PrRSBJg+O6jX8vOo9zK4yTgnoM0RwZmQHkN8edFQQrMDvwnng0wlC1GeF99+uKMKiWMMvXpUry2jiyFYcSvjhHT319aJ4XH8px7K1moX8PBMcdDR+eFyBQ0qf4h9s8qM7rjl3bGBj21jJWQ5kZzgmi1myCOnOqntVA3l5cjmqGmER+8GFbsyVFs794h/1YWqpUy2B0H18hXodZZEVM7tnljH1mrJlIfbqOfOstBbsGjiAfgB3bf6+FEzJxQBsbnBr23VSwfAygyf0rQ9nOz37/1FLAOUHdsxYDlgeXtNCToCQrtV4bw+RroH7NkKa3NjkYD+YpvN+zuwewtYkcQ3MQAknC57zbfbNNdE7O2mghmtzJJMy8JckD3Y93yqMsiaodR2OZJggHeBQOW7Cles6Vbara/xOJSBs0bY+v70cPGpEgfPLcc6xun69Np/bK40LUHH2eZ82snLBO6jPkR8xUUm+h9Lsz91byafwFIvwhMA52z/AHql7lGQKNiPTnXRbmxt5CTIsbe9d6W3GgWU4PFbt7A671SM9bBKNmAknuOMLbzPEMcTFeZPShp52mjEjAhzzPLLD0raT9l4o1LRB4z/ACPjP1vSSfsvdiQmO4QBumcU8ZJgaMtFbhGccgW8xV/chl2QnHkDTs9nL6JXe4uI4Y1IXiLDfPsoKbSZFkAjuu++NNzjdWLxlV0CRQmKWNyh8BzitvD2vtyP4lu+fNSKzltaLDC/eWwllJ2LZKgezzqDWru2VgZPPHKjpugVqy/X+zUetQNcWvCl6BkAbLKKwCNJazsrKVdSVZG2+IroWgX8h09YpB/Etz3TqT5ev1ypJ23OnOUmTEd91VfFxj/V5UkZO6Y7jqyfZztAbR5I3dvstwpSZBuV/wBQ9d6OfvNM1KG4AWTut8jkw8waw1vcNFIGXn5eda/S9Xgu7NLa4IRV2jk/6RPn/pPypckPaKYp1pmmvO0cGp20FrBCYx3gd2kI2x/7rUQzRz2uLe8hlIAUsjZxWKXs3fyZASGPi32cn6/vTfR+z9xYSiWSQkjbhUEA+3nUeG7so5xaqgq90yS6kUyXE0mNh4wcVZaWDxBQt/dKvkIwcUyLcHOLB9K9MhbdY0PtBovYitErnTbHUdPW0vYnnVN1Z3KsD6Hp/as3cfs2s3JNtcTRKdwrAP8A0rTpN4f8plPXhNSSUKRwxDPmTiipuPQHFPsw8v7Mrp
TmK6hYjlxBkoOf9nmsJkxxpLjfCSKc10xJpseGI4HkQc0Rbzs4HeR4z57YorMxfGcdn7Ja3CcvplwVGclUDflSiaF4WeOWPhkGzKcgg1+go8A7KmPbzrlH7UdBa01ZdTiT+Dc7OR0cevqPyquPJydMnKNGKusBVCjPizsee39qHY5TBjbI35A1RIOIkcR25b1SVkHJ2x7avxEUqDu8QKMZyPPNRWRWbwsd+eepoVTLneRsUdZQyXEyRrlmYhQMZyaRxopGd6JRXRgbIG2/IA15blnkeQ7FjnHlXVbL9lumvYwm8nuDOyjvO7CgA/Crf/pVpA+5fXI/8T+lRUo3ZR30ctC4Iz89s1GaQI7Y8XEDXVR+y7SRnjv7kjy8H9K5l2v046Jr11YQSFoEI4C2MlSM7/GqQmpOkTkq2J0OFFTYcU0Y9M4oXvJNwD8Knak99ljnO/sqlCthUi91cREciMVBp3UPwn72+1G3EJntQ0f303A86Xkq++cHyNYxazJIOMfeZsY5j6/rRUCcPeDHM8qohQjDMv8AtU9TRP8AkxgNzA3PWsYOs+y8+oaet/blJGefuzFkAgDrv9bVstK/Z/pogil1DUcTMMyRx8JA9M1jLOcwWygHH4vKrv3jIBs5+NTlGb6YylBehl+0DQdP0uW0bTVxC8ZVjkk8Q/8AdYcw4bfoetO729aeIo7FuvnilJQk+2ngmlTEk03oih4ZVbbA2pkUEoUjmOtLxCx5Yx5ZoiMTYAz8aZgRcoCHG2Bvt18q13YTWbPRry4uboO3EndoVGeuTWQ7iVvxjOPU1bEZYFxwhlHlU5LkqHjo7OnbjSpQcTtH/uQ1b/xPp8o2vVPwFceiuuJQVOR67VM3bcwQV55JxUvD/R+S+jrw1OJvFBLE3p51zDtdcXN3r010wCrG/dKU2CFTtv8AGgU1FkwQZE9cY+dSW+kaG4id8xzHLZ3yfPPurRxuLsLlFoe9n+2UaxLBqiZ4dhcKMn/uH61qE1G0uVj+yzRFmHEeI+Fff19lcqtSi3GZFLqdiASM04sNVltCYUDd0N0Usdh5Z+udGUPo0ZfZ0QoIcsJGdj+MBR8B5V4Z4ZPC4kOfPbf3VmLK51G7HHbW8pUfiAGP/I1bPe3VsP8AEPEp6gyAn5VO67Y3G+kN7mKKaMp9nZ19XNLJdKjlb/Kn23xxtQS6znB4UYA7kNmhZNVnMzuXbxZ5mnSfYnWmOvsLKgzweHYKWJNCzQhBmU7b7FhQA1Aggd8yoPwnl671Fb9IoZridFZIRxYH4j0HxrbQaT6Mge0N1HLNJAqRSTAcWATg+YFAW9rcaldiOJXlml39T6k17Eiy3Sq7iNGYKznfh9a2qaBbWcCS6dfLBcAfekcNHKPXaqtqIkU5AA/Z/K2nAw3ccl8N+6H3fYD51mEeW0uGV1KOpKspH6V0Cz1WOFxb6rE1lM33W4h3UnsahO1On6ZqMLXFvOfty+QyJfb6+tJGTT2NKKrRLsp2tEKraXshNsNkc7mH0/25+Ga2F9B32W/eEkSBfu8IYe0cq4qHaF+quvXlWy7MdrQka2N/vDyR237v+3pSzx1tBhO9M0VjfWsN60b6pcxgnBLRHB+Jqd92qMFybeGUyW45So3iYe08qU6hqTwScBt7XfdXC8QYeYpY+syjPCIU/wBsailWNvY7ypaNDDr7K4aB7yRs7K78Y+GK19prMSWMcl7MsEpGSpYA/CuTyatO2Q07Y9uPlVL3ZYElifbR8P8AQeVP0dWm7W6fb5/xhbHRFDfrSe87d2wYmASkn+ZsfKucPcnPMYqAnJOPzorAvbF8v0jaXHby8KsIggHmd6San2lvtStmiuLhpI23KHl8KSvKx5kAevSvZI2CcSujqTjiTJ+vhVI4oxfQsskmgKVCWJUfDeqvEP6UYsEk0nAi5Y9Dtn317cWFxbIHlhKodg4PEvxFVslQEDg8qYaddtbSiSMlXHIjpQnBz3HsqcWFNBqwp0a607U6giBVu5h/3UWvaO/f/wDty5/3mseC7Dw5z64r7jlUbd5/28qTxQ+h/LP7NY+r3DE8c0hJ6lv71nu0GZpUnO5OxNCrdSg7MT7cV7NdGSFkfJB8/wClMoxXSEcpPtivGH9K8U8LA9PSvWBHn7K8xkcjTAGdtdcAwceyrWhikbvBsT5f0pXEWXZht0zRSyyNtsBQCgsKkW68/M70O5LzYPLrivArONy2PQVbHDjPCDnzrBLDIRjh5iqwC+WB3zyzRCwE/hOfjUhavEc8JdeZAG493Ws2BIG7lj4iuc+tRaIAHiUA/nTSKGN1BQbHquKkYNtgfUf1pbQyTFaJxHHGcDblV3cxqyRyXBDSbAE8I+IpkkeBgRsfgKq1GITWbKIxxp4gOfKhyQyiwf7IFJU8WR0ZiMe6pi0kXcRM48jgE1PTbtFeCKeRe4kH8ORvw/6SfSncc+lJHKLmS4R42KssYBxSuVBSsQ9xEc8Ubwn/AFKR869W14iPs0TTNz4kUgL/AN1NzdRuWNsxVfwmdCmT9elATXN6WAe8WJV2LKRv8aCmmFwZ4dLuwgJt7hhnf723uqsWqjZw2fePlUr3tCfswhikZiOb7AtT7sZowubdtY1n/wCzye5iOR3xHMn/AEj69ZyycVbHWNPoF0vsleaqe/UJBa9Z5MhfcOv1yplcXWgdncCOMX90vOSbHCD6L/Wl3avthPeStbWr93EvhCp4RisbLJuTKxLfnSrlk70h+MYd9mi1Ttpf6kWQORFyVRsBSKaeeUnjkOT+dCNdEjC7D0qUKyyyhSSq82ZgcKOpqkYKPQHkbJpczW8nHG5DfI1prOaO8t0lYKOIcieVZWeeJ5yETu4uSMeo/wBQ8/60009xHAEbORvjP5VRIhKVj8xRb4XfzBpXrtwsKrZRFiIzxynzkI2HuB+dVi+a2KyHfhP3POkt7cs0mXOZJDxHO+SeZrcdgvROznNrd97byR5deFlYZDD31dFcS2ocR8QiOTwodh6j0pWFWRMou/4lHT1xXyGWPeGRs9VG3wp2hUzS2Uun3BK3jmSGTbibZkPrXz31vpN33Zka9syPCRs6enryrPC9lIbjiR1HMsuMe+vRcQyHJjKkfytmhRrLL2R767MiRYL7CNN8UIrlDV6FY3LRSg+QIqHdM5JwNuvnRMPNN1cTQfZbvJj5hgMmM+Y/pUJA6F5DF3lup4TKueH40nSF1ccLDJ3xmmMV1LErDJTjGGwc5paroa77CUgjmUmKZOIfgfAPuPI1TMjwtwSq6MOhGD8KGKlZMRlgvXiomK/u1h7mVRJEeQcAhfUZ5e6jsyojCqyy4c4QbknGwquVlMpMQ4UxhRy+udSuTE0AEKtEx3bJyD9e2qwiKpzG7PywOvtNFAZ8GcEMGwV3zU+/Yl+NuJpTktsMmqWEjAbRxDnvvUC1uhzLcd4dtkXr7awArJ4JCvIYG/X6zXsFw1qWweKJvDJGw8JHsqqOYyx8EaFIlOQpBJb1NWTcQtncjC4xg1g79EpIgsh4M8J3XPl6/XSoMpA2G/nXsJZ034jjwggZz9fpR
EcDSEfe4T0YYxRFoD4ip3B9oGasAVvvFvr0ov7EGcqT7hXxsijeXt/pWszRT3MZ2CKfQDHzqLQKNwu59aI4e7br7POpgK2QCu9EFMXGFBks3iHTOa9S2RuZx67n5Uw+wcRyoA/WvRY8HiaQBB1wKFhSBo7QcWE+JzvRUdqy5ycj3CvUuYFP8NJJxnHhXiAPtq3vbktwxWSKeneH9BStjpE4rRSBxbD1wfnUxZR4+6MHfkDUVh1BgTJLHEuM/wAOL9T76tNrdqCUuskcxJGPzpHP+jqF+jxbQIAwjJHpgYoiOFHj44WVh5ZxippaSyRAz30gyNwiharOnWwYEvcuW/Ekqrn27frU3NFFjoEu4/sr/aI+HA/zU4vv+vtqttQEwT7JDJKTj0H1/SrF06yu9SMEfeLFH95ieNmPkPrrRR0CGHMljd3ML+TKN/b50eaXZuDe0Dxac8h7y7YFjv3aeFVojiggjOQAig5HlQs15c2HCbyOCZGGS0Jwy+0VRc6lFdI8dqONHXcMQWNDbN8Y/wCQSaG3itnQ3BmBcsscY4eH3mgYL2U3ORKQ6jCsd6i9tcoCQTwnlkjNRNrNHEZHGCd1z19aqr9knXoPuJZLqMunFxoP4kOSeH1A8vyoJ0kVPEyR8tmYVVDcurBicSDkw6URwwX5J4linPnsrH9K3FB5M90y0F9q9tatIpWRwpYZGB1+Qra9pO0IaM2tqO7hRRHEq7cKjbHwFYvTe+stUiZ4nUrtk9ARV+oX0RlIXjZubHl8KhPG5zX0WhkUI37KJZeEFj94+dL3lLsQP/VFvIrKDJbyn5ZqlZeA/wAKFIznm3iNXUaISlYTpjiEkyRqVPN2OOH31K+1ITjuIj/C5sx24v7UDJK8sid5KXB6AYx7qMt7W3ZcSHYrz3G9FiojbWomwzEY8h1q9oJLTxRYaJuSE8/ZVhlt9OthGkizzMDxhchRnofP+9BC6d5GkbdiOHPl7vrnRQH/AEmbkSDIJIH4TzWg3DvNlh97kc19cN/E2+8Oo2xUON0Pi688UQH/2Q==" />
These are GGUF quantized versions of [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B).
The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`. A portable Python alternative is sketched below.
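A minimal sketch (not part of the original card) that concatenates the split parts in order; the `foo-Q6_K.gguf.*` names follow the example above and should be adjusted to the files you actually downloaded:

```python
import glob
import shutil

# Collect the split parts in lexicographic order and stream them into a single GGUF file.
parts = sorted(glob.glob("foo-Q6_K.gguf.*"))
with open("foo-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```

The result should be byte-identical to the output of the `cat` command above. |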
dreamgen/opus-v1.2-7b-gguf | dreamgen | 2024-03-13T19:33:58Z | 486 | 9 | null | [
"gguf",
"unsloth",
"axolotl",
"text-generation",
"en",
"region:us"
]
| text-generation | 2024-02-20T15:17:02Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---
# DreamGen Opus V1
**DreamGen Opus V1** is a family of **uncensored** models fine-tuned for **(steerable) story-writing and role-playing**.
*WARNING:* GGUF versions might not perform as well as FP16 or AWQ.
See the full model [dreamgen/opus-v1.2-7b](https://huggingface.co/dreamgen/opus-v1.2-7b) for documentation.
[See other Opus V1 variants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
Best consumed at Q8_0. With small models, even modest quantization can result in dramatic quality loss. |
mradermacher/OpenBeagle-11B-GGUF | mradermacher | 2024-05-06T06:03:08Z | 486 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:vicgalle/OpenHermesPreferences-1k",
"base_model:vicgalle/OpenBeagle-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-23T04:41:43Z | ---
base_model: vicgalle/OpenBeagle-11B
datasets:
- vicgalle/OpenHermesPreferences-1k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/vicgalle/OpenBeagle-11B
<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
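As a concrete illustration (not part of the original card), here is a minimal sketch that downloads one of the quants from the table below and runs it with llama-cpp-python; the chosen file and parameters are examples only:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table below (Q4_K_S as an example).
path = hf_hub_download(
    repo_id="mradermacher/OpenBeagle-11B-GGUF",
    filename="OpenBeagle-11B.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Q: What is a beagle?\nA:", max_tokens=64)["choices"][0]["text"])
```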
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q3_K_L.gguf) | Q3_K_L | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.IQ4_XS.gguf) | IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q5_K_S.gguf) | Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q5_K_M.gguf) | Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q6_K.gguf) | Q6_K | 9.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBeagle-11B-GGUF/resolve/main/OpenBeagle-11B.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
theunlikely/t5-v1_1-xxl-fp16 | theunlikely | 2024-04-25T14:45:38Z | 486 | 2 | transformers | [
"transformers",
"safetensors",
"t5",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-25T14:21:16Z | Entry not found |
RichardErkhov/allenai_-_OLMo-7B-hf-gguf | RichardErkhov | 2024-05-24T02:38:59Z | 486 | 0 | null | [
"gguf",
"arxiv:2402.00838",
"arxiv:2302.13971",
"region:us"
]
| null | 2024-05-24T00:27:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OLMo-7B-hf - GGUF
- Model creator: https://huggingface.co/allenai/
- Original model: https://huggingface.co/allenai/OLMo-7B-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OLMo-7B-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q2_K.gguf) | Q2_K | 2.44GB |
| [OLMo-7B-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.IQ3_XS.gguf) | IQ3_XS | 2.69GB |
| [OLMo-7B-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.IQ3_S.gguf) | IQ3_S | 2.83GB |
| [OLMo-7B-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q3_K_S.gguf) | Q3_K_S | 2.83GB |
| [OLMo-7B-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.IQ3_M.gguf) | IQ3_M | 2.99GB |
| [OLMo-7B-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q3_K.gguf) | Q3_K | 3.16GB |
| [OLMo-7B-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q3_K_M.gguf) | Q3_K_M | 3.16GB |
| [OLMo-7B-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q3_K_L.gguf) | Q3_K_L | 3.44GB |
| [OLMo-7B-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.IQ4_XS.gguf) | IQ4_XS | 3.49GB |
| [OLMo-7B-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q4_0.gguf) | Q4_0 | 3.66GB |
| [OLMo-7B-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.IQ4_NL.gguf) | IQ4_NL | 3.68GB |
| [OLMo-7B-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q4_K_S.gguf) | Q4_K_S | 3.69GB |
| [OLMo-7B-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q4_K.gguf) | Q4_K | 3.9GB |
| [OLMo-7B-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q4_K_M.gguf) | Q4_K_M | 3.9GB |
| [OLMo-7B-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q4_1.gguf) | Q4_1 | 4.05GB |
| [OLMo-7B-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q5_0.gguf) | Q5_0 | 4.44GB |
| [OLMo-7B-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q5_K_S.gguf) | Q5_K_S | 4.44GB |
| [OLMo-7B-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q5_K.gguf) | Q5_K | 4.56GB |
| [OLMo-7B-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q5_K_M.gguf) | Q5_K_M | 4.56GB |
| [OLMo-7B-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q5_1.gguf) | Q5_1 | 4.83GB |
| [OLMo-7B-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q6_K.gguf) | Q6_K | 5.26GB |
| [OLMo-7B-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-7B-hf-gguf/blob/main/OLMo-7B-hf.Q8_0.gguf) | Q8_0 | 6.82GB |
Original model description:
---
language:
- en
license: apache-2.0
datasets:
- allenai/dolma
---
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 7B
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) for the
Hugging Face Transformers format.
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion |16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 |
We are releasing many checkpoints for these models, for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running with the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```
Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to input data types and CUDA handling, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
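As a rough sketch of that 8-bit path (assuming a CUDA GPU and `bitsandbytes` installed; the generation settings here are illustrative, not an official recipe):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in 8-bit as described above (requires bitsandbytes).
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# Pass the input ids on CUDA explicitly, as recommended for the quantized model.
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```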
### Fine-tuning
This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or many intermediate checkpoints of
[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the 7B model are found below.
| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
| --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
And for the 1B model:
| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
### Hyperparameters
AdamW optimizer parameters are shown below.
| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
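Expressed as a standard PyTorch `AdamW` constructor, the 7B row corresponds roughly to the sketch below (the stand-in module and the wiring are assumptions, not part of the original training stack):
```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in; the real module is the OLMo 7B model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3.0e-4,          # peak LR for the 7B model
    betas=(0.9, 0.99),
    eps=1.0e-5,
    weight_decay=0.1,
)
```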
Optimizer settings comparison with peer models.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
## Environmental Impact
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
In addition, many statements produced by OLMo, as by any LLM, may simply be untrue, so they should be fact-checked.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.
|
zmilczarek/pii-detection-roberta-v2 | zmilczarek | 2024-06-02T15:06:07Z | 486 | 1 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-05-24T18:25:25Z | ---
library_name: transformers
tags: []
---
Evaluation results:

- `eval_recall`: 0.8086734693877551
- `eval_precision`: 0.8386243386243386
- `eval_fbeta_score`: 0.8097858125368441
|
mmnga/aya-23-8B-gguf | mmnga | 2024-05-27T00:54:36Z | 486 | 0 | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-05-26T16:32:53Z | ---
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
license: cc-by-nc-4.0
---
# aya-23-8B-gguf
This is a GGUF-format conversion of [aya-23-8B, published by CohereForAI](https://huggingface.co/CohereForAI/aya-23-8B).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'aya-23-8B-Q4_0.gguf' -n 128 -p 'こんにちわ'
``` |
QuantFactory/Llama3-German-8B-GGUF | QuantFactory | 2024-05-30T07:50:11Z | 486 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"de",
"arxiv:2404.10830",
"base_model:DiscoResearch/Llama3-German-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-30T06:43:13Z | ---
license: llama3
language:
- de
library_name: transformers
base_model: DiscoResearch/Llama3-German-8B
pipeline_tag: text-generation
---
# Llama3-German-8B-GGUF
This is quantized version of [DiscoResearch/Llama3-German-8B](https://huggingface.co/DiscoResearch/Llama3-German-8B) created using llama.cpp
## Model Description
Llama3-German-8B-v0.1 is a large language model based on [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). It is specialized for the German language through continuous pretraining on 65 billion high-quality tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
Llama3 itself was trained on 15T tokens, of which only <1T were multilingual, resulting in suboptimal performance in German with reduced linguistic capabilities and frequent grammatical errors, motivating the necessity for continued pretraining. Benchmark results on our model show minimal degradation in English performance, despite the absence of replay during training. Importantly, Llama3-German-8B-v0.1 demonstrates strong improvements in German, particularly on the Hellaswag benchmark, which measures linguistic understanding and general reasoning.
[DiscoResearch/Llama3-German-8B-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot) with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai). Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer.
## How to use
This is a base model and should probably be subject to finetuning before use. See our [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) for various finetuned and long-context versions.
## Model Training and Hyperparameters
The model was trained on 128 GPUs on [hessian.Ai 42](https://hessian.ai) for ~60 hours. See detailed hyperparameters below.
| Parameter | Value |
|-------------------|-----------------------------------|
| Sequence Length | 8192 tokens |
| Learning Rate | 1.5e-5 to 1.5e-6 (cosine schedule)|
| Batch Size | 4194304 (512*8192) tokens |
| Micro Batch Size | 4*8192 tokens |
| Training Steps | 15500 |
| Warmup Steps | 155 (1%) |
| Weight Decay | 0.05 |
| Optimizer | AdamW |
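Read as PyTorch scheduler objects, the learning-rate settings above correspond roughly to the sketch below (a toy stand-in model; the warmup/decay wiring is an assumption, since the actual training stack is not described here):
```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the actual 8B model
opt = torch.optim.AdamW(model.parameters(), lr=1.5e-5, weight_decay=0.05)

# 155 linear warmup steps, then cosine decay from 1.5e-5 down to 1.5e-6 over the
# remaining ~15345 of the 15500 training steps.
warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=1e-3, total_iters=155)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=15500 - 155, eta_min=1.5e-6)
sched = torch.optim.lr_scheduler.SequentialLR(opt, schedulers=[warmup, cosine], milestones=[155])
```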
## Data Collection and Preprocessing
For pre-training, we used 65B German tokens from the [occiglot-fineweb-0.5](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) dataset.
The data comprises multiple curated datasets from [LLM-Datasets](https://github.com/malteos/llm-datasets) as well as 12 [Common-Crawl](https://commoncrawl.org) releases that were processed with [OSCAR's Ungoliant pipeline](https://github.com/oscar-project/ungoliant).
All data was further filtered with a set of language-specific filters based on [Huggingface's fine-web](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py) and globally deduplicated.
For more information please refer to the [dataset card](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) and corresponding [blog-post](https://occiglot.eu/posts/occiglot-fineweb/).
## Evaluation and Results
We evaluated the model using a suite of common English Benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark).
The following figure shows the benchmark results in comparison to the base model [meta-llama/Meta-Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and two different hyperparameter configurations.
We swept different learning rates to identify a well-working setup. The final released model is the 1.5e-5 lr version.

Find the detailed benchmark scores for the base and long-context models in this table.
| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|--------------------------------------|----------------|---------------|---------------|------------------|-----------|--------------|--------|---------|------------|
| DiscoResearch/Llama3-German-8B | **0.49499** | 0.44838 | 0.55802 | **0.49829** | 0.79924 | **0.65395** | 0.62240| **0.54413** | **0.57743** |
| DiscoResearch/Llama3-German-8B-32k | 0.48920 | **0.45138** | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774| 0.47971 | 0.55982 |
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025**| 0.60008 | **0.66658**| 0.53541 | 0.57656 |
## Long-Context Extension
In addition to the base model, we release a long-context version of Llama3-German-8B ([DiscoResearch/Llama3-German-8B-32k](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k)) capable of processing context lengths up to 65k tokens. This variant was trained on an additional 100 million tokens at 32k context length, using a rope_theta value of `1.5e6` and a learning rate of `1.5e-5` with a batch size of `256*8192` tokens and otherwise equal hyperparameters to the base model.
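The `rope_theta` change surfaces directly in the Hugging Face config, so it can be inspected like this (a sketch; it assumes the 32k repository ships a standard Llama-style `config.json`):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DiscoResearch/Llama3-German-8B-32k")
print(config.rope_theta)               # expected to be 1.5e6 per the description above
print(config.max_position_embeddings)  # extended context window
```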
## Instruction Tuning
We also provide an instruction-tuned version: [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1), utilizing the DiscoLM German dataset for fine-tuning (also available as a long-context model at [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1)).
Find more details in the respective model cards. Also check out our experimental merge ([DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental)) between [meta-llama/Meta-Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and our finetuned model in an attempt to keep the extraordinary capabilities of Llama3-Instruct and add exceptional German skills.
## Document Packing
We employed a more intelligent document packing strategy based on the ["Fewer Truncations Improve Language Modeling" paper by Ding et al.](https://arxiv.org/abs/2404.10830v2), using the first-fit-decreasing algorithm to pack documents into batches without truncation.
We packed our data in chunks of 10000 documents for more efficient processing while maintaining >99% packing efficiency. Documents longer than the sequence length are split into chunks of sequence length.
This approach results in overall higher benchmark scores when training on the same data with equal hyperparameters. The following numbers are from initial experiments with `3e-5 lr` and 12k steps and show improvements comparable to those shown in the original paper.
| Task | Naive Packing | Fewer Truncations Packing | Percentage Increase |
|-------------------|---------------|---------------------------|---------------------|
| truthfulqa_mc | 0.452648 | 0.467687 | 3.32% |
| arc_challenge | 0.517918 | 0.528157 | 1.98% |
| truthful_qa_de | 0.485529 | 0.492979 | 1.53% |
| arc_challenge_de | 0.480375 | 0.493174 | 2.66% |
| hellaswag | 0.776041 | 0.773352 | -0.35% |
| hellaswag_de | 0.655248 | 0.653356 | -0.29% |
| MMLU | 0.573719 | 0.579802 | 1.06% |
| MMLU-DE | 0.504509 | 0.503863 | -0.13% |
The following is our simple implementation of the first-fit-decreasing algorithm described in the paper.
```python
def pack_documents(tokenized_documents):
# Sort documents by their length in descending order
sorted_docs = sorted(tokenized_documents, key=len, reverse=True)
# Initialize bins
bins = []
# Function to find the first bin that can accommodate the document
def find_bin(doc):
for b in bins:
if sum(len(d) for d in b) + len(doc) <= 8192:
return b
return None
# Place each document in the first available bin or create a new bin
for doc in sorted_docs:
target_bin = find_bin(doc)
if target_bin is not None:
target_bin.append(doc)
else:
# Create a new bin with this document if no suitable bin is found
bins.append([doc])
# Return results
return bins
```
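For illustration, a quick toy run of the function above (the token-id lists are made-up stand-ins for real tokenized documents):
```python
# Three "documents" of 5000, 4000 and 3000 tokens packed into 8192-token bins:
docs = [list(range(5000)), list(range(4000)), list(range(3000))]
bins = pack_documents(docs)
print([sum(len(d) for d in b) for b in bins])  # -> [8000, 4000]
```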
## Model Configurations
We release DiscoLeo-8B in the following configurations:
1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3-German-8B)
2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k)
3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1)
4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1)
5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental)
6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42)
## How to use:
Here's how to use the model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
"DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1")
prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Acknowledgements
The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration.
The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D). |
mradermacher/Multimoph-7B-GGUF | mradermacher | 2024-06-01T19:37:30Z | 486 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:theprint/Multimoph-7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-01T18:03:51Z | ---
base_model: theprint/Multimoph-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/theprint/Multimoph-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
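As one concrete option, a quant from the table below can be loaded with `llama-cpp-python` (a sketch; it assumes the Q4_K_M file has already been downloaded to the working directory):
```python
from llama_cpp import Llama

llm = Llama(model_path="Multimoph-7B.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a short poem about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```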
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Multimoph-7B-GGUF/resolve/main/Multimoph-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF | NikolayKozloff | 2024-06-30T16:30:09Z | 486 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-30T16:29:47Z | ---
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_XS-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_xs-imat.gguf -c 2048
```
|
unicamp-dl/ptt5-small-t5-vocab | unicamp-dl | 2024-04-10T17:39:04Z | 485 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained in the BrWac corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to the [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, bare model + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# TensorFlow (bare model, bare model + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
MBZUAI/LaMini-Cerebras-1.3B | MBZUAI | 2023-04-28T13:07:55Z | 485 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"arxiv:2304.14402",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-16T13:17:24Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
widget:
- text: >-
Below is an instruction that describes a task.
Write a response that appropriately completes the request.
### Instruction:
how can I become more healthy?
### Response:
example_title: example
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Cerebras-1.3B
[]()
This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
This model is a fine-tuned version of [cerebras/Cerebras-GPT-1.3B](https://huggingface.co/cerebras/Cerebras-GPT-1.3B) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance.
See the example on the right or the code below.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "MBZUAI/LaMini-Cerebras-1.3B"
model = pipeline('text-generation', model = checkpoint)
instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [cerebras/Cerebras-GPT-1.3B](https://huggingface.co/cerebras/Cerebras-GPT-1.3B) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 1.3B.
### Training Hyperparameters
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](https://arxiv.org/abs/2304.14402).
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
timm/densenetblur121d.ra_in1k | timm | 2023-04-21T22:55:07Z | 485 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1904.11486",
"arxiv:1608.06993",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-21T22:54:59Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for densenetblur121d.ra_in1k
A DenseNet image classification model. Pretrained on ImageNet-1k in `timm` by Ross Wightman using RandAugment `RA` recipe. Related to `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
This model uses [Blur Pooling](https://arxiv.org/abs/1904.11486).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.0
- GMACs: 3.1
- Activations (M): 7.9
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Densely Connected Convolutional Networks: https://arxiv.org/abs/1608.06993
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('densenetblur121d.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'densenetblur121d.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'densenetblur121d.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{huang2017densely,
title={Densely Connected Convolutional Networks},
author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q },
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2017}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
Linna/emotion-english-distilroberta-melinna | Linna | 2023-05-31T12:22:43Z | 485 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"distilroberta",
"sentiment",
"emotion",
"twitter",
"reddit",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-31T12:09:11Z | ---
language: en
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: Oh wow. I didn't know that.
- text: This movie always makes me cry..
- text: Oh Happy Day
duplicated_from: j-hartmann/emotion-english-distilroberta-base
---
# Emotion English DistilRoBERTa-base
# Description ℹ
With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets (see Appendix below) and predicts Ekman's 6 basic emotions, plus a neutral class:
1) anger 🤬
2) disgust 🤢
3) fear 😨
4) joy 😀
5) neutral 😐
6) sadness 😭
7) surprise 😲
The model is a fine-tuned checkpoint of [DistilRoBERTa-base](https://huggingface.co/distilroberta-base). For a 'non-distilled' emotion model, please refer to the model card of the [RoBERTa-large](https://huggingface.co/j-hartmann/emotion-english-roberta-large) version.
# Application 🚀
a) Run emotion model with 3 lines of code on single text example using Hugging Face's pipeline command on Google Colab:
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/simple_emotion_pipeline.ipynb)
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="j-hartmann/emotion-english-distilroberta-base", return_all_scores=True)
classifier("I love this!")
```
```python
Output:
[[{'label': 'anger', 'score': 0.004419783595949411},
{'label': 'disgust', 'score': 0.0016119900392368436},
{'label': 'fear', 'score': 0.0004138521908316761},
{'label': 'joy', 'score': 0.9771687984466553},
{'label': 'neutral', 'score': 0.005764586851000786},
{'label': 'sadness', 'score': 0.002092392183840275},
{'label': 'surprise', 'score': 0.008528684265911579}]]
```
b) Run emotion model on multiple examples and full datasets (e.g., .csv files) on Google Colab:
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/emotion_prediction_example.ipynb)
# Contact 💻
Please reach out to [[email protected]](mailto:[email protected]) if you have any questions or feedback.
Thanks to Samuel Domdey and [chrsiebert](https://huggingface.co/siebert) for their support in making this model available.
# Reference ✅
For attribution, please cite the following reference if you use this model. A working paper will be available soon.
```
Jochen Hartmann, "Emotion English DistilRoBERTa-base". https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/, 2022.
```
BibTex citation:
```
@misc{hartmann2022emotionenglish,
author={Hartmann, Jochen},
title={Emotion English DistilRoBERTa-base},
year={2022},
howpublished = {\url{https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/}},
}
```
# Appendix 📚
Please find an overview of the datasets used for training below. All datasets contain English text. The table summarizes which emotions are available in each of the datasets. The datasets represent a diverse collection of text types. Specifically, they contain emotion labels for texts from Twitter, Reddit, student self-reports, and utterances from TV dialogues. As MELD (Multimodal EmotionLines Dataset) extends the popular EmotionLines dataset, EmotionLines itself is not included here.
|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Crowdflower (2016)|Yes|-|-|Yes|Yes|Yes|Yes|
|Emotion Dataset, Elvis et al. (2018)|Yes|-|Yes|Yes|-|Yes|Yes|
|GoEmotions, Demszky et al. (2020)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|ISEAR, Vikash (2018)|Yes|Yes|Yes|Yes|-|Yes|-|
|MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|SemEval-2018, EI-reg, Mohammad et al. (2018) |Yes|-|Yes|Yes|-|Yes|-|
The model is trained on a balanced subset from the datasets listed above (2,811 observations per emotion, i.e., nearly 20k observations in total). 80% of this balanced subset is used for training and 20% for evaluation. The evaluation accuracy is 66% (vs. the random-chance baseline of 1/7 = 14%).
# Scientific Applications 📖
Below you can find a list of papers using "Emotion English DistilRoBERTa-base". If you would like your paper to be added to the list, please send me an email.
- Butt, S., Sharma, S., Sharma, R., Sidorov, G., & Gelbukh, A. (2022). What goes on inside rumour and non-rumour tweets and their reactions: A Psycholinguistic Analyses. Computers in Human Behavior, 107345.
- Kuang, Z., Zong, S., Zhang, J., Chen, J., & Liu, H. (2022). Music-to-Text Synaesthesia: Generating Descriptive Text from Music Recordings. arXiv preprint arXiv:2210.00434.
- Rozado, D., Hughes, R., & Halberstadt, J. (2022). Longitudinal analysis of sentiment and emotion in news media headlines using automated labelling with Transformer language models. Plos one, 17(10), e0276367. |
sail-rvc/Lana_Del_Rey_e1000_s13000 | sail-rvc | 2023-07-14T07:26:09Z | 485 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:25:52Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Lana_Del_Rey_e1000_s13000
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:26:09
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
TheBloke/StableBeluga2-70B-GGUF | TheBloke | 2023-09-27T12:48:09Z | 485 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"arxiv:2307.09288",
"arxiv:2306.02707",
"base_model:stabilityai/StableBeluga2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-06T00:49:31Z | ---
language:
- en
license: llama2
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
model_name: StableBeluga2
base_model: stabilityai/StableBeluga2
inference: false
model_creator: Stability AI
model_type: llama
pipeline_tag: text-generation
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# StableBeluga2 - GGUF
- Model creator: [Stability AI](https://huggingface.co/stabilityai)
- Original model: [StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Stability AI's StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StableBeluga2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF)
* [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
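For reference, filling this template for a single turn is plain string formatting (the system message below is an example, not taken from the model card):
```python
system_message = "You are StableBeluga, a helpful AI assistant."  # example text
prompt = "Explain what a GGUF file is in one sentence."

full_prompt = f"### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:\n"
print(full_prompt)
```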
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stablebeluga2-70B.Q2_K.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [stablebeluga2-70B.Q3_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [stablebeluga2-70B.Q3_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [stablebeluga2-70B.Q3_K_L.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [stablebeluga2-70B.Q4_0.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stablebeluga2-70B.Q4_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [stablebeluga2-70B.Q4_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [stablebeluga2-70B.Q5_0.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stablebeluga2-70B.Q5_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [stablebeluga2-70B.Q5_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF/blob/main/stablebeluga2-70B.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| stablebeluga2-70B.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| stablebeluga2-70B.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `stablebeluga2-70B.Q6_K.gguf-split-a`
* `stablebeluga2-70B.Q6_K.gguf-split-b`
### q8_0
Please download:
* `stablebeluga2-70B.Q8_0.gguf-split-a`
* `stablebeluga2-70B.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat stablebeluga2-70B.Q6_K.gguf-split-* > stablebeluga2-70B.Q6_K.gguf && rm stablebeluga2-70B.Q6_K.gguf-split-*
cat stablebeluga2-70B.Q8_0.gguf-split-* > stablebeluga2-70B.Q8_0.gguf && rm stablebeluga2-70B.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B stablebeluga2-70B.Q6_K.gguf-split-a + stablebeluga2-70B.Q6_K.gguf-split-b stablebeluga2-70B.Q6_K.gguf
del stablebeluga2-70B.Q6_K.gguf-split-a stablebeluga2-70B.Q6_K.gguf-split-b
COPY /B stablebeluga2-70B.Q8_0.gguf-split-a + stablebeluga2-70B.Q8_0.gguf-split-b stablebeluga2-70B.Q8_0.gguf
del stablebeluga2-70B.Q8_0.gguf-split-a stablebeluga2-70B.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/StableBeluga2-70B-GGUF and below it, a specific filename to download, such as: stablebeluga2-70B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/StableBeluga2-70B-GGUF stablebeluga2-70B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/StableBeluga2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/StableBeluga2-70B-GGUF stablebeluga2-70B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m stablebeluga2-70B.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
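For example, a minimal interactive invocation (same settings as above, with the prompt flag swapped for instruct mode) might look like this:
```shell
./main -ngl 32 -m stablebeluga2-70B.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```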
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/StableBeluga2-70B-GGUF", model_file="stablebeluga2-70B.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Stability AI's StableBeluga2
# Stable Beluga 2
Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability AI's best language models for free
## Model Description
`Stable Beluga 2` is a Llama2 70B model fine-tuned on an Orca-style dataset
## Usage
Start chatting with `Stable Beluga 2` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga2", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are Stable Beluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Stable Beluga 2 should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of Stable Beluga 2
```
## Other Beluga Models
[StableBeluga 1 - Delta](https://huggingface.co/stabilityai/StableBeluga1-Delta)
[StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B)
[StableBeluga 7B](https://huggingface.co/stabilityai/StableBeluga-7B)
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 2 is an auto-regressive language model fine-tuned on Llama2 70B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 2`) are licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga2/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Training Dataset
`Stable Beluga 2` is trained on our internal Orca-style dataset
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
## Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## How to cite
```bibtex
@misc{StableBelugaModels,
url={[https://huggingface.co/stabilityai/StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2)},
title={Stable Beluga models},
author={Mahan, Dakota and Carlow, Ryan and Castricato, Louis and Cooper, Nathan and Laforte, Christian}
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
TheBloke/13B-BlueMethod-GGUF | TheBloke | 2023-09-27T12:52:10Z | 485 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"alpaca",
"cot",
"vicuna",
"uncensored",
"merge",
"mix",
"base_model:CalderaAI/13B-BlueMethod",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-19T20:53:42Z | ---
license: other
tags:
- llama
- alpaca
- cot
- vicuna
- uncensored
- merge
- mix
model_name: 13B BlueMethod
base_model: CalderaAI/13B-BlueMethod
inference: false
model_creator: Caldera AI
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# 13B BlueMethod - GGUF
- Model creator: [Caldera AI](https://huggingface.co/CalderaAI)
- Original model: [13B BlueMethod](https://huggingface.co/CalderaAI/13B-BlueMethod)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CalderaAI's 13B BlueMethod](https://huggingface.co/CalderaAI/13B-BlueMethod).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/13B-BlueMethod-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/13B-BlueMethod-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF)
* [Caldera AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/13B-BlueMethod)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [13b-bluemethod.Q2_K.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [13b-bluemethod.Q3_K_S.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [13b-bluemethod.Q3_K_M.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [13b-bluemethod.Q3_K_L.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [13b-bluemethod.Q4_0.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [13b-bluemethod.Q4_K_S.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [13b-bluemethod.Q4_K_M.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [13b-bluemethod.Q5_0.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [13b-bluemethod.Q5_K_S.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [13b-bluemethod.Q5_K_M.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [13b-bluemethod.Q6_K.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [13b-bluemethod.Q8_0.gguf](https://huggingface.co/TheBloke/13B-BlueMethod-GGUF/blob/main/13b-bluemethod.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/13B-BlueMethod-GGUF and below it, a specific filename to download, such as: 13b-bluemethod.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/13B-BlueMethod-GGUF 13b-bluemethod.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/13B-BlueMethod-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-BlueMethod-GGUF 13b-bluemethod.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m 13b-bluemethod.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-BlueMethod-GGUF", model_file="13b-bluemethod.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
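As a brief illustration, a minimal LangChain + ctransformers sketch for this model might look like the following (parameter names follow the CTransformers integration linked above; the quant file and generation settings are just example choices):
```python
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/13B-BlueMethod-GGUF",
    model_file="13b-bluemethod.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)
print(llm("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a haiku about the sea.\n\n### Response:\n"))
```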
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CalderaAI's 13B BlueMethod
## 13B-BlueMethod
## Composition:
BlueMethod is a bit of a convoluted experiment in tiered merging.
To push the experimental nature further, the models were combined with a
custom script that randomized the percentage of each layer merged from one
model into the next (a toy sketch of that idea follows the tier list below).
This is a warmup for a larger project.
[Tier One and Two Merges not released; internal naming convention]
Tier One Merges:
13B-Metharme+13B-Nous-Hermes=13B-Methermes
13B-Vicuna-cocktail+13B-Manticore=13B-Vicortia
13B-HyperMantis+13B-Alpacino=13B-PsychoMantis
Tier Two Merges:
13B-Methermes+13B-Vicortia=13B-Methphistopheles
13B-PsychoMantis+13B-BlueMoonRP=13B-BlueMantis
Tier Three Merge:
13B-Methphistopheles+13B-BlueMantis=13B-BlueMethod
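The merge script itself has not been released; purely as an illustration of the idea, a randomized per-layer blend of two same-architecture checkpoints could look something like this sketch (function and variable names are made up for the example):
```python
import random

def random_layer_merge(state_dict_a, state_dict_b, seed=0):
    """Blend two checkpoints of the same architecture, drawing a fresh mix ratio per tensor."""
    random.seed(seed)
    merged = {}
    for name, tensor_a in state_dict_a.items():
        ratio = random.random()  # fraction of model A kept for this tensor
        merged[name] = ratio * tensor_a + (1.0 - ratio) * state_dict_b[name]
    return merged
```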
## Use:
Multiple instruct models and model composites were combined to produce the final model.
As a result, it is highly open to experimental prompting: both Alpaca and Vicuna instruct formats can be used,
and the results can be interesting.
## Language Models and LoRAs Used Credits:
13B-Metharme by PygmalionAI
https://www.huggingface.co/PygmalionAI/metharme-13b
13B-Nous-Hermes by NousResearch
https://www.huggingface.co/NousResearch/Nous-Hermes-13b
13B-Vicuna-cocktail by reeducator
https://www.huggingface.co/reeducator/vicuna-13b-cocktail
13B-Manticore by openaccess-ai-collective
https://www.huggingface.co/openaccess-ai-collective/manticore-13b
13B-HyperMantis and 13B-Alpacino by Digitous
https://huggingface.co/digitous/13B-HyperMantis
https://huggingface.co/digitous/Alpacino13b
Also thanks to Meta for LLaMA.
Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things
to come out of this community.
<!-- original-model-card end -->
|
gorilla-llm/gorilla-falcon-7b-hf-v0-gguf | gorilla-llm | 2024-01-29T11:13:18Z | 485 | 2 | null | [
"gguf",
"region:us"
]
| null | 2024-01-29T11:08:04Z | Entry not found |
neuralmagic/llama2.c-stories110M-pruned50 | neuralmagic | 2024-03-05T05:21:40Z | 485 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"nm-vllm",
"sparse",
"arxiv:2301.00774",
"base_model:Xenova/llama2.c-stories110M",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-04T23:57:52Z | ---
base_model: Xenova/llama2.c-stories110M
inference: true
model_type: llama
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---
## llama2.c-stories110M-pruned50
This repo contains model files for [llama2.c 110M tinystories](https://huggingface.co/Xenova/llama2.c-stories110M) optimized for [NM-vLLM](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [NM-vLLM](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:
```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:
```python
from vllm import LLM, SamplingParams
model = LLM("nm-testing/llama2.c-stories110M-pruned50", sparsity="sparse_w16a16")
prompt = "Hello my name is"
sampling_params = SamplingParams(max_tokens=100, temperature=0)
outputs = model.generate(prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
## Prompt template
N/A
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install [SparseML](https://github.com/neuralmagic/sparseml):
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:
```python
import sparseml.transformers
original_model_name = "Xenova/llama2.c-stories110M"
calibration_dataset = "open_platypus"
output_directory = "output/"
recipe = """
test_stage:
obcq_modifiers:
SparseGPTModifier:
sparsity: 0.5
sequential_update: true
targets: ['re:model.layers.\d*$']
"""
# Apply SparseGPT to the model
sparseml.transformers.oneshot(
model=original_model_name,
dataset=calibration_dataset,
recipe=recipe,
output_dir=output_directory,
)
```
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
Lewdiculous/FuseChat-Kunoichi-10.7B-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-06T16:22:36Z | 485 | 7 | null | [
"gguf",
"merge",
"roleplay",
"region:us"
]
| null | 2024-03-06T11:11:04Z | ---
tags:
- gguf
- merge
- roleplay
---
This repository hosts GGUF-IQ-Imatrix quantizations for [Virt-io/FuseChat-Kunoichi-10.7B](https://huggingface.co/Virt-io/FuseChat-Kunoichi-10.7B).
**Uploaded:**
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
"Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
]
```
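To fetch just one of these quants instead of cloning the whole repository, the huggingface-hub CLI can filter by pattern (the exact filenames aren't listed here, so the include pattern below is an assumption — check the repo's file list first):
```shell
pip3 install huggingface-hub
huggingface-cli download Lewdiculous/FuseChat-Kunoichi-10.7B-GGUF-IQ-Imatrix --include "*Q4_K_M*" --local-dir .
```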

|
mradermacher/Bombus_3x8B-GGUF | mradermacher | 2024-05-05T15:12:56Z | 485 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"llama-3",
"en",
"base_model:Eurdem/Bombus_3x8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-24T04:48:29Z | ---
base_model: Eurdem/Bombus_3x8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- llama-3
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Eurdem/Bombus_3x8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
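For example, a minimal way to fetch the recommended Q4_K_M quant from this repo with the huggingface-hub CLI (filename taken from the table below):
```shell
pip3 install huggingface-hub
huggingface-cli download mradermacher/Bombus_3x8B-GGUF Bombus_3x8B.Q4_K_M.gguf --local-dir .
```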
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q2_K.gguf) | Q2_K | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.IQ3_XS.gguf) | IQ3_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q3_K_S.gguf) | Q3_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.IQ3_S.gguf) | IQ3_S | 8.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.IQ3_M.gguf) | IQ3_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q3_K_M.gguf) | Q3_K_M | 9.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q3_K_L.gguf) | Q3_K_L | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.IQ4_XS.gguf) | IQ4_XS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q4_K_S.gguf) | Q4_K_S | 11.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q4_K_M.gguf) | Q4_K_M | 11.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q5_K_S.gguf) | Q5_K_S | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q5_K_M.gguf) | Q5_K_M | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q6_K.gguf) | Q6_K | 15.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-GGUF/resolve/main/Bombus_3x8B.Q8_0.gguf) | Q8_0 | 20.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nihaomur/Llama3-TAIDE-LX-8B-Chat-Alpha1-Q5_K_M | nihaomur | 2024-06-25T06:43:57Z | 485 | 0 | null | [
"gguf",
"license:other",
"region:us"
]
| null | 2024-05-07T03:19:24Z | ---
license: other
license_name: llama3-taide-models-community-license-agreement
license_link: https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view
---
# Quantized model: Q5_K_M by Troy Chuang
* This model was quantized from [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) to a Q5_K_M model with [Llama.cpp](https://github.com/ggerganov/llama.cpp); a sketch of the typical conversion commands follows below.
* Please note: quantization loses some precision and will slightly reduce model quality, in exchange for faster inference.
* The original model card for Llama3-TAIDE-LX-8B-Chat-Alpha1 follows.
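The exact commands used for this quantization are not listed; assuming the standard llama.cpp workflow of the time, the conversion would look roughly like this (script and binary names vary between llama.cpp versions, and the local paths are placeholders):
```shell
# Convert the original HF checkpoint (downloaded locally) to an FP16 GGUF, then quantize to Q5_K_M.
python convert-hf-to-gguf.py ./Llama3-TAIDE-LX-8B-Chat-Alpha1 --outfile taide-8b-f16.gguf --outtype f16
./quantize taide-8b-f16.gguf taide-8b-Q5_K_M.gguf Q5_K_M
```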
# Model overview
* The [TAIDE project](https://taide.tw/index) is dedicated to developing generative AI dialogue engine models that reflect Taiwan's language and culture, while building a trustworthy AI environment. By combining capacity from industry, academia, and research, it promotes the development of trustworthy generative AI, strengthens Taiwan's position in international competition, supports industrial development, and avoids dependence on foreign technology.
* The Llama3 TAIDE model series is based on Meta's [LLaMA3-8b](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/), with text and training materials from various domains in Taiwan added to improve the model's ability to respond in Traditional Chinese and its performance on specific tasks. The publicly released models are:
* [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): based on LLaMA3-8b, continuously pretrained on Traditional Chinese data and instruction-tuned to strengthen common office tasks and multi-turn question answering; suitable for chat or task-assistance scenarios. A [4-bit quantized model](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit) is also provided for convenience; quantization may affect performance and cause other unexpected issues, so please use it with that in mind.
# Model parameters
* Parameters: 8B
* Maximum context length: 8K
* Traditional Chinese training data: 43B tokens
* Training time: 2336 H100 GPU hours
# Features
* Training data is strictly vetted, improving the trustworthiness and applicability of the model's output
* Strengthened for common office tasks such as automatic summarization, letter writing, article writing, Chinese-to-English and English-to-Chinese translation
* Strengthened on local Taiwanese culture, usage, and national context
* Capable of multi-turn question answering
# Application examples
| Task | User input | Model output |
| --- | -------- | ------- |
| 寫文章 | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 一張舊照片,承載著時光的痕跡和回憶的溫度。我還記得那張照片是怎麼出現的,那是一張泛黃的黑白相片,裱在牆上,訴說著一段屬於我家族的故事。<br><br>那張照片中,我的祖父母站在田野間,後方是綿延的山巒,前方則是一片金黃的稻田。這幅景象令我印象深刻,因為這是我的童年回憶,也是我成長過程中最美好的風景線。<br><br>我出生在台灣鄉下,從小就生活在這樣一個純樸的環境中。每當我跟著家人在田裡工作時,就會看到這樣的美景。金黃的稻穗隨著微風搖曳,山巒則靜靜地矗立在遠方,給人一種安詳與祥和的感覺。<br><br>那張照片拍攝於民國50年代,當時台灣經濟尚未起飛,農業為主要產業。我的祖父母都是農民,他們辛勤耕種,為我們家提供溫飽。每當我看到那張照片,就會想起他們為我們所做的犧牲和努力。<br><br>除了美麗的風景外,照片中還有一段動人的故事。在我還很小的時候,家裡發生了一場大火,將我們的房子和所有財產都燒光了。幸好全家人都平安無事,但生活卻變得十分困苦。我們被迫搬到另一個村莊,在那裡重新開始生活。<br><br>那張照片也記錄了那段艱辛的歲月。雖然我們失去了許多東西,但家人之間的感情卻更加深厚。我們互相扶持,共同面對生活上的種種挑戰。<br><br>如今,我已長大成人,在都市裡工作。但每當我看到那張舊照片,我都會想起那段簡單而美好的時光。那些美麗的風景、親人的笑容,以及那段充滿挑戰卻又充滿希望的日子,都將永遠鐫刻在我的記憶深處。<br><br>這張舊照片,不僅僅是一張照片,更是一個寶貴的家族史,一個關於我們如何克服困難、珍惜現在的故事。我將把這張照片好好保存下來,讓後代也能夠認識我們的根源,並且銘記那些值得珍藏的美好回憶。 |
| 寫信 | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | [您的姓名]<br>[您的職稱]<br>[您的公司名稱]<br>[郵件地址]<br>[電話號碼]<br>[發文日期]<br><br>[創意總監姓名]<br>[廣告公司名稱]<br>[廣告公司地址]<br><br>主旨:關於新產品宣傳活動創意發想,請查照。<br><br>尊敬的[創意總監姓名],<br><br>我希望這封信能找到你忙碌中好好休息。我寫這封信,是為了分享我對於我們即將推出的新產品宣傳活動一些創意發想。我深信,這些想法能幫助我們新產品在市場上脫穎而出,吸引消費者的目光。<br><br>首先,我建議我們可以利用社群媒體平台,像是Facebook、Instagram、YouTube等,製作一系列引人入勝的短片和貼文。這些內容可以展現新產品的特色和優點,同時也能傳達出我們品牌重視環保、健康的核心價值。我們可以與網紅或意見領袖合作,讓他們試用並分享他們的使用經驗,這將有助於增加我們產品的可信度和能見度。<br><br>其次,我們可以舉辦新產品試吃/試用活動,邀請民眾親身體驗我們的新產品。我們可以在各大百貨公司、超市、以及我們的門市舉辦這些活動,讓參與者免費試吃/試用新產品,並填寫問卷回饋意見。這種互動的方式,不僅能收集到寶貴的市場反饋,還能讓民眾對我們的品牌產生正面的印象。<br><br>第三,我們可以與當地餐飲業者合作,將我們的新產品納入他們的菜單中。這將使我們的產品接觸到更廣泛的客群,同時也能證明我們的產品在烹飪和美食產業中具有價值。<br><br>第四,我們可以舉辦一場新產品發表會,邀請媒體、業界人士、以及重要客戶參加。在這場活動中,我們可以展示新產品的各項特色和優勢,並由專業廚師示範如何將新產品融入各式料理中。這將是向大眾展示我們品牌創新精神和專業能力的絕佳機會。<br><br>最後,我建議我們可以製作一系列精美的包裝和禮盒,以吸引消費者在節慶和送禮時選擇我們的產品。精緻的包裝將能提升我們產品的價值感,同時也能傳達出我們品牌對細節和品質的堅持。<br><br>我深信,這些創意發想將有助於我們新產品在市場上獲得成功。若您對這些想法有任何意見或建議,請不吝與我分享。我期待著與您合作,共同為[您的公司名稱]的成長和成功而努力。<br><br>謝謝您抽空閱讀我的信,並期待著與您進一步討論這些想法。<br><br>敬禮<br><br>[您的姓名]<br>[您的職稱]<br>[您的公司名稱] |
| 摘要 | 請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 文章重點摘要如下:<br><br>- 「台版晶片法」或《產業創新條例第10條之2及第72條條文修正案》,旨在鼓勵關鍵技術創新企業,特別是那些在半導體、電動車、5G技術等領域且在國際供應鏈中扮演關鍵角色者。<br><br>- 企業若符合特定條件,包括當年度研發費用及研發密度達標,可享最高25%的營所稅抵減優惠。研發密度目標設為6%,而企業為取得抵減,必須證明其有效稅率達到特定比率,起初設定為112年的12%,113年提高至15%,但可能會依國際稅制的實施情形進行調整。<br><br>- 政府在制定政策時,考量到OECD最低稅負制的調整,因此會視情況調整有效稅率門檻。<br><br>- 政府部門已就法案內容與企業利益相關者進行協商,確定一些細節,例如企業投資先進製程設備的門檻設為100億元新台幣以供抵減。<br><br>- 研發費用門檻經過討論,可能從最初提議的100億元下調至60億至70億元之間,以激勵更多企業符合資格並投入研發,維持台灣在國際供應鏈中的關鍵地位。<br><br>- 經濟部官員表示,調降門檻的目的是讓更多企業覺得能夠達成標準,從而獲得應用租稅優惠的動力,並可能藉此提高在台投資金額,進而增加政府稅收。<br><br>- 由於IC設計業者的研發費用普遍較低,若門檻設得過高,符合條件的企業將很少,因此有必要調整。<br><br>- 在各國推動供應鏈自主化並增加對半導體產業的補助之際,經濟部正努力確保更多企業能從《產業創新條例》第10條之2中受益,以增強投資和鞏固台灣的技術地位。<br><br>- 政府強調租稅優惠的制定應符合獎勵目標,在此同時,也將繼續提供其他租稅優惠給在國內經營的企業,以促進台灣經濟成長。<br><br>- 最終的研發費用門檻將由經濟部和財政部共同確定,在預告相關子法後,將有約30天的時間供各界討論並提出調整建議,目標是讓這項新政策在6月上路。 |
| 翻譯 | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. | 東山咖啡以獨特的位置,加上不斷精進的製作手法而聞名,風味更是讓許多咖啡癡趨之若鶩。 |
# Usage
* Example code
* Example code and documentation will be released on GitHub later
* Prompt templates
* General Q&A usage
```python
chat = [
{"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
* Replace {question} with the user's input
* Usage with a system prompt
```python
chat = [
{"role": "system", "content": "{sys}"},
{"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
* Replace {sys} with the system instruction, e.g.: 你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 (You are an AI assistant from Taiwan named TAIDE, happy to help users from a Taiwanese point of view, answering in Traditional Chinese.)
* Replace {question} with the user's question
* Multi-turn Q&A usage
```python
chat = [
{"role": "system", "content": "{sys}"},
{"role": "user", "content": "{question1}"},
{"role": "assistant", "content": "{model_anwer_1}"},
{"role": "user", "content": "{question2}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
* Replace {sys} with the system instruction, e.g. the Taiwanese-assistant prompt shown above
* Replace {question1} with the user's first question
* Replace {model_answer_1} with the model's first answer
* Replace {question2} with the user's second question
* For more details, see the [Llama3 documentation](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
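For reference, a minimal end-to-end sketch of the pattern described above, using the original (non-quantized) checkpoint with Hugging Face transformers; the model ID comes from this card, while the sampling settings and device placement are illustrative assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taide/Llama3-TAIDE-LX-8B-Chat-Alpha1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

chat = [
    {"role": "system", "content": "你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。"},
    {"role": "user", "content": "請簡單介紹台灣的夜市文化。"},
]
# apply_chat_template builds the Llama3-style prompt and returns input ids ready for generate().
input_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```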
# Training method
* Hardware and software
* NCHC (National Center for High-performance Computing) H100
* Training framework: PyTorch
* Data preprocessing
* Character normalization
* Deduplication
* Noise removal
* HTML tags and JavaScript in web data
* Non-standard characters and mojibake
* Articles that are too short
* Removal of formatting artifacts, such as line breaks added for layout
* Removal of personal data, such as emails and phone numbers
* Removal of inappropriate content, such as gambling and pornography
* Continuous pretraining (CP)
* Supplements a large amount of Traditional Chinese knowledge from trustworthy sources
* Hyperparameters
* optimizer: AdamW
* learning rate: 1e-4
* batch size: 1M tokens
* epoch: 1
* Fine-tuning (FT)
* Enables the model to answer questions asked in Traditional Chinese
* Hyperparameters
* optimizer: AdamW
* learning rate: 5e-5
* batch size: 256K tokens
* epoch: 3
# Training data
* Continuous pretraining data (roughly 140 GB in total)
| Dataset | Description |
| --- | -------- |
| 訴訟資料 | 《[司法院裁判書](https://judgment.judicial.gov.tw/FJUD/default.aspx)》自2013年1月至2023年12月各級法院民事、刑事、行政訴訟資料。 |
| 中央社 | 《[中央社中文新聞](https://www.cna.com.tw/)》資料集含中央社自1993年6月至2023年06月,共30年份之每日新聞文章。內容涵蓋國內外政治、社會、財經、文教、生活等領域。 |
| ETtoday 新聞雲 | 《[ETtoday新聞雲](https://www.ettoday.net/)》資料,包含自2011年10月至 2023年12月的資料。 |
| 立法院公報 | 《[立法院公報](https://ppg.ly.gov.tw/ppg/)》包含自第8屆第1會期至第10屆第7會期之公報資料。 |
| 出版商網站書籍介紹 | 包含[三采](https://www.suncolor.com.tw/)、[Gotop](https://www.gotop.com.tw/)出版商網站上的書籍簡介。 |
| GRB 研究計畫摘要 | [GRB](https://www.grb.gov.tw/)為收錄由政府經費補助之研究計畫及其成果報告的資訊系統,此資料集主要收錄 1993年至 2023年之研究計畫摘要以及研究報告摘要,含中文及其英文對照。 |
| 學術會議論文摘要 | 收錄《[學術會議論文摘要資料庫](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:)》中自1988至2009年由台灣所舉辦之學術會議論文。 |
| 光華雜誌 | 《[台灣光華雜誌](https://www.taiwan-panorama.com/)》含自1993年7月至2023年6月的文章,共30年份。內容著重於我國文化、觀光與民情等。 |
| 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》涵蓋文理領域約187萬則學術名詞及其譯名對照。 |
| 各部會資料 | 包含行政院「[國情簡介](https://www.ey.gov.tw/state/)」、文化部「[國家文化記憶庫](https://memory.culture.tw/)」、國發會「[檔案支援教學網](https://art.archives.gov.tw/index.aspx)」、交通部「[交通安全入口網](https://168.motc.gov.tw/)」等部會網站資料之部分資料。 |
| 今周刊 | 《[今周刊](https://www.businesstoday.com.tw/)》為一以財經為主的週刊雜誌,此資料集涵蓋2008年1月至2023年7月的文章。 |
| 教育部國語辭典、成語辭典 | 包含以下三項資料:<br>[教育部《成語典》](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0),含5,338條成語,內容包含每條成語的釋義、典故原文及其白話說明、用法說明、例句等。<br>[教育部《重編國語辭典修訂本》](https://dict.revised.moe.edu.tw/?la=0&powerMode=0),收錄中文單字及各類辭彙,包含讀音、部首、釋義等資訊,共約165,539筆資料。<br>[教育部《國語辭典簡編本》](https://dict.concised.moe.edu.tw/?la=0&powerMode=0),為《重編國語辭典修訂本》的簡編版本,共45,247筆資料。 |
| 科技大觀園資料 | 含《[科技大觀園網站](https://scitechvista.nat.gov.tw/)》上的科學新知以及科普文章。 |
| iKnow 科技產業資訊室 | 《[科技產業資訊室](https://iknow.stpi.narl.org.tw/)(iKnow)》提供台灣及全球的科技市場趨勢、策略分析、專利知識,及技術交易資訊,專注於科技產業的創新與發展,包含自 2008 年至 2023 年。 |
| 科學發展月刊 | 《[科學發展月刊](https://ejournal.stpi.narl.org.tw/sd)》為國科會為推廣科學教育而出版的科普刊物,含自2004年10月至2020年12月之科普文章;2021年起,以《[科技魅癮](https://www.charmingscitech.nat.gov.tw/)》季刊重新出發,提供國際關注科技議題的新知文章。 |
| 法規資料庫 | 《[法規資料庫](https://law.moj.gov.tw/)》含截自 112 年 10 月各政府部門最新發布之中央法規、行政規則、法規命令草案及地方自治法規等。 |
| 各地政府旅遊網 | 涵蓋台灣部分縣市地方政府觀光旅遊網站上之部分資料。 |
| 國教院課程綱要(十二年國教) | 含十二年國教課程綱要之總綱以及各級學校不同科目之課程綱要。 |
| 中央社譯名檔資料庫 | 《中央社譯名檔資料庫》蒐集中央社新聞業務上翻譯過的中外姓氏、人名、組織、地名等譯名對照。 |
| 童話書 | 共 20 本童話書,含湯姆歷險記、小飛俠、愛麗絲夢遊仙境、長腿叔叔等。 |
| RedPajama-Data-V2 | 從國外開放多國語言語料庫 [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) 取出英文資料 |
| MathPile-commercial | 國外開放數學語料庫 [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial) |
| 中文維基百科 | 《[中文維基百科](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91)》截至2023年1月所有條目的內容。 |
| github-code-clean | 為 github 開源程式碼資料集,去除unlicense的程式碼和文件。 |
* Fine-tuning data
* The TAIDE team trained llama2-series models to generate the fine-tuning data, covering single-turn and multi-turn Q&A tasks such as world knowledge, creative writing, common sense, translation, summarization, programming, and Taiwanese values, for a total of 128K examples. The fine-tuning data will be released later.
# Model evaluation
* taide-bench
* Evaluation data
* Article writing, letter writing, summarization, English-to-Chinese and Chinese-to-English translation, 500 questions in total
* Data link: [taide-bench](https://huggingface.co/datasets/taide/taide-bench)
* Evaluation method
* Scored by GPT-4
* Scoring program: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval)
* Evaluation scores
| Model | Chinese→English | English→Chinese | Summarization | Article writing | Letter writing | Average |
| --- | ----- | ----- | ---- | ---- | ---- | --- |
| Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 |
| GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 |
| TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 |
| LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 |
| LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 |
| LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 |
# License
* [Llama3-TAIDE model community license agreement](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view)
# Disclaimer
* Because of limitations in the model architecture and unavoidable biases in the data, no response from the language model represents TAIDE's position. Additional safety mechanisms should be added before use, and responses may contain incorrect information, so users should not trust them blindly.
# Development team
* [https://taide.tw/index/teamList](https://taide.tw/index/teamList)
# Related links
* [TAIDE official website](https://taide.tw/index)
* [TAIDE Huggingface](https://huggingface.co/taide)
* [TAIDE Github](https://github.com/taide-taiwan)
* [Kuwa AI](https://kuwaai.org/)
# Citation
* [TAIDE official website](https://taide.tw/index)
|
RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf | RichardErkhov | 2024-05-30T13:14:42Z | 485 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T10:34:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Dolphin2.1-OpenOrca-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/Dolphin2.1-OpenOrca-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Dolphin2.1-OpenOrca-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Dolphin2.1-OpenOrca-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Dolphin2.1-OpenOrca-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Dolphin2.1-OpenOrca-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Dolphin2.1-OpenOrca-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Dolphin2.1-OpenOrca-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Dolphin2.1-OpenOrca-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Dolphin2.1-OpenOrca-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Dolphin2.1-OpenOrca-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Dolphin2.1-OpenOrca-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Dolphin2.1-OpenOrca-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Dolphin2.1-OpenOrca-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Dolphin2.1-OpenOrca-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Dolphin2.1-OpenOrca-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Dolphin2.1-OpenOrca-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Dolphin2.1-OpenOrca-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Dolphin2.1-OpenOrca-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Dolphin2.1-OpenOrca-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Dolphin2.1-OpenOrca-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Dolphin2.1-OpenOrca-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Dolphin2.1-OpenOrca-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Dolphin2.1-OpenOrca-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin2.1-OpenOrca-7B-gguf/blob/main/Dolphin2.1-OpenOrca-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
model-index:
- name: Dolphin2.1-OpenOrca-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ)
##### GGUF
- [TheBloke/Dolphin2.1-OpenOrca-7B-GGUF](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF)
##### AWQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-AWQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin2.1-OpenOrca-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.47|
|AI2 Reasoning Challenge (25-Shot)|63.91|
|HellaSwag (10-Shot) |84.26|
|MMLU (5-Shot) |62.66|
|TruthfulQA (0-shot) |53.84|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |19.94|
|
RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf | RichardErkhov | 2024-05-31T07:01:40Z | 485 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T04:53:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
kullama2-7b-platypus-kogpt4 - GGUF
- Model creator: https://huggingface.co/devhyun88/
- Original model: https://huggingface.co/devhyun88/kullama2-7b-platypus-kogpt4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [kullama2-7b-platypus-kogpt4.Q2_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q2_K.gguf) | Q2_K | 2.42GB |
| [kullama2-7b-platypus-kogpt4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.IQ3_XS.gguf) | IQ3_XS | 2.67GB |
| [kullama2-7b-platypus-kogpt4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.IQ3_S.gguf) | IQ3_S | 0.4GB |
| [kullama2-7b-platypus-kogpt4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q3_K_S.gguf) | Q3_K_S | 0.28GB |
| [kullama2-7b-platypus-kogpt4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.IQ3_M.gguf) | IQ3_M | 0.47GB |
| [kullama2-7b-platypus-kogpt4.Q3_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q3_K.gguf) | Q3_K | 0.37GB |
| [kullama2-7b-platypus-kogpt4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q3_K_M.gguf) | Q3_K_M | 0.27GB |
| [kullama2-7b-platypus-kogpt4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q3_K_L.gguf) | Q3_K_L | 0.19GB |
| [kullama2-7b-platypus-kogpt4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.IQ4_XS.gguf) | IQ4_XS | 0.36GB |
| [kullama2-7b-platypus-kogpt4.Q4_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q4_0.gguf) | Q4_0 | 1.28GB |
| [kullama2-7b-platypus-kogpt4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.IQ4_NL.gguf) | IQ4_NL | 0.32GB |
| [kullama2-7b-platypus-kogpt4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q4_K_S.gguf) | Q4_K_S | 0.24GB |
| [kullama2-7b-platypus-kogpt4.Q4_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q4_K.gguf) | Q4_K | 0.13GB |
| [kullama2-7b-platypus-kogpt4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [kullama2-7b-platypus-kogpt4.Q4_1.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q4_1.gguf) | Q4_1 | 0.1GB |
| [kullama2-7b-platypus-kogpt4.Q5_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q5_0.gguf) | Q5_0 | 4.42GB |
| [kullama2-7b-platypus-kogpt4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q5_K_S.gguf) | Q5_K_S | 2.02GB |
| [kullama2-7b-platypus-kogpt4.Q5_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q5_K.gguf) | Q5_K | 4.54GB |
| [kullama2-7b-platypus-kogpt4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q5_K_M.gguf) | Q5_K_M | 4.54GB |
| [kullama2-7b-platypus-kogpt4.Q5_1.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q5_1.gguf) | Q5_1 | 4.8GB |
| [kullama2-7b-platypus-kogpt4.Q6_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q6_K.gguf) | Q6_K | 5.24GB |
| [kullama2-7b-platypus-kogpt4.Q8_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_kullama2-7b-platypus-kogpt4-gguf/blob/main/kullama2-7b-platypus-kogpt4.Q8_0.gguf) | Q8_0 | 6.79GB |
Original model description:
|
crystalkalem/SOLARC-MOE-10.7Bx4-GGUF | crystalkalem | 2024-06-03T11:18:57Z | 485 | 0 | transformers | [
"transformers",
"gguf",
"mixtral",
"text-generation",
"ko",
"base_model:DopeorNope/SOLARC-MOE-10.7Bx4",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T07:43:10Z | ---
base_model: DopeorNope/SOLARC-MOE-10.7Bx4
inference: true
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
model_creator: Seungyoo Lee
model_name: Solarc MOE 10.7Bx4
model_type: mixtral
pipeline_tag: text-generation
prompt_template: |
### User:
{prompt}
### Assistant:
quantized_by: TheBloke
---
## Description
I re-uploaded one of the smaller versions of this model originally posted by TheBloke.
The original can be found here: **https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF**
This repo contains GGUF format model files for [Seungyoo Lee's Solarc MOE 10.7Bx4](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). |
EPFL-VILAB/4M_tokenizers_DINOv2-B14_8k_224-448 | EPFL-VILAB | 2024-06-14T08:22:40Z | 485 | 1 | ml-4m | [
"ml-4m",
"safetensors",
"arxiv:2312.06647",
"arxiv:2406.09406",
"license:other",
"region:us"
]
| null | 2024-06-12T08:49:49Z | ---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---
# 4M: Massively Multimodal Masked Modeling
*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*
[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)
Official implementation and pre-trained models for :
[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
[**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br>
*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities.
Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).
## Installation
For install instructions, please see https://github.com/apple/ml-4m.
## Usage
The DINOv2-B/14 feature map tokenizer can be loaded from Hugging Face Hub as follows:
```python
from fourm.vq.vqvae import VQVAE
tok_dinov2 = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_DINOv2-B14_8k_224-448')
```
Please see https://github.com/apple/ml-4m/blob/main/README_TOKENIZATION.md for more detailed instructions and https://github.com/apple/ml-4m for other tokenizer and 4M model checkpoints.
## Citation
If you find this repository helpful, please consider citing our work:
```
@inproceedings{4m,
title={{4M}: Massively Multimodal Masked Modeling},
author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
}
@article{4m21,
title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
journal={arXiv 2024},
year={2024},
}
```
## License
The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file. |
mradermacher/Eurstoria-106B-i1-GGUF | mradermacher | 2024-06-17T05:06:30Z | 485 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:KaraKaraWitch/Eurstoria-106B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-16T10:28:38Z | ---
base_model: KaraKaraWitch/Eurstoria-106B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/KaraKaraWitch/Eurstoria-106B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eurstoria-106B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
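For the larger quants below that are split into parts (for example `Eurstoria-106B.i1-Q4_K_M.gguf.part1of2` and `.part2of2`), the parts just need to be joined into one file before loading. A minimal, cross-platform Python sketch (filenames are examples taken from the table below; adjust to the quant you downloaded):

```python
# Concatenate split GGUF parts into a single file before loading.
import shutil

parts = [
    "Eurstoria-106B.i1-Q4_K_M.gguf.part1of2",
    "Eurstoria-106B.i1-Q4_K_M.gguf.part2of2",
]

with open("Eurstoria-106B.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy, avoids loading 60+ GB into RAM
```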
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ1_S.gguf) | i1-IQ1_S | 22.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ1_M.gguf) | i1-IQ1_M | 25.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 31.8 | |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ2_S.gguf) | i1-IQ2_S | 33.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ2_M.gguf) | i1-IQ2_M | 36.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q2_K.gguf) | i1-Q2_K | 39.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 41.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 44.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 46.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ3_S.gguf) | i1-IQ3_S | 46.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ3_M.gguf) | i1-IQ3_M | 48.0 | |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 51.5 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 56.0 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 57.2 | |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 60.5 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 60.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 64.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 73.5 | |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 75.4 | |
| [PART 1](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurstoria-106B-i1-GGUF/resolve/main/Eurstoria-106B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 87.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
MaziyarPanahi/mergekit-passthrough-lkwyfft-GGUF | MaziyarPanahi | 2024-06-16T22:46:32Z | 485 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:saishf/Fimbulvetr-Kuro-Lotus-10.7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-passthrough-lkwyfft"
]
| text-generation | 2024-06-16T22:14:34Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- mergekit
- merge
- base_model:saishf/Fimbulvetr-Kuro-Lotus-10.7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-passthrough-lkwyfft-GGUF
base_model: mergekit-community/mergekit-passthrough-lkwyfft
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-passthrough-lkwyfft-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-passthrough-lkwyfft-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-passthrough-lkwyfft](https://huggingface.co/mergekit-community/mergekit-passthrough-lkwyfft)
## Description
[MaziyarPanahi/mergekit-passthrough-lkwyfft-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-passthrough-lkwyfft-GGUF) contains GGUF format model files for [mergekit-community/mergekit-passthrough-lkwyfft](https://huggingface.co/mergekit-community/mergekit-passthrough-lkwyfft).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
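As a quick-start illustration, one of the GGUF files from this repo could be loaded with llama-cpp-python roughly as follows. This is only a sketch: the filename below is a placeholder and should be replaced with whichever quant you actually download from this repo.

```python
from llama_cpp import Llama

# Filename is a placeholder - substitute the GGUF file you downloaded from this repo.
llm = Llama(
    model_path="mergekit-passthrough-lkwyfft.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

print(llm("Once upon a time", max_tokens=64)["choices"][0]["text"])
```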
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
YenJung/CPE_chatbot | YenJung | 2024-06-20T04:52:01Z | 485 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-19T14:20:27Z | Entry not found |
Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge | Fischerboot | 2024-06-22T02:13:52Z | 485 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-22T02:05:16Z | ---
base_model:
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP) + [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP+Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA
merge_method: passthrough
dtype: bfloat16
```
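Since the merge produces a standard Llama-architecture checkpoint, it can be loaded like any other causal LM with Hugging Face Transformers. A minimal sketch (generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```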
|
Helsinki-NLP/opus-mt-hu-de | Helsinki-NLP | 2023-08-16T11:57:54Z | 484 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hu-de
* source languages: hu
* target languages: de
* OPUS readme: [hu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.de | 44.1 | 0.637 |
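For quick inference, the checkpoint can be used with the Hugging Face Transformers translation pipeline. A minimal sketch, assuming `transformers` and `sentencepiece` are installed:

```python
from transformers import pipeline

# Hungarian -> German translation with the MarianMT checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-de")
print(translator("Szeretek könyveket olvasni.")[0]["translation_text"])
```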
|
Wikidepia/wav2vec2-xls-r-300m-indonesian | Wikidepia | 2022-03-23T18:26:42Z | 484 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"id",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- id
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: XLS-R-300M - Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: id
metrics:
- name: Test WER
type: wer
value: 5.046
- name: Test CER
type: cer
value: 1.699
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: id
metrics:
- name: Test WER
type: wer
value: 41.31
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: id
metrics:
- name: Test WER
type: wer
value: 52.23
---
# Wav2Vec2 XLS-R-300M - Indonesian
This model is a fine-tuned version of `facebook/wav2vec2-xls-r-300m` on the `mozilla-foundation/common_voice_8_0` and [MagicHub Indonesian Conversational Speech Corpus](https://magichub.com/datasets/indonesian-conversational-speech-corpus/).
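A minimal usage sketch with the Transformers automatic-speech-recognition pipeline (the audio path is a placeholder, and ffmpeg is assumed to be available for decoding; XLS-R models typically expect 16 kHz audio):

```python
from transformers import pipeline

# CTC-based Indonesian speech recognition
asr = pipeline("automatic-speech-recognition", model="Wikidepia/wav2vec2-xls-r-300m-indonesian")
print(asr("sample_indonesian.wav")["text"])  # path is a placeholder
```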
|
nllg/bygpt5-medium-en | nllg | 2022-10-05T11:34:35Z | 484 | 0 | transformers | [
"transformers",
"pytorch",
"bygpt5",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-05T09:50:51Z | Entry not found |
digiplay/YabaLMixTrue25D_V1.0 | digiplay | 2023-11-01T14:29:10Z | 484 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-06-09T01:38:26Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/60093/yabalmix-true25d-v10
Sample image:

|
sail-rvc/Adele__RVC_-_400_Epochs_ | sail-rvc | 2023-07-14T07:17:42Z | 484 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:17:25Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Adele__RVC_-_400_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:17:42
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
TheBloke/airoboros-l2-7B-2.2.1-GGUF | TheBloke | 2023-09-27T12:54:11Z | 484 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-2.2.1",
"base_model:jondurbin/airoboros-l2-7b-2.2.1",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-23T21:04:01Z | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2.1
model_name: Airoboros L2 7B 2.2.1
base_model: jondurbin/airoboros-l2-7b-2.2.1
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 7B 2.2.1 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 7B 2.2.1](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 7B 2.2.1](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-7b-2.2.1.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-7b-2.2.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [airoboros-l2-7b-2.2.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [airoboros-l2-7b-2.2.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [airoboros-l2-7b-2.2.1.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-7b-2.2.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [airoboros-l2-7b-2.2.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [airoboros-l2-7b-2.2.1.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-7b-2.2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [airoboros-l2-7b-2.2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [airoboros-l2-7b-2.2.1.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [airoboros-l2-7b-2.2.1.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-2.2.1-GGUF/blob/main/airoboros-l2-7b-2.2.1.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-l2-7B-2.2.1-GGUF and below it, a specific filename to download, such as: airoboros-l2-7b-2.2.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-l2-7B-2.2.1-GGUF airoboros-l2-7b-2.2.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-l2-7B-2.2.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-l2-7B-2.2.1-GGUF airoboros-l2-7b-2.2.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-l2-7b-2.2.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-l2-7B-2.2.1-GGUF", model_file="airoboros-l2-7b-2.2.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
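Following those guides, a minimal LangChain sketch using llama-cpp-python might look like this (import paths follow current `langchain-community` releases and may differ for older versions; the GGUF filename assumes the Q4_K_M file downloaded as shown above):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="airoboros-l2-7b-2.2.1.Q4_K_M.gguf",
    n_ctx=4096,
    temperature=0.7,
)

# Prompt follows the "A chat." template described above
prompt = "A chat.\nUSER: Write a haiku about llamas.\nASSISTANT: "
print(llm.invoke(prompt))
```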
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 7B 2.2.1
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
This is essentially a minor "fix" branch of [airoboros-l2-7b-2.2](https://hf.co/jondurbin/airoboros-l2-7b-2.2) with a few updates, primarily:
- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts; however, the training data also included a wide sampling of responses with other system prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And a chat scenario that doesn't require USER/ASSISTANT (but should still use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
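For example, with llama-cpp-python the stop strings can be passed directly at generation time. A minimal sketch using one of the GGUF files provided above:

```python
from llama_cpp import Llama

llm = Llama(model_path="airoboros-l2-7b-2.2.1.Q4_K_M.gguf", n_ctx=4096)

prompt = "A chat.\nUSER: Give me three facts about llamas.\nASSISTANT: "
out = llm(prompt, max_tokens=256, stop=["USER:"])  # stop before the model speaks on your behalf
print(out["choices"][0]["text"])
```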
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-l2-7b-2.2.1/runs/ka6jlcj7?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
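Since the delimiters are fixed strings, the closed-context prompt can be assembled programmatically. A small helper sketch (the function name and structure are illustrative, not part of the original tooling):

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt from (metadata, text) pairs.

    `blocks` is a list of (metadata_dict, text) tuples; `instruction` holds the question(s).
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT\nBEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```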
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
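On the application side, the JSON output then needs to be parsed and dispatched to a matching implementation. A minimal dispatcher sketch (the function registry and the `count_occurrences` logic are illustrative stand-ins):

```python
import json

SAMPLE_TEXT = "Python is great. I use Python every day."  # stand-in for the user's text file

def file_analytics(action, filters):
    """Illustrative implementation of the 'file_analytics' tool described in the prompt."""
    if action == "count_occurrences":
        return SAMPLE_TEXT.count(filters["keyword"])
    raise ValueError(f"Unsupported action: {action}")

FUNCTIONS = {"file_analytics": file_analytics}

# `response` stands in for the model's raw JSON output shown above.
response = '{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
call = json.loads(response)
print(FUNCTIONS[call["function"]](**call["params"]))  # -> 2
```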
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions yourself!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested and off the top of my head, and it would obviously require full implementation + hardening:
```python
import re
import requests


def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string
    # ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
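As a quick illustration, the parser above can be exercised on a hard-coded plan string. This is a hypothetical smoke test only; it will not produce a real answer until the stub functions (search, scraping, model inference) are implemented:
```python
# Hypothetical smoke test for parse_plan; the plan text is a trimmed version of the
# example response above, and DuckDuckGo/KnowledgeModel are still unimplemented stubs.
plan_text = """Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Answer: :evidence0:"""

print(parse_plan(plan_text))  # prints None until duckduckgo() returns real content
```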
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
ostris/super-cereal-sdxl-lora | ostris | 2023-10-10T23:38:25Z | 484 | 34 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"concept",
"comedy",
"cereal box",
"cereal",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-10-10T23:19:01Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- concept
- comedy
- cereal box
- cereal
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: " boogers, free tissue inside"
- text: " star wars wookie bits, free lightsaber inside"
- text: " kitty litter crunch"
- text: " t bone steak"
- text: " black plague, free death inside"
- text: " barbie and ken"
- text: " boiled eggs"
- text: " raw bacon"
- text: " herpes"
- text: " pickles"
---
# Super Cereal - SDXL LoRA

> boogers, free tissue inside
<p>Multiplier of 0.9 - 1.1 works well on SDXL base. Simple prompts tend to work well. No trigger word needed. <br /><br />Special thanks to Huggingface for the GPU grant.</p>
## Image examples for the model:

> star wars wookie bits, free lightsaber inside

> kitty litter crunch

> t bone steak

> black plague, free death inside

> barbie and ken

> boiled eggs

> raw bacon

> herpes

> pickles
|
maddes8cht/acrastt-Akins-3B-gguf | maddes8cht | 2023-11-24T16:10:08Z | 484 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:Norquinal/claude_multiround_chat_1k",
"arxiv:2305.14314",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-15T19:40:30Z | ---
license: cc-by-sa-4.0
datasets:
- Norquinal/claude_multiround_chat_1k
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# Akins-3B - GGUF
- Model creator: [acrastt](https://huggingface.co/acrastt)
- Original model: [Akins-3B](https://huggingface.co/acrastt/Akins-3B)
# StableLM
This is a model based on StableLM.
StableLM is a family of language models by Stability AI.
## Note:
Current (as of 2023-11-15) implementations of llama.cpp only support GPU offloading of up to 34 layers with these StableLM models.
The model will crash immediately if -ngl is larger than 34.
The model works fine, however, without any GPU acceleration.
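For reference, here is a minimal llama-cpp-python sketch that respects this limit. The file name and prompt are assumptions; substitute any of the quantized files from this repository and the prompt template from the original model card below:
```python
# Minimal sketch with llama-cpp-python; n_gpu_layers stays at 34 per the note above.
from llama_cpp import Llama

llm = Llama(
    model_path="./Akins-3B.Q4_K_M.gguf",  # assumed file name
    n_gpu_layers=34,
    n_ctx=2048,
)
out = llm("USER: What is the capital of France?\nASSISTANT:", max_tokens=64)
print(out["choices"][0]["text"])
```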
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you will likely find it hard to discern any quality difference from the original model; asking the model the same question twice can produce bigger differences than the quantization itself.
---
# Original Model Card:
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t)(Licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).) instruction tuned on [Claude Multiround Chat 1K](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) for 2 epochs with QLoRA([2305.14314](https://arxiv.org/abs/2305.14314)).
Prompt template:
```
USER: {prompt}
ASSISTANT:
```
GPTQ quantizations available [here](https://huggingface.co/TheBloke/Akins-3B-GPTQ).
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
mradermacher/Euryale-1.3-Small-7B-i1-GGUF | mradermacher | 2024-05-10T16:14:23Z | 484 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ObsidianLite/Euryale-1.3-Small-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-09T13:17:45Z | ---
base_model: ObsidianLite/Euryale-1.3-Small-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ObsidianLite/Euryale-1.3-Small-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
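For the multi-part files specifically, a rough Python sketch of the concatenation step might look like this; the part-file naming here is an assumption, so check the actual file names in the repository (single-part quants need no concatenation at all):
```python
# Concatenate multi-part GGUF downloads into a single file before loading.
import glob
import shutil

parts = sorted(glob.glob("Euryale-1.3-Small-7B.i1-Q4_K_M.gguf.part*"))  # assumed naming
with open("Euryale-1.3-Small-7B.i1-Q4_K_M.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```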
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-1.3-Small-7B-i1-GGUF/resolve/main/Euryale-1.3-Small-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nbeerbower/llama3-KawaiiMahouSauce-8B | nbeerbower | 2024-05-15T02:16:56Z | 484 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nbeerbower/KawaiiMahou-llama3-8B",
"base_model:nbeerbower/llama-3-sauce-v2-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-13T17:13:42Z | ---
base_model:
- nbeerbower/KawaiiMahou-llama3-8B
- nbeerbower/llama-3-sauce-v2-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# llama-3-KawaiiMahouSauce-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
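For intuition, spherical linear interpolation between two weight tensors can be sketched as follows; this is a simplified illustration, not mergekit's actual implementation:
```python
# Simplified SLERP between two parameter tensors: interpolate along the arc between
# them rather than along the straight line used by plain weight averaging.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    v0_unit = v0 / (v0.norm() + eps)
    v1_unit = v1 / (v1.norm() + eps)
    omega = torch.arccos((v0_unit * v1_unit).sum().clamp(-1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (torch.sin((1.0 - t) * omega) / sin_omega) * v0 + (torch.sin(t * omega) / sin_omega) * v1
```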
### Models Merged
The following models were included in the merge:
* [nbeerbower/KawaiiMahou-llama3-8B](https://huggingface.co/nbeerbower/KawaiiMahou-llama3-8B)
* [nbeerbower/llama-3-sauce-v2-8B](https://huggingface.co/nbeerbower/llama-3-sauce-v2-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/KawaiiMahou-llama3-8B
layer_range: [0, 32]
- model: nbeerbower/llama-3-sauce-v2-8B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/llama-3-sauce-v2-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
benchang1110/Taiwan-tinyllama-v1.0-chat | benchang1110 | 2024-05-23T03:14:14Z | 484 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"dataset:benchang1110/ChatTaiwan",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-21T11:49:42Z | ---
language:
- zh
license: apache-2.0
datasets:
- benchang1110/ChatTaiwan
pipeline_tag: text-generation
widget:
- example_title: 範例一
messages:
- role: user
content: 你好
---
## Model Card for Model ID
This model is the instruction-finetuned version of [benchang1110/Taiwan-tinyllama-v1.0-base](https://huggingface.co/benchang1110/Taiwan-tinyllama-v1.0-base).
## Usage
```python
import torch, transformers
def generate_response():
model = transformers.AutoModelForCausalLM.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat", torch_dtype=torch.bfloat16, device_map=device,attn_implementation="flash_attention_2")
tokenizer = transformers.AutoTokenizer.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat")
streamer = transformers.TextStreamer(tokenizer,skip_prompt=True)
while(1):
prompt = input('USER:')
if prompt == "exit":
break
print("Assistant: ")
message = [
{'content': prompt, 'role': 'user'},
]
untokenized_chat = tokenizer.apply_chat_template(message,tokenize=False,add_generation_prompt=False)
inputs = tokenizer.encode_plus(untokenized_chat, add_special_tokens=True, return_tensors="pt",return_attention_mask=True).to(device)
outputs = model.generate(inputs["input_ids"],attention_mask=inputs['attention_mask'],streamer=streamer,use_cache=True,max_new_tokens=512,do_sample=True,temperature=0.1,repetition_penalty=1.2)
if __name__ == '__main__':
device = 'cuda' if torch.cuda.is_available() else 'cpu'
generate_response()
``` |
visheratin/nllb-siglip-i18n | visheratin | 2024-06-03T03:14:32Z | 484 | 0 | open_clip | [
"open_clip",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2024-06-03T03:01:54Z | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for nllb-siglip-i18n
|
aipib/llmjp-dareties | aipib | 2024-06-25T05:12:04Z | 484 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gpt2",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kcoopermiller/llm-jp-1.3b-v1.0-aya",
"llm-jp/llm-jp-1.3b-v1.0",
"base_model:kcoopermiller/llm-jp-1.3b-v1.0-aya",
"base_model:llm-jp/llm-jp-1.3b-v1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-21T08:34:52Z | ---
base_model:
- kcoopermiller/llm-jp-1.3b-v1.0-aya
- llm-jp/llm-jp-1.3b-v1.0
tags:
- merge
- mergekit
- lazymergekit
- kcoopermiller/llm-jp-1.3b-v1.0-aya
- llm-jp/llm-jp-1.3b-v1.0
---
# llmjp-dareties
llmjp-dareties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kcoopermiller/llm-jp-1.3b-v1.0-aya](https://huggingface.co/kcoopermiller/llm-jp-1.3b-v1.0-aya)
* [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/llmjp-dareties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
NousResearch/Hermes-2-Pro-Llama-3-70B | NousResearch | 2024-06-27T01:23:33Z | 484 | 20 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-70B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-25T05:07:21Z | ---
base_model: NousResearch/Meta-Llama-3-70B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
model-index:
- name: Hermes-2-Pro-Llama-3-70B
results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
---
# Hermes 2 Pro - Llama-3 70B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
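If you want to verify this with the tokenizer, a quick (hypothetical) check could look like the following; each tag should encode to a single token id:
```python
# Confirm the agentic tags are single tokens in the tokenizer vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-70B")
for tag in ["<tools>", "</tools>", "<tool_call>", "</tool_call>", "<tool_response>", "</tool_response>"]:
    print(tag, tokenizer.encode(tag, add_special_tokens=False))
```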
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
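For example, a minimal sketch continuing from the snippet above:
```python
# Append the assistant prefix so the model generates the next assistant turn.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```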
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
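A minimal (hypothetical) parser for these `<tool_call>` blocks might look like the sketch below; the official implementation lives in the Hermes-Function-Calling repo linked above:
```python
# Extract every <tool_call> JSON payload from the generated text.
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    return [
        json.loads(block)
        for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL)
    ]

example = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(extract_tool_calls(example))
```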
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
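A rough sketch of producing that schema string with pydantic (v2 API assumed; the `Answer` model here is purely an illustration, not part of the official codebase):
```python
# Build the <schema> block from a pydantic model and splice it into the system prompt.
import json
from pydantic import BaseModel

class Answer(BaseModel):  # hypothetical example schema
    title: str
    year: int

schema = json.dumps(Answer.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
print(system_prompt)
```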
# Benchmarks
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6553|_ |0.0139|
| | |acc_norm|0.6655|_ |0.0138|
|arc_easy | 0|acc |0.8847|_ |0.0066|
| | |acc_norm|0.8641|_ |0.0070|
|boolq | 1|acc |0.8783|_ |0.0057|
|hellaswag | 0|acc |0.6827|_ |0.0046|
| | |acc_norm|0.8624|_ |0.0034|
|openbookqa | 0|acc |0.4060|_ |0.0220|
| | |acc_norm|0.4860|_ |0.0224|
|piqa | 0|acc |0.8297|_ |0.0088|
| | |acc_norm|0.8428|_ |0.0085|
|winogrande | 0|acc |0.8224|_ |0.0107|
```
Average: 77.45
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.3150|_ |0.0292|
| | |acc_norm|0.2913|_ |0.0286|
|agieval_logiqa_en | 0|acc |0.5146|_ |0.0196|
| | |acc_norm|0.4900|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2478|_ |0.0285|
| | |acc_norm|0.2478|_ |0.0285|
|agieval_lsat_lr | 0|acc |0.7941|_ |0.0179|
| | |acc_norm|0.7686|_ |0.0187|
|agieval_lsat_rc | 0|acc |0.8178|_ |0.0236|
| | |acc_norm|0.7955|_ |0.0246|
|agieval_sat_en | 0|acc |0.8932|_ |0.0216|
| | |acc_norm|0.8932|_ |0.0216|
|agieval_sat_en_without_passage| 0|acc |0.5631|_ |0.0346|
| | |acc_norm|0.5437|_ |0.0348|
|agieval_sat_math | 0|acc |0.6136|_ |0.0329|
| | |acc_norm|0.5545|_ |0.0336|
```
Average: 57.31
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6632|_ |0.0344|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7615|_ |0.0222|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3295|_ |0.0293|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4178|_ |0.0261|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4080|_ |0.0220|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2886|_ |0.0171|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5467|_ |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4800|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.4990|_ |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8410|_ |0.0082|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7031|_ |0.0216|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5441|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.7348|_ |0.0329|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5791|_ |0.0157|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9630|_ |0.0060|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2272|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1606|_ |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5467|_ |0.0288|
```
Average: 53.86
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4688|_ |0.0175|
| | |mc2 |0.6533|_ |0.0146|
```
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, a 70B model will require a substantial amount of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-70B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions *will soon be* Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-70B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Pro-Llama-3-70B,
url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-70B},
title={Hermes-2-Pro-Llama-3-70B},
author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
``` |
Helsinki-NLP/opus-mt-fi-de | Helsinki-NLP | 2023-08-16T11:34:22Z | 483 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-de
* source languages: fi
* target languages: de
* OPUS readme: [fi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.eval.txt)
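A minimal usage sketch with the Transformers MarianMT classes (the example sentence is arbitrary):
```python
# Translate Finnish to German with the standard MarianMT loading path.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```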
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.de | 45.2 | 0.637 |
|
facebook/convnext-large-384 | facebook | 2023-09-04T21:22:21Z | 483 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"convnext",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (large-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-384")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
impyadav/GPT2-FineTuned-Hinglish-Song-Generation | impyadav | 2022-01-03T11:33:54Z | 483 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | GPT-2 model fine-tuned on a custom set of old Hindi (Hinglish) songs for the text-generation task (AI Lyricist).
Languages:
- Hindi
- Hinglish
|
infinitejoy/wav2vec2-large-xls-r-300m-armenian | infinitejoy | 2022-03-24T11:55:39Z | 483 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Armenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: hy-AM
metrics:
- name: Test WER
type: wer
value: 101.627
- name: Test CER
type: cer
value: 158.767
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-armenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9669
- Wer: 0.6942
## Model description
More information needed
## Intended uses & limitations
More information needed
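As a starting point, transcription can be sketched with the ASR pipeline; the audio path below is a placeholder, and Common Voice audio should be resampled to 16 kHz mono:
```python
# Transcribe an Armenian audio file with the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="infinitejoy/wav2vec2-large-xls-r-300m-armenian")
print(asr("sample_hy.wav")["text"])  # "sample_hy.wav" is a hypothetical path
```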
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.7294 | 27.78 | 500 | 0.8540 | 0.9944 |
| 0.8863 | 55.56 | 1000 | 0.7282 | 0.7312 |
| 0.5789 | 83.33 | 1500 | 0.8178 | 0.8102 |
| 0.3899 | 111.11 | 2000 | 0.8034 | 0.7701 |
| 0.2869 | 138.89 | 2500 | 0.9061 | 0.6999 |
| 0.1934 | 166.67 | 3000 | 0.9400 | 0.7105 |
| 0.1551 | 194.44 | 3500 | 0.9667 | 0.6955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
facebook/esm1v_t33_650M_UR90S_2 | facebook | 2022-11-16T13:30:15Z | 483 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-17T15:28:34Z | Entry not found |
timm/tinynet_d.in1k | timm | 2023-04-27T21:50:29Z | 483 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2010.14819",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-13T00:22:20Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tinynet_d.in1k
A TinyNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 2.3
- GMACs: 0.1
- Activations (M): 1.4
- Image size: 152 x 152
- **Papers:**
- Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tinynet_d.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_d.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 8, 76, 76])
# torch.Size([1, 16, 38, 38])
# torch.Size([1, 24, 19, 19])
# torch.Size([1, 64, 10, 10])
# torch.Size([1, 176, 5, 5])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_d.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 5, 5) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{han2020model,
title={Model rubik’s cube: Twisting resolution, depth and width for tinynets},
author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
journal={Advances in Neural Information Processing Systems},
volume={33},
pages={19353--19364},
year={2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Karumoon/test00a1 | Karumoon | 2023-03-10T11:39:39Z | 483 | 1 | diffusers | [
"diffusers",
"art",
"text-to-image",
"en",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-21T22:47:55Z | ---
pipeline_tag: text-to-image
library_name: diffusers
datasets:
- Gustavosta/Stable-Diffusion-Prompts
language:
- en
tags:
- art
license: openrail
--- |
DucHaiten/DucHaitenJourney | DucHaiten | 2023-06-15T12:58:48Z | 483 | 8 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-12T16:01:27Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
license: creativeml-openrail-m
inference: true
---
Recommended settings: DPM++ 2S a Karras sampler, CFG scale 10.
The model works best at larger resolutions such as 768x768; 512x512 will give poor quality.
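A rough diffusers sketch matching these settings (DPMSolverSinglestepScheduler with Karras sigmas is the closest built-in analogue to DPM++ 2S a Karras; the prompt is arbitrary, and the full negative prompt below can be passed via `negative_prompt`):
```python
# Load the model, switch to a DPM++-style scheduler, and render at 768x768 with CFG 10.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenJourney", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    "portrait of a knight in ornate armor, detailed digital illustration",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality",
    guidance_scale=10,
    height=768,
    width=768,
).images[0]
image.save("knight.png")
```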
negative prompt:
illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error |
Yntec/iComixRemix | Yntec | 2023-08-09T09:04:37Z | 483 | 3 | diffusers | [
"diffusers",
"safetensors",
"anime",
"art",
"comic",
"lostdog",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-09T08:36:40Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- art
- comic
- lostdog
---
# iComixRemix
A mix of iCoMix v2 and iCoMix v4.
Preview samples and prompt:


cartoon pretty cute girl, detailed chibi eyes, gorgeous detailed hair, beautiful detailed, looking at hundreds of large technics dj table octoberfest, large pint glass behind a table, octoberfest, strudels and birthday presents surrounded by presents, photoshoot, 4 k, hyper realistic, natural, highly detailed, digital illustration
Original pages:
https://civitai.com/models/16164?modelVersionId=21278
https://civitai.com/models/16164?modelVersionId=43844
|
dacorvo/mnist-mlp | dacorvo | 2023-10-04T15:05:18Z | 483 | 0 | transformers | [
"transformers",
"pytorch",
"mlp",
"feature-extraction",
"pretrained",
"image-classification",
"custom_code",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-10-04T15:05:17Z | ---
license: apache-2.0
pipeline_tag: image-classification
tags:
- pretrained
---
# Model Card for MNIST-MLP
This is a simple MLP trained on the MNIST dataset.
Its primary use is to be a very simple reference model to test quantization.
## Inputs preprocessing
The MNIST images must be normalized and flattened as follows:
```
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
    transforms.Lambda(lambda x: torch.flatten(x)),
])
test_set = datasets.MNIST('../data', train=False, download=True,
                          transform=transform)
```
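A hedged sketch of running the model on one preprocessed test image; the architecture is custom, so `trust_remote_code=True` is needed, and the exact output format is an assumption:
```python
# Load the custom MLP and classify a single flattened MNIST digit.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("dacorvo/mnist-mlp", trust_remote_code=True)
model.eval()

image, label = test_set[0]  # 784-dim normalized tensor from the transform above
with torch.no_grad():
    logits = model(image.unsqueeze(0))  # output shape/format assumed to be raw logits
print("predicted:", logits.argmax(-1).item(), "label:", label)
```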
|
TheBloke/sheep-duck-llama-2-13B-GGUF | TheBloke | 2023-10-08T16:32:35Z | 483 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Riiid/sheep-duck-llama-2-13b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-08T16:14:31Z | ---
base_model: Riiid/sheep-duck-llama-2-13b
inference: false
license: llama2
model_creator: Riiid
model_name: Sheep Duck Llama 2 13B
model_type: llama
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sheep Duck Llama 2 13B - GGUF
- Model creator: [Riiid](https://huggingface.co/Riiid)
- Original model: [Sheep Duck Llama 2 13B](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Riiid's Sheep Duck Llama 2 13B](https://huggingface.co/Riiid/sheep-duck-llama-2-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF)
* [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
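For programmatic use, the template can be filled in with a small helper; a minimal sketch (the function name is illustrative):
```python
def build_prompt(system_message: str, prompt: str) -> str:
    # Fill in the Orca-Hashes template this model expects
    return f"### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:\n"

print(build_prompt("You are a helpful assistant.", "Tell me about AI"))
```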
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sheep-duck-llama-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [sheep-duck-llama-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [sheep-duck-llama-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [sheep-duck-llama-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [sheep-duck-llama-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sheep-duck-llama-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [sheep-duck-llama-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [sheep-duck-llama-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sheep-duck-llama-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [sheep-duck-llama-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [sheep-duck-llama-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [sheep-duck-llama-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
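Judging by the table, the maximum RAM estimate is roughly the file size plus about 2.5 GB of overhead. A quick sketch for checking which quants might fit in a given amount of free RAM (the 2.5 GB figure is an assumption read off the table above, and the free RAM value is illustrative):
```python
# Rough rule of thumb from the table above: max RAM ≈ file size + ~2.5 GB
quants_gb = {"Q2_K": 5.43, "Q3_K_M": 6.34, "Q4_K_M": 7.87,
             "Q5_K_M": 9.23, "Q6_K": 10.68, "Q8_0": 13.83}
free_ram_gb = 12.0  # substitute your own value
for name, size in quants_gb.items():
    needed = size + 2.5
    print(f"{name}: ~{needed:.2f} GB needed -> {'fits' if needed <= free_ram_gb else 'too big'}")
```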
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/sheep-duck-llama-2-13B-GGUF and below it, a specific filename to download, such as: sheep-duck-llama-2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF sheep-duck-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF sheep-duck-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m sheep-duck-llama-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/sheep-duck-llama-2-13B-GGUF", model_file="sheep-duck-llama-2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
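The llama-cpp-python route mentioned above works similarly; a minimal sketch (the path and parameter values are illustrative):
```python
from llama_cpp import Llama

# Set n_gpu_layers=0 for CPU-only; raise it to offload layers to the GPU.
llm = Llama(model_path="./sheep-duck-llama-2-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
output = llm(
    "### System:\nYou are a helpful assistant.\n\n### User:\nTell me about AI\n\n### Assistant:\n",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```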
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Riiid's Sheep Duck Llama 2 13B
No original model card was available.
<!-- original-model-card end -->
|
TheBloke/Lewd-Sydney-20B-GGUF | TheBloke | 2023-10-28T07:35:29Z | 483 | 12 | transformers | [
"transformers",
"gguf",
"llama",
"not-for-all-audiences",
"nsfw",
"base_model:Undi95/Lewd-Sydney-20B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-28T07:24:15Z | ---
base_model: Undi95/Lewd-Sydney-20B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Lewd Sydney 20B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- not-for-all-audiences
- nsfw
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Lewd Sydney 20B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Lewd Sydney 20B](https://huggingface.co/Undi95/Lewd-Sydney-20B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi's Lewd Sydney 20B](https://huggingface.co/Undi95/Lewd-Sydney-20B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Lewd-Sydney-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Lewd-Sydney-20B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Lewd Sydney 20B](https://huggingface.co/Undi95/Lewd-Sydney-20B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [lewd-sydney-20b.Q2_K.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes |
| [lewd-sydney-20b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss |
| [lewd-sydney-20b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss |
| [lewd-sydney-20b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss |
| [lewd-sydney-20b.Q4_0.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [lewd-sydney-20b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss |
| [lewd-sydney-20b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended |
| [lewd-sydney-20b.Q5_0.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [lewd-sydney-20b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended |
| [lewd-sydney-20b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended |
| [lewd-sydney-20b.Q6_K.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss |
| [lewd-sydney-20b.Q8_0.gguf](https://huggingface.co/TheBloke/Lewd-Sydney-20B-GGUF/blob/main/lewd-sydney-20b.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Lewd-Sydney-20B-GGUF and below it, a specific filename to download, such as: lewd-sydney-20b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Lewd-Sydney-20B-GGUF lewd-sydney-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Lewd-Sydney-20B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Lewd-Sydney-20B-GGUF lewd-sydney-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m lewd-sydney-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Lewd-Sydney-20B-GGUF", model_file="lewd-sydney-20b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
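To get the best results, the input should be wrapped in the Alpaca template shown above; a minimal sketch building on the ctransformers example (the helper name is illustrative):
```python
def alpaca_prompt(instruction: str) -> str:
    # Wrap the user instruction in the Alpaca template this model expects
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

# `llm` is the ctransformers model loaded in the example above
print(llm(alpaca_prompt("Write a short story about a lighthouse keeper."), max_new_tokens=256))
```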
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi's Lewd Sydney 20B
<div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ppZDyjjZJPGihhckQb5zQ.png" style="width: 40%; min-width: 200px; display: block; margin: auto;">
</div>
This model is based on [Free Sydney V2](https://huggingface.co/FPHam/Free_Sydney_V2_13b_HF), trying to get a... lewder assistant, you get it now.
<!-- description start -->
## Description
This repo contains fp16 files of Lewd-Sydney-20B, an attempt to get our beloved Sydney open to R-18 content.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Free_Sydney_V2_13b_HF](https://huggingface.co/FPHam/Free_Sydney_V2_13b_HF)
- [Undi95/Xwin-MLewd-13B-V0.2](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
- Synthia v1.2 private LoRA
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
If you want to support me, you can [here](https://ko-fi.com/undiai).
<!-- original-model-card end -->
|
TheBloke/phi-2-GPTQ | TheBloke | 2023-12-18T20:26:32Z | 483 | 29 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"nlp",
"code",
"custom_code",
"en",
"base_model:microsoft/phi-2",
"license:other",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-12-14T16:34:10Z | ---
base_model: microsoft/phi-2
inference: false
language:
- en
license: other
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
license_name: microsoft-research-license
model_creator: Microsoft
model_name: Phi 2
model_type: phi-msft
pipeline_tag: text-generation
prompt_template: 'Instruct: {prompt}
Output:
'
quantized_by: TheBloke
tags:
- nlp
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Phi 2 - GPTQ
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [Phi 2](https://huggingface.co/microsoft/phi-2)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Microsoft's Phi 2](https://huggingface.co/microsoft/phi-2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/phi-2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/phi-2-GGUF)
* [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/phi-2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Phi
```
Instruct: {prompt}
Output:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.05 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.28 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
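The quantisation parameters are also recorded in each branch's `quantize_config.json` (referenced again in the text-generation-webui instructions below); a minimal sketch for inspecting it after downloading a branch (the local path is illustrative):
```python
import json

# Assumes the chosen branch has been downloaded to ./phi-2-GPTQ (see download instructions below)
with open("phi-2-GPTQ/quantize_config.json") as f:
    print(json.dumps(json.load(f), indent=2))
```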
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/phi-2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/phi-2-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `phi-2-GPTQ`:
```shell
mkdir phi-2-GPTQ
huggingface-cli download TheBloke/phi-2-GPTQ --local-dir phi-2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir phi-2-GPTQ
huggingface-cli download TheBloke/phi-2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir phi-2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir phi-2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/phi-2-GPTQ --local-dir phi-2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/phi-2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/phi-2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/phi-2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `phi-2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/phi-2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Instruct: {prompt}
Output:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/phi-2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''Instruct: {prompt}
Output:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Microsoft's Phi 2
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## Intended Uses
Phi-2 is intended for research purposes only. Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after the final "." of the question.
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
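For reference, the generated snippet relies on `math` without importing it; with that import added, a quick sanity check looks like this (a sketch):
```python
import math  # needed by the int(math.sqrt(num)) call in the snippet above

print_prime(20)  # prints [2, 3, 5, 7, 11, 13, 17, 19]
```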
**Notes:**
* Phi-2 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
There are four types of execution mode:
1. FP16 / Flash-Attention / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True)
```
2. FP16 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", device_map="cuda", trust_remote_code=True)
```
3. FP32 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True)
```
4. FP32 / CPU:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True)
```
To ensure the maximum compatibility, we recommend using the second execution mode (FP16 / CUDA), as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**Remark:** In the generation function, our model currently does not support beam search (`num_beams > 1`).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings.
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for Code: The majority of Phi-2's training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to ensure training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Because Phi-2 is a base model, it often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is because its training data consists primarily of textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [microsoft-research-license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
sentosa/ZNV-Embedding | sentosa | 2024-03-08T08:52:31Z | 483 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mteb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-29T06:40:00Z | ---
tags:
- mteb
model-index:
- name: data1
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.66919706568301
- type: cos_sim_spearman
value: 53.84074348656974
- type: euclidean_pearson
value: 53.58226184439896
- type: euclidean_spearman
value: 53.84074348656974
- type: manhattan_pearson
value: 53.64565834381205
- type: manhattan_spearman
value: 53.75526003581371
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 58.123744893539495
- type: cos_sim_spearman
value: 54.44277675493291
- type: euclidean_pearson
value: 61.20550691770944
- type: euclidean_spearman
value: 54.44277225170509
- type: manhattan_pearson
value: 60.57835645653918
- type: manhattan_spearman
value: 54.46153709699013
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.746
- type: f1
value: 29.039321522193585
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.7026320728244
- type: cos_sim_spearman
value: 70.57218534128499
- type: euclidean_pearson
value: 69.28488221289881
- type: euclidean_spearman
value: 70.57218534192015
- type: manhattan_pearson
value: 69.65344674392082
- type: manhattan_spearman
value: 70.64136691477553
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 38.87791994762536
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 39.09103599244803
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 80.40249793910444
- type: mrr
value: 82.96805555555555
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 80.39046823499085
- type: mrr
value: 83.22674603174602
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 15.715000000000002
- type: map_at_10
value: 24.651
- type: map_at_100
value: 26.478
- type: map_at_1000
value: 26.648
- type: map_at_3
value: 21.410999999999998
- type: map_at_5
value: 23.233
- type: mrr_at_1
value: 24.806
- type: mrr_at_10
value: 32.336
- type: mrr_at_100
value: 33.493
- type: mrr_at_1000
value: 33.568999999999996
- type: mrr_at_3
value: 29.807
- type: mrr_at_5
value: 31.294
- type: ndcg_at_1
value: 24.806
- type: ndcg_at_10
value: 30.341
- type: ndcg_at_100
value: 38.329
- type: ndcg_at_1000
value: 41.601
- type: ndcg_at_3
value: 25.655
- type: ndcg_at_5
value: 27.758
- type: precision_at_1
value: 24.806
- type: precision_at_10
value: 7.119000000000001
- type: precision_at_100
value: 1.3679999999999999
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 14.787
- type: precision_at_5
value: 11.208
- type: recall_at_1
value: 15.715000000000002
- type: recall_at_10
value: 39.519999999999996
- type: recall_at_100
value: 73.307
- type: recall_at_1000
value: 95.611
- type: recall_at_3
value: 26.026
- type: recall_at_5
value: 32.027
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 66.89116055321708
- type: cos_sim_ap
value: 75.66575745519994
- type: cos_sim_f1
value: 70.2448775612194
- type: cos_sim_precision
value: 61.347765363128495
- type: cos_sim_recall
value: 82.16039279869068
- type: dot_accuracy
value: 66.89116055321708
- type: dot_ap
value: 75.68262052264197
- type: dot_f1
value: 70.2448775612194
- type: dot_precision
value: 61.347765363128495
- type: dot_recall
value: 82.16039279869068
- type: euclidean_accuracy
value: 66.89116055321708
- type: euclidean_ap
value: 75.66576722188334
- type: euclidean_f1
value: 70.2448775612194
- type: euclidean_precision
value: 61.347765363128495
- type: euclidean_recall
value: 82.16039279869068
- type: manhattan_accuracy
value: 67.03547805171377
- type: manhattan_ap
value: 75.78816934864089
- type: manhattan_f1
value: 70.35407081416284
- type: manhattan_precision
value: 61.4752665617899
- type: manhattan_recall
value: 82.23053542202479
- type: max_accuracy
value: 67.03547805171377
- type: max_ap
value: 75.78816934864089
- type: max_f1
value: 70.35407081416284
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 41.57
- type: map_at_10
value: 52.932
- type: map_at_100
value: 53.581999999999994
- type: map_at_1000
value: 53.61900000000001
- type: map_at_3
value: 50.066
- type: map_at_5
value: 51.735
- type: mrr_at_1
value: 41.623
- type: mrr_at_10
value: 52.964999999999996
- type: mrr_at_100
value: 53.6
- type: mrr_at_1000
value: 53.637
- type: mrr_at_3
value: 50.158
- type: mrr_at_5
value: 51.786
- type: ndcg_at_1
value: 41.623
- type: ndcg_at_10
value: 58.55200000000001
- type: ndcg_at_100
value: 61.824999999999996
- type: ndcg_at_1000
value: 62.854
- type: ndcg_at_3
value: 52.729000000000006
- type: ndcg_at_5
value: 55.696999999999996
- type: precision_at_1
value: 41.623
- type: precision_at_10
value: 7.692
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 20.162
- type: precision_at_5
value: 13.572000000000001
- type: recall_at_1
value: 41.57
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 91.728
- type: recall_at_1000
value: 99.895
- type: recall_at_3
value: 60.27400000000001
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.071
- type: map_at_10
value: 65.093
- type: map_at_100
value: 69.097
- type: map_at_1000
value: 69.172
- type: map_at_3
value: 44.568000000000005
- type: map_at_5
value: 56.016999999999996
- type: mrr_at_1
value: 76.35
- type: mrr_at_10
value: 83.721
- type: mrr_at_100
value: 83.899
- type: mrr_at_1000
value: 83.904
- type: mrr_at_3
value: 82.958
- type: mrr_at_5
value: 83.488
- type: ndcg_at_1
value: 76.35
- type: ndcg_at_10
value: 75.05199999999999
- type: ndcg_at_100
value: 80.596
- type: ndcg_at_1000
value: 81.394
- type: ndcg_at_3
value: 73.298
- type: ndcg_at_5
value: 72.149
- type: precision_at_1
value: 76.35
- type: precision_at_10
value: 36.96
- type: precision_at_100
value: 4.688
- type: precision_at_1000
value: 0.48700000000000004
- type: precision_at_3
value: 66.2
- type: precision_at_5
value: 55.81
- type: recall_at_1
value: 21.071
- type: recall_at_10
value: 77.459
- type: recall_at_100
value: 94.425
- type: recall_at_1000
value: 98.631
- type: recall_at_3
value: 48.335
- type: recall_at_5
value: 63.227999999999994
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 36.3
- type: map_at_10
value: 46.888999999999996
- type: map_at_100
value: 47.789
- type: map_at_1000
value: 47.827999999999996
- type: map_at_3
value: 43.85
- type: map_at_5
value: 45.58
- type: mrr_at_1
value: 36.3
- type: mrr_at_10
value: 46.888999999999996
- type: mrr_at_100
value: 47.789
- type: mrr_at_1000
value: 47.827999999999996
- type: mrr_at_3
value: 43.85
- type: mrr_at_5
value: 45.58
- type: ndcg_at_1
value: 36.3
- type: ndcg_at_10
value: 52.539
- type: ndcg_at_100
value: 56.882
- type: ndcg_at_1000
value: 57.841
- type: ndcg_at_3
value: 46.303
- type: ndcg_at_5
value: 49.406
- type: precision_at_1
value: 36.3
- type: precision_at_10
value: 7.049999999999999
- type: precision_at_100
value: 0.907
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 17.8
- type: precision_at_5
value: 12.18
- type: recall_at_1
value: 36.3
- type: recall_at_10
value: 70.5
- type: recall_at_100
value: 90.7
- type: recall_at_1000
value: 98.1
- type: recall_at_3
value: 53.400000000000006
- type: recall_at_5
value: 60.9
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 50.927279722970376
- type: f1
value: 39.57514582425314
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 84.93433395872421
- type: ap
value: 50.35046267230439
- type: f1
value: 78.76452515604298
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 67.40319768112933
- type: cos_sim_spearman
value: 74.9867527749418
- type: euclidean_pearson
value: 74.08762625643878
- type: euclidean_spearman
value: 74.98675720634276
- type: manhattan_pearson
value: 73.86303861791671
- type: manhattan_spearman
value: 75.0594224188492
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
metrics:
- type: map
value: 18.860945903258536
- type: mrr
value: 17.686507936507937
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.16
- type: map_at_10
value: 57.992
- type: map_at_100
value: 58.638
- type: map_at_1000
value: 58.67
- type: map_at_3
value: 55.71
- type: map_at_5
value: 57.04900000000001
- type: mrr_at_1
value: 50.989
- type: mrr_at_10
value: 58.814
- type: mrr_at_100
value: 59.401
- type: mrr_at_1000
value: 59.431
- type: mrr_at_3
value: 56.726
- type: mrr_at_5
value: 57.955
- type: ndcg_at_1
value: 50.989
- type: ndcg_at_10
value: 62.259
- type: ndcg_at_100
value: 65.347
- type: ndcg_at_1000
value: 66.231
- type: ndcg_at_3
value: 57.78
- type: ndcg_at_5
value: 60.09100000000001
- type: precision_at_1
value: 50.989
- type: precision_at_10
value: 7.9479999999999995
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 22.087
- type: precision_at_5
value: 14.479000000000001
- type: recall_at_1
value: 49.16
- type: recall_at_10
value: 74.792
- type: recall_at_100
value: 89.132
- type: recall_at_1000
value: 96.13199999999999
- type: recall_at_3
value: 62.783
- type: recall_at_5
value: 68.26100000000001
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.45796906523202
- type: f1
value: 65.97280169222601
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.59717552118359
- type: f1
value: 72.46681610207507
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 40.5
- type: map_at_10
value: 46.892
- type: map_at_100
value: 47.579
- type: map_at_1000
value: 47.648
- type: map_at_3
value: 45.367000000000004
- type: map_at_5
value: 46.182
- type: mrr_at_1
value: 40.6
- type: mrr_at_10
value: 46.942
- type: mrr_at_100
value: 47.629
- type: mrr_at_1000
value: 47.698
- type: mrr_at_3
value: 45.417
- type: mrr_at_5
value: 46.232
- type: ndcg_at_1
value: 40.5
- type: ndcg_at_10
value: 50.078
- type: ndcg_at_100
value: 53.635999999999996
- type: ndcg_at_1000
value: 55.696999999999996
- type: ndcg_at_3
value: 46.847
- type: ndcg_at_5
value: 48.323
- type: precision_at_1
value: 40.5
- type: precision_at_10
value: 6.02
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 17.033
- type: precision_at_5
value: 10.94
- type: recall_at_1
value: 40.5
- type: recall_at_10
value: 60.199999999999996
- type: recall_at_100
value: 77.3
- type: recall_at_1000
value: 94.0
- type: recall_at_3
value: 51.1
- type: recall_at_5
value: 54.7
- task:
type: Retrieval
dataset:
type: Shitao/MLDR
name: MTEB MultiLongDocRetrieval (zh)
config: zh
split: test
revision: None
metrics:
- type: map_at_1
value: 7.000000000000001
- type: map_at_10
value: 10.020999999999999
- type: map_at_100
value: 10.511
- type: map_at_1000
value: 10.595
- type: map_at_3
value: 9.042
- type: map_at_5
value: 9.654
- type: mrr_at_1
value: 6.875000000000001
- type: mrr_at_10
value: 9.958
- type: mrr_at_100
value: 10.449
- type: mrr_at_1000
value: 10.532
- type: mrr_at_3
value: 8.979
- type: mrr_at_5
value: 9.592
- type: ndcg_at_1
value: 7.000000000000001
- type: ndcg_at_10
value: 11.651
- type: ndcg_at_100
value: 14.580000000000002
- type: ndcg_at_1000
value: 17.183
- type: ndcg_at_3
value: 9.646
- type: ndcg_at_5
value: 10.738
- type: precision_at_1
value: 7.000000000000001
- type: precision_at_10
value: 1.687
- type: precision_at_100
value: 0.319
- type: precision_at_1000
value: 0.053
- type: precision_at_3
value: 3.7920000000000003
- type: precision_at_5
value: 2.8000000000000003
- type: recall_at_1
value: 7.000000000000001
- type: recall_at_10
value: 16.875
- type: recall_at_100
value: 31.874999999999996
- type: recall_at_1000
value: 53.25
- type: recall_at_3
value: 11.375
- type: recall_at_5
value: 14.000000000000002
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 55.90333333333333
- type: f1
value: 55.291185234519546
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 59.01461829994585
- type: cos_sim_ap
value: 61.84829541140869
- type: cos_sim_f1
value: 67.94150731158605
- type: cos_sim_precision
value: 52.674418604651166
- type: cos_sim_recall
value: 95.67053854276664
- type: dot_accuracy
value: 59.01461829994585
- type: dot_ap
value: 61.84829541140869
- type: dot_f1
value: 67.94150731158605
- type: dot_precision
value: 52.674418604651166
- type: dot_recall
value: 95.67053854276664
- type: euclidean_accuracy
value: 59.01461829994585
- type: euclidean_ap
value: 61.84829541140869
- type: euclidean_f1
value: 67.94150731158605
- type: euclidean_precision
value: 52.674418604651166
- type: euclidean_recall
value: 95.67053854276664
- type: manhattan_accuracy
value: 59.06876015159719
- type: manhattan_ap
value: 61.91217952354554
- type: manhattan_f1
value: 67.89059572873735
- type: manhattan_precision
value: 52.613240418118465
- type: manhattan_recall
value: 95.67053854276664
- type: max_accuracy
value: 59.06876015159719
- type: max_ap
value: 61.91217952354554
- type: max_f1
value: 67.94150731158605
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 82.53
- type: ap
value: 77.67591637020448
- type: f1
value: 82.39976599130478
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 55.76388035743312
- type: cos_sim_spearman
value: 58.34768166139753
- type: euclidean_pearson
value: 57.971763429924074
- type: euclidean_spearman
value: 58.34750745303424
- type: manhattan_pearson
value: 58.044053497280245
- type: manhattan_spearman
value: 58.61627719613188
- task:
type: PairClassification
dataset:
type: paws-x
name: MTEB PawsX (zh)
config: zh
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.75
- type: cos_sim_ap
value: 78.80617392926526
- type: cos_sim_f1
value: 75.92417061611374
- type: cos_sim_precision
value: 65.87171052631578
- type: cos_sim_recall
value: 89.59731543624162
- type: dot_accuracy
value: 75.75
- type: dot_ap
value: 78.83768586994135
- type: dot_f1
value: 75.92417061611374
- type: dot_precision
value: 65.87171052631578
- type: dot_recall
value: 89.59731543624162
- type: euclidean_accuracy
value: 75.75
- type: euclidean_ap
value: 78.80617392926526
- type: euclidean_f1
value: 75.92417061611374
- type: euclidean_precision
value: 65.87171052631578
- type: euclidean_recall
value: 89.59731543624162
- type: manhattan_accuracy
value: 75.75
- type: manhattan_ap
value: 78.98640478955386
- type: manhattan_f1
value: 75.92954990215264
- type: manhattan_precision
value: 67.47826086956522
- type: manhattan_recall
value: 86.80089485458613
- type: max_accuracy
value: 75.75
- type: max_ap
value: 78.98640478955386
- type: max_f1
value: 75.92954990215264
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 74.40348414238575
- type: cos_sim_spearman
value: 71.452270332177
- type: euclidean_pearson
value: 72.62509231589097
- type: euclidean_spearman
value: 71.45228258458943
- type: manhattan_pearson
value: 73.03846856200839
- type: manhattan_spearman
value: 71.43673225319574
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 75.38335474357001
- type: cos_sim_spearman
value: 74.92262892309807
- type: euclidean_pearson
value: 73.93451693251345
- type: euclidean_spearman
value: 74.92262892309807
- type: manhattan_pearson
value: 74.55911294300788
- type: manhattan_spearman
value: 74.89436791272614
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 83.01687361650126
- type: cos_sim_spearman
value: 82.74413230806265
- type: euclidean_pearson
value: 81.50177295189083
- type: euclidean_spearman
value: 82.74413230806265
- type: manhattan_pearson
value: 81.90798387028589
- type: manhattan_spearman
value: 82.65064251275778
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 66.25459669294304
- type: mrr
value: 76.76845224661744
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.515
- type: map_at_10
value: 63.63999999999999
- type: map_at_100
value: 67.67
- type: map_at_1000
value: 67.792
- type: map_at_3
value: 44.239
- type: map_at_5
value: 54.54599999999999
- type: mrr_at_1
value: 79.752
- type: mrr_at_10
value: 83.525
- type: mrr_at_100
value: 83.753
- type: mrr_at_1000
value: 83.763
- type: mrr_at_3
value: 82.65599999999999
- type: mrr_at_5
value: 83.192
- type: ndcg_at_1
value: 79.752
- type: ndcg_at_10
value: 72.699
- type: ndcg_at_100
value: 78.145
- type: ndcg_at_1000
value: 79.481
- type: ndcg_at_3
value: 74.401
- type: ndcg_at_5
value: 72.684
- type: precision_at_1
value: 79.752
- type: precision_at_10
value: 37.163000000000004
- type: precision_at_100
value: 4.769
- type: precision_at_1000
value: 0.508
- type: precision_at_3
value: 65.67399999999999
- type: precision_at_5
value: 55.105000000000004
- type: recall_at_1
value: 22.515
- type: recall_at_10
value: 71.816
- type: recall_at_100
value: 89.442
- type: recall_at_1000
value: 96.344
- type: recall_at_3
value: 46.208
- type: recall_at_5
value: 58.695
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 55.077999999999996
- type: f1
value: 53.2447237349446
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 59.50582115422618
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 54.71907850412647
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.4
- type: map_at_10
value: 59.245999999999995
- type: map_at_100
value: 59.811
- type: map_at_1000
value: 59.836
- type: map_at_3
value: 56.733
- type: map_at_5
value: 58.348
- type: mrr_at_1
value: 49.4
- type: mrr_at_10
value: 59.245999999999995
- type: mrr_at_100
value: 59.811
- type: mrr_at_1000
value: 59.836
- type: mrr_at_3
value: 56.733
- type: mrr_at_5
value: 58.348
- type: ndcg_at_1
value: 49.4
- type: ndcg_at_10
value: 64.08
- type: ndcg_at_100
value: 67.027
- type: ndcg_at_1000
value: 67.697
- type: ndcg_at_3
value: 58.995
- type: ndcg_at_5
value: 61.891
- type: precision_at_1
value: 49.4
- type: precision_at_10
value: 7.93
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 21.833
- type: precision_at_5
value: 14.499999999999998
- type: recall_at_1
value: 49.4
- type: recall_at_10
value: 79.3
- type: recall_at_100
value: 93.5
- type: recall_at_1000
value: 98.8
- type: recall_at_3
value: 65.5
- type: recall_at_5
value: 72.5
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 81.16
- type: ap
value: 60.864524843400616
- type: f1
value: 79.41246877404483
---
ZNV-Embedding utilizes a 6B LLM (large language model) for embedding, achieving excellent embedding results.
In a single inference pass, we use two prompts to extract two different embeddings for a sentence and then concatenate them.
Model usage:
1. Define ZNVEmbeddingModel
```python
import os
from transformers import (
LlamaForCausalLM,
LlamaTokenizer, AutoConfig,
)
import torch
import torch.nn.functional as F
import numpy as np
class ZNVEmbeddingModel(torch.nn.Module):
def __init__(self, model_name_or_path):
super(ZNVEmbeddingModel, self).__init__()
self.prompt_prefix = "阅读下文,然后答题\n"
self.prompt_suffixes = ["\n1.一个字总结上文的意思是:",
"\n2.上文深层次的意思是:"]
self.hidden_size = 4096
self.model_name_or_path = model_name_or_path
self.linear_suffixes = torch.nn.ModuleList(
[torch.nn.Linear(self.hidden_size, self.hidden_size//len(self.prompt_suffixes))
for _ in range(len(self.prompt_suffixes))])
self.tokenizer, self.llama = self.load_llama()
self.tanh = torch.nn.Tanh()
self.suffixes_ids = []
self.suffixes_ids_len = []
self.suffixes_len = 0
for suffix in self.prompt_suffixes:
ids = self.tokenizer(suffix, return_tensors="pt")["input_ids"].tolist()[0]
self.suffixes_ids += ids
self.suffixes_ids_len.append(len(ids))
self.suffixes_len += len(ids)
self.suffixes_ones = torch.ones(self.suffixes_len)
self.suffixes_ids = torch.tensor(self.suffixes_ids)
linear_file = os.path.join(model_name_or_path, "linears")
load_layers = torch.load(linear_file)
model_state = self.state_dict()
model_state.update(load_layers)
self.load_state_dict(model_state, strict=False)
def load_llama(self):
llm_path = os.path.join(self.model_name_or_path)
config = AutoConfig.from_pretrained(llm_path)
tokenizer = LlamaTokenizer.from_pretrained(self.model_name_or_path)
tokenizer.padding_side = "left"
model = LlamaForCausalLM.from_pretrained(
llm_path,
config=config,
low_cpu_mem_usage=True
)
model.config.use_cache = False
return tokenizer, model
def forward(self, sentences):
prompts_embeddings = []
sentences = [self.prompt_prefix + s for s in sentences]
inputs = self.tokenizer(sentences, max_length=256, padding=True, truncation=True,
return_tensors='pt')
attention_mask = inputs["attention_mask"]
input_ids = inputs["input_ids"]
batch_size = len(sentences)
suffixes_ones = self.suffixes_ones.unsqueeze(0)
suffixes_ones = suffixes_ones.repeat(batch_size, 1)
device = next(self.parameters()).device
attention_mask = torch.cat([attention_mask, suffixes_ones], dim=-1).to(device)
suffixes_ids = self.suffixes_ids.unsqueeze(0)
suffixes_ids = suffixes_ids.repeat(batch_size, 1)
input_ids = torch.cat([input_ids, suffixes_ids], dim=-1).to(device)
last_hidden_state = self.llama.base_model.base_model(attention_mask=attention_mask, input_ids=input_ids).last_hidden_state
index = -1
for i in range(len(self.suffixes_ids_len)):
embedding = last_hidden_state[:, index, :]
embedding = self.linear_suffixes[i](embedding)
prompts_embeddings.append(embedding)
index -= self.suffixes_ids_len[-i-1]
output_embedding = torch.cat(prompts_embeddings, dim=-1)
output_embedding = self.tanh(output_embedding)
output_embedding = F.normalize(output_embedding, p=2, dim=1)
return output_embedding
def encode(self, sentences, batch_size=10, **kwargs):
size = len(sentences)
embeddings = None
handled = 0
while handled < size:
tokens = sentences[handled:handled + batch_size]
output_embeddings = self.forward(tokens)
result = output_embeddings.cpu().numpy()
handled += result.shape[0]
if embeddings is not None:
embeddings = np.concatenate((embeddings, result), axis=0)
else:
embeddings = result
return embeddings
```
2. Use ZNVEmbeddingModel for Embedding.
```python
znv_model = ZNVEmbeddingModel("your_model_path")
znv_model.eval()
with torch.no_grad():
output = znv_model(["请问你的电话号码是多少?","可以告诉我你的手机号吗?"])
cos_sim = F.cosine_similarity(output[0],output[1],dim=0)
print(cos_sim)
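    # The encode() helper defined above batches sentences and returns a NumPy array,
    # which is the interface expected by MTEB-style evaluation code;
    # the batch_size value here is illustrative.
    embeddings = znv_model.encode(
        ["请问你的电话号码是多少?", "可以告诉我你的手机号吗?"],
        batch_size=10,
    )
    print(embeddings.shape)  # e.g. (2, 4096): two 2048-d prompt embeddings concatenated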
``` |
mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF | mradermacher | 2024-05-06T05:31:17Z | 483 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"music",
"Cyber-Series",
"en",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"base_model:LeroyDyer/Mixtral_AI_Cyber_3.1_SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-31T04:46:29Z | ---
base_model: LeroyDyer/Mixtral_AI_Cyber_3.1_SFT
datasets:
- WhiteRabbitNeo/WRN-Chapter-1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- music
- Cyber-Series
---
## About
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.1_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
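As a quick alternative, the quantized files can also be loaded directly from Python with `llama-cpp-python`; a minimal sketch (the chosen quant file and generation settings are just examples):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quant files listed below (Q4_K_M is used here as an example).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF",
    filename="Mixtral_AI_Cyber_3.1_SFT.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```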
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Kei_Llama3_8B-GGUF | mradermacher | 2024-05-05T15:17:13Z | 483 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ResplendentAI/Kei_Llama3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-23T04:58:30Z | ---
base_model: ResplendentAI/Kei_Llama3_8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ResplendentAI/Kei_Llama3_8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kei_Llama3_8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kei_Llama3_8B-GGUF/resolve/main/Kei_Llama3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
solidrust/Meta-Llama-3-8B-Instruct-AWQ | solidrust | 2024-04-23T22:01:35Z | 483 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"awq",
"region:us"
]
| text-generation | 2024-04-23T21:41:21Z | ---
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# meta-llama/Meta-Llama-3-8B-Instruct AWQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Meta-Llama-3-8B-Instruct-AWQ"
system_message = "You are Meta-Llama-3-8B-Instruct, incarnated as a powerful AI. You were created by meta-llama."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
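For serving, vLLM can load this AWQ checkpoint directly; a minimal sketch (the prompt and sampling settings are illustrative):
```python
from vllm import LLM, SamplingParams

# quantization="awq" selects the AWQ kernels; dtype="half" is typically used with AWQ weights.
llm = LLM(model="solidrust/Meta-Llama-3-8B-Instruct-AWQ", quantization="awq", dtype="half")

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)  # illustrative values
outputs = llm.generate(["Explain AWQ quantization in two sentences."], params)
print(outputs[0].outputs[0].text)
```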
|
nisten/phi3-medium-128k-gguf | nisten | 2024-05-21T20:31:34Z | 483 | 2 | null | [
"gguf",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
]
| null | 2024-05-21T20:19:03Z | ---
license: mit
base_model: microsoft/Phi-3-medium-128k-instruct
---
GGUF and imatrix files of https://huggingface.co/microsoft/Phi-3-medium-128k-instruct
### Chat Format
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
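A small helper that assembles this prompt format in Python (a sketch; the question text is just the example above):
```python
def build_phi3_prompt(user_message: str) -> str:
    """Assemble a single-turn prompt in the chat format shown above."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>"

print(build_phi3_prompt("How to explain Internet for a medieval knight?"))
```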
More uploading and perplexity benchmarks to be posted soon.
The long-context config may change; it has only been tested up to 4k so far.
Cheers, Nisten |
mkay8/llama3_Arabic_mentalQA_4bit | mkay8 | 2024-06-07T11:48:47Z | 483 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ar",
"dataset:mkay8/arabic_mental_health_QA",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-05-30T10:33:41Z | ---
datasets:
- mkay8/arabic_mental_health_QA
language:
- ar
---
# Fine-tuning llama3-instruct for Arabic Question Answering in the Medical and Mental Health Domain

This work presents the fine-tuning of the llama3-instruct model for Arabic question answering in the medical and mental health domain. The approach leverages a custom dataset of Arabic questions and answers collected from medical and mental health websites.

**Key aspects:**

- Model: unsloth/llama-3-8b-Instruct-bnb-4bit
- Fine-tuning technique: LoRA
- Dataset: custom Arabic QA dataset from medical/mental health websites
- Quantization: applied for efficiency

**Results:**

- After fine-tuning, the model successfully transitioned from answering solely in English to answering in Arabic.
- The fine-tuned model demonstrates good performance in generating relevant and informative answers to Arabic questions within the medical and mental health domain. A minimal inference sketch follows.
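An inference sketch with `transformers` (the Arabic question and generation settings are illustrative; loading assumes the checkpoint ships its bitsandbytes quantization config and the Llama-3 chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit fine-tuned checkpoint (bitsandbytes handles the quantized weights).
model_id = "mkay8/llama3_Arabic_mentalQA_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative Arabic question: "What are the common symptoms of anxiety?"
messages = [{"role": "user", "content": "ما هي أعراض القلق الشائعة؟"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```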
**Applications:**

- This work can serve as a foundation for building Arabic chatbots for healthcare applications.

This approach highlights the effectiveness of fine-tuning large language models like llama3-instruct for domain-specific question answering in Arabic. |
duyntnet/Yarn-Llama-2-7b-128k-imatrix-GGUF | duyntnet | 2024-06-01T07:55:45Z | 483 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Yarn-Llama-2-7b-128k",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-06-01T05:08:05Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Yarn-Llama-2-7b-128k
---
Quantizations of https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k
# From original readme
## Usage and Prompt Format
Install FA2 and Rotary Extensions:
```
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
There are no specific prompt formats as this is a pretrained base model. |
snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF | snower | 2024-06-24T09:10:09Z | 483 | 0 | null | [
"gguf",
"pytorch",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:lllyasviel/omost-dolphin-2.9-llama3-8b",
"region:us"
]
| null | 2024-06-24T06:30:47Z | ---
base_model: lllyasviel/omost-dolphin-2.9-llama3-8b
tags:
- pytorch
- trl
- sft
- llama-cpp
- gguf-my-repo
inference: false
---
# snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF
This model was converted to GGUF format from [`lllyasviel/omost-dolphin-2.9-llama3-8b`](https://huggingface.co/lllyasviel/omost-dolphin-2.9-llama3-8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lllyasviel/omost-dolphin-2.9-llama3-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF --hf-file omost-dolphin-2.9-llama3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF --hf-file omost-dolphin-2.9-llama3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF --hf-file omost-dolphin-2.9-llama3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo snower/omost-dolphin-2.9-llama3-8b-Q4_K_M-GGUF --hf-file omost-dolphin-2.9-llama3-8b-q4_k_m.gguf -c 2048
```
|
hidude562/Walter | hidude562 | 2022-10-20T21:59:58Z | 482 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-10-20T21:57:58Z | Entry not found |
timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k | timm | 2023-05-06T00:02:23Z | 482 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:wit-400m",
"dataset:imagenet-12k",
"arxiv:2212.07143",
"arxiv:2103.00020",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-11-27T23:16:54Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---
# Model card for vit_base_patch16_clip_224.openai_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 16.9
- Activations (M): 16.5
- Image size: 224 x 224
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- WIT-400M
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_clip_224.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_clip_224.openai_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/dm_nfnet_f1.dm_in1k | timm | 2024-02-10T23:35:54Z | 482 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.06171",
"arxiv:2101.08692",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-24T00:47:46Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for dm_nfnet_f1.dm_in1k
A NFNet (Normalization Free Network) image classification model. Trained on ImageNet-1k by paper authors.
Normalization Free Networks are (pre-activation) ResNet-like models without any normalization layers. Instead of Batch Normalization or alternatives, they use Scaled Weight Standardization and specifically placed scalar gains in residual path and at non-linearities based on signal propagation analysis.
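As a rough illustration of the Scaled Weight Standardization idea (a simplified sketch, not `timm`'s exact `ScaledStdConv2d` implementation), each convolution's filters are standardized over their fan-in and rescaled by a learnable per-filter gain before the convolution is applied:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledStdConv2dSketch(nn.Conv2d):
    """Sketch of Scaled Weight Standardization: standardize each output filter
    over its fan-in, apply gamma / sqrt(fan-in) scaling and a learnable gain,
    then convolve. Simplified for illustration."""

    def __init__(self, in_ch, out_ch, kernel_size, gamma=1.0, eps=1e-6, **kwargs):
        super().__init__(in_ch, out_ch, kernel_size, **kwargs)
        self.gain = nn.Parameter(torch.ones(out_ch, 1, 1, 1))  # learnable per-filter gain
        self.scale = gamma * self.weight[0].numel() ** -0.5    # gamma / sqrt(fan-in)
        self.eps = eps

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True)
        w = self.gain * self.scale * (w - mean) / (std + self.eps)
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)
```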
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 132.6
- GMACs: 17.9
- Activations (M): 22.9
- Image size: train = 224 x 224, test = 320 x 320
- **Papers:**
- High-Performance Large-Scale Image Recognition Without Normalization: https://arxiv.org/abs/2102.06171
- Characterizing signal propagation to close the performance gap in unnormalized ResNets: https://arxiv.org/abs/2101.08692
- **Original:** https://github.com/deepmind/deepmind-research/tree/master/nfnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dm_nfnet_f1.dm_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f1.dm_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1536, 14, 14])
# torch.Size([1, 3072, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f1.dm_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3072, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
```bibtex
@inproceedings{brock2021characterizing,
author={Andrew Brock and Soham De and Samuel L. Smith},
title={Characterizing signal propagation to close the performance gap in
unnormalized ResNets},
booktitle={9th International Conference on Learning Representations, {ICLR}},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
mdecot/RobotDocNLP | mdecot | 2023-06-01T16:29:44Z | 482 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"medic",
"en",
"dataset:ktgiahieu/maccrobat2018_2020",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-06-01T13:29:29Z | ---
pipeline_tag: token-classification
datasets:
- ktgiahieu/maccrobat2018_2020
language:
- en
tags:
- medic
library_name: transformers
--- |
flavour/CLIP-ViT-B-16-DataComp.XL-s13B-b90K | flavour | 2023-08-12T18:25:30Z | 482 | 1 | open_clip | [
"open_clip",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"dataset:mlfoundations/datacomp_pools",
"arxiv:2304.14108",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2023-07-27T10:09:33Z | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
datasets:
- mlfoundations/datacomp_pools
pipeline_tag: zero-shot-image-classification
---
Original repo: https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K
Added transformers support:
```python
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("marcusinthesky/CLIP-ViT-L-14-DataComp.XL-s13B-b90K")
```
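A zero-shot classification sketch using the processor as well (the image URL and candidate labels are illustrative, and this assumes the repo also ships the processor/tokenizer files):
```python
import torch
from urllib.request import urlopen
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

repo_id = "marcusinthesky/CLIP-ViT-L-14-DataComp.XL-s13B-b90K"  # as in the snippet above
model = CLIPModel.from_pretrained(repo_id)
processor = CLIPProcessor.from_pretrained(repo_id)

image = Image.open(urlopen(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
))
labels = ["a photo of beignets", "a photo of a dog"]  # illustrative candidate labels

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```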
# Model card for CLIP ViT-L-14 trained DataComp-1B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-L/14 model trained with the DataComp-1B (https://github.com/mlfoundations/datacomp) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the DataComp paper (https://arxiv.org/abs/2304.14108) includes additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
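A minimal zero-shot classification sketch using the `transformers` snippet at the top of this card (the repo id is the one from that snippet; the image path and label set are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

repo = "marcusinthesky/CLIP-ViT-L-14-DataComp.XL-s13B-b90K"  # repo id taken from the snippet above
model = CLIPModel.from_pretrained(repo)
processor = CLIPProcessor.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder image path
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)[0]  # one probability per label
print(dict(zip(labels, probs.tolist())))
```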
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
# Training Details
## Training Data
This model was trained with the 1.4 Billion samples of the DataComp-1B dataset (https://arxiv.org/abs/2304.14108).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come with training large-scale models, as well as of pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it to create ready-to-go industrial products, as the basic research on the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
Please see https://arxiv.org/abs/2304.14108.
# Evaluation
Evaluation done on 38 datasets, using the [DataComp repo](https://github.com/mlfoundations/datacomp) and the [LAION CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed on a suite of 38 datasets. See our paper for more details (https://arxiv.org/abs/2304.14108).
## Results
The model achieves a 79.2% zero-shot top-1 accuracy on ImageNet-1k. See our paper for more details and results (https://arxiv.org/abs/2304.14108).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
DataComp
```bibtex
@article{datacomp,
title={DataComp: In search of the next generation of multimodal datasets},
author={Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt},
journal={arXiv preprint arXiv:2304.14108},
year={2023}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
See https://github.com/mlfoundations/open_clip |
TheBloke/Huginn-13B-GGUF | TheBloke | 2023-09-27T12:47:35Z | 482 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:The-Face-Of-Goonery/Huginn-13b-FP16",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T12:49:52Z | ---
license: llama2
model_name: Huginn 13B
base_model: The-Face-Of-Goonery/Huginn-13b-FP16
inference: false
model_creator: Caleb Morgan
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Huginn 13B - GGUF
- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
- Original model: [Huginn 13B](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Caleb Morgan's Huginn 13B](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Huginn-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Huginn-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Huginn-13B-GGUF)
* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
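As a rough sanity check of these figures: at about 4.5 bpw, a 13B-parameter model works out to roughly 13e9 × 4.5 / 8 ≈ 7.3 GB, which is in line with the Q4_K_S and Q4_K_M file sizes in the table below (the K-quant mixes spend slightly more bits on some tensors).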
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [huggin-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [huggin-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [huggin-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [huggin-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [huggin-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [huggin-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [huggin-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [huggin-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [huggin-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [huggin-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [huggin-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [huggin-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-GGUF/blob/main/huggin-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Huginn-13B-GGUF and below it, a specific filename to download, such as: huggin-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Huginn-13B-GGUF huggin-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Huginn-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Huginn-13B-GGUF huggin-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m huggin-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Huginn-13B-GGUF", model_file="huggin-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
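llama-cpp-python can also be used directly. A minimal sketch, assuming the GGUF file has already been downloaded to the current directory, using the Alpaca prompt template shown above:

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(model_path="huggin-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about mountains.\n\n### Response:\n"
)

output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```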
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Caleb Morgan's Huginn 13B
A merge of many different models, including Hermes, Beluga, Airoboros, Chronos, and LimaRP, with significantly better quality than my previous Chronos-Beluga merge.
Huginn is intended as a general-purpose model that maintains a lot of good knowledge, can perform logical reasoning and accurately follow instructions, and keeps the prose and creativity of more writing-oriented models. This makes it great for roleplay while still being good as a normal chatbot or assistant.
<!-- original-model-card end -->
|
Yntec/3DCuteWave | Yntec | 2023-09-12T18:37:17Z | 482 | 1 | diffusers | [
"diffusers",
"safetensors",
"3D",
"Character",
"Children",
"StableDiffusionVN",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-12T17:41:45Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 3D
- Character
- Children
- StableDiffusionVN
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# SDVN5-3DCuteWave
Model by SDVN.
Samples and prompt:


Female mini cute style, sitting IN SOFA in gaming room, A wholesome animation key shot at computer monitor, pixar and disney animation, studio ghibli, style of maple story, anime key art by ROSSDRAWS and Clay Mann, maple story girl, soft shade, soft lighting, chibi
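A minimal diffusers sketch for generating an image with this checkpoint (assuming a CUDA GPU is available; the prompt is the sample above and the output filename is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/3DCuteWave", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("Female mini cute style, sitting IN SOFA in gaming room, "
          "A wholesome animation key shot at computer monitor, pixar and disney animation, "
          "studio ghibli, style of maple story, anime key art by ROSSDRAWS and Clay Mann, "
          "maple story girl, soft shade, soft lighting, chibi")

image = pipe(prompt).images[0]
image.save("3dcutewave_sample.png")  # placeholder output filename
```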
Original page:
https://civitai.com/models/103178/sdvn5-3dcutewave |
TheBloke/samantha-mistral-7B-GGUF | TheBloke | 2023-09-30T10:25:22Z | 482 | 12 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:ehartford/samantha-mistral-7b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-30T09:58:34Z | ---
base_model: ehartford/samantha-mistral-7b
inference: false
license: apache-2.0
model_creator: Eric Hartford
model_name: Samantha Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Samantha Mistral 7B - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Samantha Mistral 7B](https://huggingface.co/ehartford/samantha-mistral-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Samantha Mistral 7B](https://huggingface.co/ehartford/samantha-mistral-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/samantha-mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/samantha-mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-mistral-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [samantha-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [samantha-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [samantha-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [samantha-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [samantha-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [samantha-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [samantha-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [samantha-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [samantha-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [samantha-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [samantha-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [samantha-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-7B-GGUF/blob/main/samantha-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/samantha-mistral-7B-GGUF and below it, a specific filename to download, such as: samantha-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/samantha-mistral-7B-GGUF samantha-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/samantha-mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/samantha-mistral-7B-GGUF samantha-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m samantha-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/samantha-mistral-7B-GGUF", model_file="samantha-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
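Note that the snippet above does not apply the ChatML template this model expects. A sketch of assembling the prompt in that format before generating, reusing the `llm` object from the snippet above (the system and user messages follow the example in the original model card below):

```python
system_message = "You are a caring and empathetic sentient AI companion named Samantha."
user_message = "Hello, what is your name?"

# Assemble the ChatML prompt expected by this model.
prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

print(llm(prompt, max_new_tokens=256, stop=["<|im_end|>"]))
```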
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Samantha Mistral 7B
Trained on [mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model, this Samantha was trained in 2 hours on 4x A100 80gb with 20 epochs of the Samantha-1.1 dataset.
My gratitude goes to my sponsors, a16z and Microsoft for Startups.
After first training Samantha on mistral-instruct, and seeing how she interacts, I felt that training on the base model would provide a more genuine Samantha experience. So I trained this one.
NEW! This Samantha is trained using ChatML prompt format instead of Vicuna-1.1. All of my future models will use ChatML prompt format.
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She will not engage in roleplay, romance, or sexual activity.
She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Her conversation format is ChatML
https://github.com/openai/openai-python/blob/main/chatml.md
Prompt Template:
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```
Example:

Official character card: (thanks MortalWombat)

Shout out and much thanks to WingLian, author of axolotl! And everyone who has contributed to the project.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
And much thanks as always to TheBloke for distribution.
<!-- original-model-card end -->
|
TheBloke/Llama2-chat-AYB-13B-GGUF | TheBloke | 2023-10-08T17:42:25Z | 482 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2306.02707",
"base_model:posicube/Llama2-chat-AYB-13B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-08T17:22:07Z | ---
base_model: posicube/Llama2-chat-AYB-13B
inference: false
license: llama2
model_creator: Posicube Inc.
model_name: Llama2 Chat AYB 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 Chat AYB 13B - GGUF
- Model creator: [Posicube Inc.](https://huggingface.co/posicube)
- Original model: [Llama2 Chat AYB 13B](https://huggingface.co/posicube/Llama2-chat-AYB-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Posicube Inc.'s Llama2 Chat AYB 13B](https://huggingface.co/posicube/Llama2-chat-AYB-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF)
* [Posicube Inc.'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/posicube/Llama2-chat-AYB-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-chat-ayb-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-chat-ayb-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama2-chat-ayb-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama2-chat-ayb-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama2-chat-ayb-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-chat-ayb-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama2-chat-ayb-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama2-chat-ayb-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-chat-ayb-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama2-chat-ayb-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama2-chat-ayb-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama2-chat-ayb-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Llama2-chat-AYB-13B-GGUF/blob/main/llama2-chat-ayb-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama2-chat-AYB-13B-GGUF and below it, a specific filename to download, such as: llama2-chat-ayb-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama2-chat-AYB-13B-GGUF llama2-chat-ayb-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
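The same download can also be done from Python with the `huggingface_hub` API; a minimal sketch (the filename is the Q4_K_M file from the table above):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama2-chat-AYB-13B-GGUF",
    filename="llama2-chat-ayb-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path of the downloaded GGUF file
```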
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama2-chat-AYB-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama2-chat-AYB-13B-GGUF llama2-chat-ayb-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama2-chat-ayb-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama2-chat-AYB-13B-GGUF", model_file="llama2-chat-ayb-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Posicube Inc.'s Llama2 Chat AYB 13B
This is a model diverged from Llama-2-13b-chat-hf. We hypothesize that if we find a method to ensemble the top rankers in each benchmark effectively, the resulting model's performance will be maximized as well. Following this intuition, we ensembled the top models in each benchmark to create our model.
# Model Details
- **Developed by**: Posicube Inc.
- **Backbone Model**: LLaMA-2-13b-chat
- **Library**: HuggingFace Transformers
- **Used Dataset Details**
Orca-style datasets, Alpaca-style datasets
# Evaluation
This model achieved the top rank among 13B models on the leaderboard as of October 3rd, 2023.
| Metric |Scores on Leaderboard| Our results |
|---------------------|---------------------|-------------|
| ARC (25-shot) | 63.4 | 63.48 |
| HellaSwag (10-shot) | 84.79 | 84.87 |
| MMLU (5-shot) | 59.34 | 59.59 |
| TruthfulQA (0-shot) | 55.62 | 55.22 |
| Avg. | 65.79 | 65.78 |
# Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
# Contact Us
[Posicube](https://www.posicube.com/)
# Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
<!-- original-model-card end -->
|
furusu/SSD-1B-anime | furusu | 2023-12-14T09:40:18Z | 482 | 52 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-10-26T07:31:03Z | ---
tags:
- text-to-image
- stable-diffusion
---
This model was created in the following two steps.
1. Merge [SSD-1B](https://huggingface.co/segmind/SSD-1B) with 1.3× the difference between [NekorayXL](https://civitai.com/models/136719?modelVersionId=150826) and [sdxl-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). For the correspondence between the pre- and post-distillation keys, this [mapping](https://gist.github.com/laksjdjf/eddeda74a90ddaaaf4c51aea1ece7d01) is assumed.
2. Distill against the final output of [NekorayXL](https://civitai.com/models/136719?modelVersionId=150826), using the difference as the loss (learning rate 1e-5, batch size 4, 23,000 steps).
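A rough Python sketch of the add-difference merge in step 1; the file names and the `map_key` helper below are placeholders (the real key mapping is the gist linked above):
```python
# Sketch only: SSD-1B + 1.3 * (NekorayXL - SDXL base), using placeholder file names.
from safetensors.torch import load_file, save_file

ssd1b   = load_file("ssd-1b.safetensors")           # placeholder paths
nekoray = load_file("nekorayxl.safetensors")
sdxl    = load_file("sd_xl_base_1.0.safetensors")

def map_key(ssd_key: str) -> str:
    """Placeholder for the SSD-1B -> SDXL key mapping from the linked gist."""
    return ssd_key

merged = {}
for key, weight in ssd1b.items():
    sdxl_key = map_key(key)
    if sdxl_key in nekoray and sdxl_key in sdxl:
        # add 1.3x the (NekorayXL - SDXL base) difference on top of SSD-1B
        merged[key] = weight + 1.3 * (nekoray[sdxl_key] - sdxl[sdxl_key])
    else:
        merged[key] = weight

save_file(merged, "ssd-1b-anime-step1.safetensors")
```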
# Usage
The [safetensors file](https://huggingface.co/furusu/SSD-1B-anime/blob/main/ssd-1b-anime-v2.safetensors) can be used with the latest ComfyUI.
# LoRA
[ssd-1b-anime-cfgdistill](https://huggingface.co/furusu/SSD-1B-anime/blob/main/ssd-1b-anime-cfgdistill.safetensors):
A LoRA trained so that decent images are generated at cfg_scale=1. With cfg_scale=1 the negative-prompt pass is no longer needed, so the amount of computation is halved. With values above 1 you lose that saving, but it still seems to work as an ordinary quality-improvement LoRA; a lower value than for normal generation is recommended, though.
# LCM
[lcm-ssd1b-anime](https://huggingface.co/furusu/SSD-1B-anime/blob/main/lcm-ssd1b-anime.safetensors): trained starting from the [LCM-LoRA for SSD-1B](https://huggingface.co/latent-consistency/lcm-lora-ssd-1b).
# Key correspondence between SSD-1B and SDXL
Since [it is not known which modules were removed](https://github.com/segmind/SSD-1B/issues/1), I estimated the correspondence using cosine similarity.
Since (probably) only the transformer_depth changes, I examined the attention-layer parameters of SDXL and SSD-1B.
When 2 layers are reduced to 1, the first layer remains.
When 10 layers are reduced to 4, the 1st, 2nd, 3rd and 7th layers remain.
Note: the 3rd up block still has 10 layers, but its cosine-similarity results were puzzling, so for now I assume it is unchanged.


|
brucethemoose/Yi-34B-200K-RPMerge | brucethemoose | 2024-03-04T19:30:12Z | 482 | 50 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Yi",
"exllama",
"exllamav2",
"exl2",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-07T06:14:13Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
base_model: []
tags:
- mergekit
- merge
- Yi
- exllama
- exllamav2
- exl2
---
# RPMerge
A merge of several Yi 34B models with a singular goal: 40K+ context, instruct-enhanced storytelling.
Disappointed with some quirks of my previous kitchen sink merges (like token/instruct formats from various models showing up when they shouldn't), I've gone 'back to the basics' and picked a few Vicuna-format only models:
- [DrNicefellow/ChatAllInOne-Yi-34B-200K-V1](https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1) and [migtissera/Tess-34B-v1.5b](https://huggingface.co/migtissera/Tess-34B-v1.5b) both have excellent general instruction-following performance.
- [cgato/Thespis-34b-v0.7](https://huggingface.co/cgato/Thespis-34b-v0.7) is trained on the "Username: {Input} / BotName: {Response}" format, to emphasize it in the merge (but not force it). It also seems to work for multi-character stories.
- [Doctor-Shotgun/limarpv3-yi-llama-34b-lora](https://huggingface.co/Doctor-Shotgun/limarpv3-yi-llama-34b-lora) is trained on roleplaying data, but merged at a modest weight so as not to overemphasize it. This is the only non-Vicuna model (being Alpaca format), but it doesn't seem to interfere with the Vicuna format or adversely affect long-context perplexity.
- [adamo1139/yi-34b-200k-rawrr-dpo-2](https://huggingface.co/adamo1139/yi-34b-200k-rawrr-dpo-2) the base for the limarp lora, this is base Yi gently finetuned to discourage refusals.
- [migtissera/Tess-M-Creative-v1.0](https://huggingface.co/migtissera/Tess-M-Creative-v1.0) and [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge.
I consider this a more "focused" merge than previous ones. I will investigate other models (perhaps ChatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs.
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
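A minimal sketch of filling in this template from Python (the system message and prompt below are placeholders):
```python
# Sketch: assembling an Orca-Vicuna style prompt string for this model.
system_message = "You are a creative writing assistant."          # placeholder
prompt = "Continue the story from where the last chapter ended."  # placeholder

full_prompt = f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
print(full_prompt)
```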
Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
A very explicit system prompt like this also works well: https://old.reddit.com/r/LocalLLaMA/comments/1aiz6zu/roleplaying_system_prompts/koygiwa/
## Running
Chinese models with large tokenizer vocabularies like Yi need *careful* parameter tuning due to their huge logit sampling "tails." Yi in particular also runs relatively "hot" even at lower temperatures.
I am a huge fan of Kalomaze's quadratic sampling (shown as "smoothing factor" where available), as described here: https://github.com/oobabooga/text-generation-webui/pull/5403
Otherwise, I recommend a lower temperature with 0.1 or higher MinP, a little repetition penalty, and mirostat with a low tau, and no other samplers. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841
@MarinaraSpaghetti has extensively tested the model and recommended the following settings. They seem to work quite well:
```
{
"temp": 1,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 0.9,
"min_p": 0,
"rep_pen": 1.1,
"rep_pen_range": 19456,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 0,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 1,
"max_temp": 2,
"dynatemp_exponent": 1,
"smoothing_factor": 0.33,
"add_bos_token": false,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"banned_tokens": "",
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"rep_pen_size": 0,
"genamt": 400,
"max_length": 38912
}
```
24GB GPUs can efficiently run Yi-34B-200K models at **40K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). Empty 16GB GPUs can still run the high context with aggressive quantization.
To load/train this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends that support flash attention + 8 bit kv cache, like exllamav2, litellm, vllm or unsloth.
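As a sketch of that config override for transformers (the target length and dtype here are only examples):
```python
# Sketch only: cap max_position_embeddings below 200K before loading in transformers.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "brucethemoose/Yi-34B-200K-RPMerge"
config = AutoConfig.from_pretrained(model_id)
config.max_position_embeddings = 32768  # example value well below 200,000

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
```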
## Testing Notes
Thanks to ParasiticRogue for this idea of a Vicuna-only merge, see: https://huggingface.co/brucethemoose/jondurbin_bagel-dpo-34b-v0.2-exl2-4bpw-fiction/discussions
See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8#testing-notes
This is a possible base for a storytelling finetune/LASER in the future, once I can bite the bullet and rent some A100s or a MI300.
I have tested this merge with novel-style continuation (but not much chat-style roleplay), some assistant-style responses, and long-context analysis. I haven't seen any refusals so far.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base.
### Models Merged
The following models were included in the merge:
* /home/alpha/Models/Raw/migtissera_Tess-34B-v1.5b
* /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
* /home/alpha/Models/Raw/cgato_Thespis-34b-DPO-v0.7
* /home/alpha/Models/Raw/Nous-Capybara-34B
* /home/alpha/Models/Raw/admo_limarp
* /home/alpha/Models/Raw/DrNicefellow_ChatAllInOne-Yi-34B-200K-V1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama
# No parameters necessary for base model
- model: /home/alpha/Models/Raw/migtissera_Tess-34B-v1.5b
#Emphasize the beginning of Vicuna format models
parameters:
weight: 0.19
density: 0.59
- model: /home/alpha/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.19
density: 0.55
# Vicuna format
- model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
parameters:
weight: 0.05
density: 0.55
- model: /home/alpha/Models/Raw/DrNicefellow_ChatAllInOne-Yi-34B-200K-V1
parameters:
weight: 0.19
density: 0.55
- model: adamo1139/yi-34b-200k-rawrr-dpo-2+Doctor-Shotgun/limarpv3-yi-llama-34b-lora
parameters:
weight: 0.19
density: 0.48
- model: /home/alpha/Models/Raw/cgato_Thespis-34b-DPO-v0.7
parameters:
weight: 0.19
density: 0.59
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
## Self Promotion
I'm part of an AI startup called Holocene AI!
We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely *none* of the nonsense of scammy AI startups.
Contact me at: [email protected]
I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas |
InferenceIllusionist/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-iMat-GGUF | InferenceIllusionist | 2024-03-20T11:29:16Z | 482 | 0 | null | [
"gguf",
"merge",
"storywriting",
"text adventure",
"iMat",
"region:us"
]
| null | 2024-03-17T13:01:49Z | ---
tags:
- merge
- gguf
- storywriting
- text adventure
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-iMat-GGUF
<b>Special request.</b> Quantized from fp32 with love.
* Quantizations made possible using .imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)
For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>
Please note that importance matrix quantizations are a work in progress; IQ3 and above is recommended for best results.
<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on whether you are also running image generation or TTS.
Original model card can be found [here](https://huggingface.co/notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES) |
Aryanne/MedWest-7B | Aryanne | 2024-04-27T21:41:35Z | 482 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:senseable/WestLake-7B-v2",
"base_model:internistai/base-7b-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-27T02:27:41Z | ---
base_model:
- senseable/WestLake-7B-v2
- internistai/base-7b-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Medwest
Just testing my task_swapping method.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the task_swapping merge method using [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: internistai/base-7b-v0.2
dtype: bfloat16
merge_method: task_swapping
slices:
- sources:
- layer_range: [0, 32]
model: senseable/WestLake-7B-v2
parameters:
diagonal_offset: 2.0
weight: 1.0
- layer_range: [0, 32]
model: internistai/base-7b-v0.2
``` |
kwoncho/gaincut_news_pre2020_2 | kwoncho | 2024-05-28T02:48:29Z | 482 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-28T02:47:28Z | Entry not found |
kvriza8/clip-microscopy-2-epoch-sem | kvriza8 | 2024-06-03T03:42:02Z | 482 | 0 | open_clip | [
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2024-06-03T03:41:18Z | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for clip-microscopy-2-epoch-sem
|
mradermacher/IxChel-L3-12B-GGUF | mradermacher | 2024-06-12T08:35:14Z | 482 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"en",
"base_model:MarsupialAI/IxChel-L3-12B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-11T22:17:16Z | ---
base_model: MarsupialAI/IxChel-L3-12B
language:
- en
library_name: transformers
license: other
license_name: llama3
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MarsupialAI/IxChel-L3-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/IxChel-L3-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q2_K.gguf) | Q2_K | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.IQ3_XS.gguf) | IQ3_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q3_K_S.gguf) | Q3_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.IQ3_S.gguf) | IQ3_S | 5.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.IQ3_M.gguf) | IQ3_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q3_K_M.gguf) | Q3_K_M | 5.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q3_K_L.gguf) | Q3_K_L | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.IQ4_XS.gguf) | IQ4_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q4_K_S.gguf) | Q4_K_S | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q4_K_M.gguf) | Q4_K_M | 7.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q5_K_S.gguf) | Q5_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q5_K_M.gguf) | Q5_K_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q6_K.gguf) | Q6_K | 9.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IxChel-L3-12B-GGUF/resolve/main/IxChel-L3-12B.Q8_0.gguf) | Q8_0 | 12.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
netcat420/MFANN3bv0.12.10 | netcat420 | 2024-06-17T18:41:01Z | 482 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:liminerity/Phigments12",
"base_model:netcat420/MFANN3bv0.6",
"base_model:netcat420/MFANN3bv0.12",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-17T18:07:01Z | ---
base_model:
- liminerity/Phigments12
- netcat420/MFANN3bv0.6
- netcat420/MFANN3bv0.12
library_name: transformers
tags:
- mergekit
- merge
---
# MFANN3bv0.12.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
* [netcat420/MFANN3bv0.12](https://huggingface.co/netcat420/MFANN3bv0.12)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.12
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF | CHE-72 | 2024-06-22T08:49:48Z | 482 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"base_model:baichuan-inc/Baichuan2-7B-Chat",
"region:us"
]
| null | 2024-06-22T08:49:26Z | ---
base_model: baichuan-inc/Baichuan2-7B-Chat
language:
- en
- zh
license_name: baichuan2-community-license
license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf
tags:
- llama-cpp
- gguf-my-repo
tasks:
- text-generation
---
# CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF
This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF --hf-file baichuan2-7b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF --hf-file baichuan2-7b-chat-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF --hf-file baichuan2-7b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_K_M-GGUF --hf-file baichuan2-7b-chat-q5_k_m.gguf -c 2048
```
|
NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF | NikolayKozloff | 2024-06-28T17:03:34Z | 482 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"license:llama3",
"region:us"
]
| null | 2024-06-28T17:03:09Z | ---
base_model: Sao10K/L3-8B-Lunaris-v1
language:
- en
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF
This model was converted to GGUF format from [`Sao10K/L3-8B-Lunaris-v1`](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF --hf-file l3-8b-lunaris-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF --hf-file l3-8b-lunaris-v1-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF --hf-file l3-8b-lunaris-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q4_0-GGUF --hf-file l3-8b-lunaris-v1-q4_0.gguf -c 2048
```
|
agie-ai/Indic-gemma-2b-finetuned-sft-Navarasa-2.0.gguf | agie-ai | 2024-07-02T06:31:22Z | 482 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-07-02T06:30:05Z | Entry not found |
GKLMIP/electra-khmer-small-uncased-tokenized | GKLMIP | 2021-07-31T05:42:53Z | 481 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | Entry not found |
dbmdz/electra-base-turkish-mc4-cased-discriminator | dbmdz | 2021-09-23T10:44:30Z | 481 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"electra",
"pretraining",
"tr",
"dataset:allenai/c4",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: tr
license: mit
datasets:
- allenai/c4
---
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
guillaumekln/faster-whisper-small.en | guillaumekln | 2023-05-12T18:57:44Z | 481 | 1 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
]
| automatic-speech-recognition | 2023-03-23T10:20:17Z | ---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper small.en model for CTranslate2
This repository contains the conversion of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("small.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-small.en --output_dir faster-whisper-small.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
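For example, a sketch of loading with a different compute type (values are illustrative):
```python
# Illustrative: override the stored FP16 type at load time via compute_type.
from faster_whisper import WhisperModel

# 8-bit inference on CPU ...
model = WhisperModel("small.en", device="cpu", compute_type="int8")
# ... or mixed int8/float16 on GPU
# model = WhisperModel("small.en", device="cuda", compute_type="int8_float16")
```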
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small.en).**
|
timm/xcit_nano_12_p8_224.fb_dist_in1k | timm | 2024-02-10T23:43:36Z | 481 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-13T02:22:26Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_nano_12_p8_224.fb_dist_in1k
A XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k with distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.0
- GMACs: 2.2
- Activations (M): 15.7
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_nano_12_p8_224.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_nano_12_p8_224.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 128) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
Yntec/Crayon | Yntec | 2023-10-23T04:11:13Z | 481 | 5 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Sketch",
"Drawing",
"ostris",
"Ikena",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-23T03:12:45Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Sketch
- Drawing
- ostris
- Ikena
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Crayon
Yuzu 1.0 model with the Crayon Style LoRA merged in.
Sample and prompt:

beautiful background, beautiful detailed girl, Cartoon Pretty CUTE LITTLE Girl, sitting on a box of STRAWBERRY, DETAILED CHIBI EYES, holding antique STRAWBERRY, detailed hair, Ponytail, key shot at computer monitor, Magazine ad, iconic, 1940, sharp focus. Acrylic art on canvas By KlaysMoji and artgerm and Dave Mann and and Clay leyendecker
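A minimal diffusers sketch for trying the model (settings are illustrative, not the exact setup used for the sample above):
```python
# Sketch only: load the checkpoint with diffusers and render a test prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Crayon", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(
    "Cartoon pretty cute little girl sitting on a box of strawberries, detailed chibi eyes",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("crayon_sample.png")
```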
Original pages:
https://civitai.com/models/120853/crayon-style-sdxl-and-sd15?modelVersionId=131468 (Crayon Style)
https://civitai.com/models/67120?modelVersionId=71749 (Yuzu) |
mradermacher/UltraMerge-7B-GGUF | mradermacher | 2024-06-07T20:47:57Z | 481 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"automerger",
"en",
"base_model:mlabonne/UltraMerge-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T12:32:31Z | ---
base_model: mlabonne/UltraMerge-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- automerger
---
## About
static quants of https://huggingface.co/mlabonne/UltraMerge-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UltraMerge-7B-GGUF/resolve/main/UltraMerge-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kavg/LiLT-RE-DE | kavg | 2024-03-22T19:43:23Z | 481 | 0 | transformers | [
"transformers",
"safetensors",
"lilt",
"generated_from_trainer",
"dataset:xfun",
"base_model:nielsr/lilt-xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T19:41:33Z | ---
license: mit
base_model: nielsr/lilt-xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xfun
metrics:
- precision
- recall
- f1
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on the xfun dataset.
It achieves the following results on the evaluation set:
- Precision: 0.3054
- Recall: 0.6032
- F1: 0.4055
- Loss: 0.2164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Precision | Recall | F1 | Validation Loss |
|:-------------:|:------:|:-----:|:---------:|:------:|:------:|:---------------:|
| 0.1914 | 20.83 | 500 | 0 | 0 | 0 | 0.2039 |
| 0.1638 | 41.67 | 1000 | 0.4688 | 0.0252 | 0.0478 | 0.2196 |
| 0.0928 | 62.5 | 1500 | 0.3790 | 0.1669 | 0.2318 | 0.2127 |
| 0.0948 | 83.33 | 2000 | 0.3125 | 0.4245 | 0.3600 | 0.2987 |
| 0.0796 | 104.17 | 2500 | 0.3102 | 0.5587 | 0.3989 | 0.3636 |
| 0.0469 | 125.0 | 3000 | 0.3204 | 0.5134 | 0.3946 | 0.3587 |
| 0.0471 | 145.83 | 3500 | 0.3303 | 0.5243 | 0.4053 | 0.2792 |
| 0.0486 | 166.67 | 4000 | 0.2967 | 0.5973 | 0.3964 | 0.2973 |
| 0.0381 | 187.5 | 4500 | 0.3066 | 0.6007 | 0.4060 | 0.3003 |
| 0.0392 | 208.33 | 5000 | 0.3054 | 0.6032 | 0.4055 | 0.2164 |
| 0.0268 | 229.17 | 5500 | 0.3052 | 0.6158 | 0.4081 | 0.3159 |
| 0.029 | 250.0 | 6000 | 0.2850 | 0.6292 | 0.3923 | 0.3108 |
| 0.0217 | 270.83 | 6500 | 0.2964 | 0.6141 | 0.3998 | 0.3130 |
| 0.0241 | 291.67 | 7000 | 0.3012 | 0.6216 | 0.4058 | 0.3197 |
| 0.038 | 312.5 | 7500 | 0.3051 | 0.6216 | 0.4093 | 0.2627 |
| 0.0374 | 333.33 | 8000 | 0.2914 | 0.6359 | 0.3997 | 0.3388 |
| 0.0194 | 354.17 | 8500 | 0.2975 | 0.6275 | 0.4037 | 0.3155 |
| 0.0189 | 375.0 | 9000 | 0.3037 | 0.625 | 0.4088 | 0.2911 |
| 0.0147 | 395.83 | 9500 | 0.2993 | 0.6242 | 0.4046 | 0.3417 |
| 0.0328 | 416.67 | 10000 | 0.3012 | 0.6242 | 0.4063 | 0.3210 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
|
mradermacher/Rivoli_7B_SLERP-GGUF | mradermacher | 2024-05-06T05:59:23Z | 481 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"CultriX/NeuralTrix-bf16",
"AurelPx/Percival_01-7b-slerp",
"en",
"base_model:louisgrc/Rivoli_7B_SLERP",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-25T08:45:42Z | ---
base_model: louisgrc/Rivoli_7B_SLERP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- CultriX/NeuralTrix-bf16
- AurelPx/Percival_01-7b-slerp
---
## About
static quants of https://huggingface.co/louisgrc/Rivoli_7B_SLERP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3_SnowStorm_4x8B-GGUF | mradermacher | 2024-05-20T23:32:12Z | 481 | 5 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3_SnowStorm_4x8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-20T21:54:01Z | ---
base_model: xxx777xxxASD/L3_SnowStorm_4x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/xxx777xxxASD/L3_SnowStorm_4x8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.IQ3_XS.gguf) | IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.IQ3_M.gguf) | IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF/resolve/main/L3_SnowStorm_4x8B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gaianet/Qwen2-72B-Instruct-GGUF | gaianet | 2024-06-08T05:03:17Z | 481 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen2-72B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-07T07:52:20Z | ---
base_model: Qwen/Qwen2-72B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-72B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-72B-Instruct-GGUF
## Original Model
[Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct)
## Run with Gaianet
**Prompt template**
prompt template: `chatml`
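For reference, a minimal sketch of the chatml layout this template refers to (the messages are placeholders):
```python
# Sketch: chatml-formatted prompt string for a single-turn exchange.
system = "You are a helpful assistant."   # placeholder
user = "Explain what a GGUF file is."     # placeholder

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```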
**Context size**
chat_ctx_size: `131072`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
|
mradermacher/multimaster-7b-v5-i1-GGUF | mradermacher | 2024-06-12T08:14:15Z | 481 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ibivibiv/multimaster-7b-v5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T02:13:49Z | ---
base_model: ibivibiv/multimaster-7b-v5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/multimaster-7b-v5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/multimaster-7b-v5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ1_M.gguf) | i1-IQ1_M | 8.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ2_S.gguf) | i1-IQ2_S | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ2_M.gguf) | i1-IQ2_M | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q2_K.gguf) | i1-Q2_K | 13.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ3_S.gguf) | i1-IQ3_S | 15.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.5 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v5-i1-GGUF/resolve/main/multimaster-7b-v5.i1-Q6_K.gguf) | i1-Q6_K | 29.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
danielhanchen/gguf-16062024-bf16 | danielhanchen | 2024-06-15T16:24:56Z | 481 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-15T16:07:52Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
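A quick, hedged way to try the GGUF weights is through llama.cpp's CLI; the `--hf-file` value below is a placeholder, so replace it with the actual `.gguf` file name published in this repository:
```bash
# Sketch only: the GGUF file name is a placeholder, not confirmed by this card.
llama-cli --hf-repo danielhanchen/gguf-16062024-bf16 \
    --hf-file model-bf16.gguf \
    -p "Write a short haiku about llamas."
```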
|
CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF | CHE-72 | 2024-06-22T08:03:06Z | 481 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-22T08:02:54Z | ---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-6b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-6b-chat-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-6b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-6b-chat-q3_k_s.gguf -c 2048
```
|