modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ChrisWilson011016/5CfigS9T6jn6SUFHJYm2J16syW6kqGRZeDigsR5LvGERYEyz_vgg | ChrisWilson011016 | "2024-03-04T18:55:11Z" | 1,150 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-24T15:19:48Z" | Entry not found |
google/codegemma-1.1-2b | google | "2024-06-27T14:10:03Z" | 1,150 | 17 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-30T21:32:55Z" | ---
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# CodeGemma
Model Page
: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://www.kaggle.com/models/google/codegemma/license/consent/verify/huggingface?returnModelRepoId=google/codegemma-1.1-2b)
Authors
: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
| | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion | ✅ | ✅ | |
| Generation from natural language | | ✅ | ✅ |
| Chat | | | ✅ |
| Instruction Following | | | ✅ |
### Sample Usage
#### For Code Completion
Code completion can be used for infilling inside code editors. CodeGemma was trained for this task using the fill-in-the-middle (FIM) objective, where you provide a prefix and a suffix as context for the completion. The following tokens are used to separate the different parts of the input:
- `<|fim_prefix|>` precedes the context before the completion we want to run.
- `<|fim_suffix|>` precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location that will be completed by the model.
- `<|fim_middle|>` is the prompt that invites the model to run the generation.
In addition to these, there's also `<|file_separator|>`, which is used to provide multi-file contexts.
Please make sure not to provide any extra spaces or newlines around the tokens, other than those that would naturally occur in the code fragment you want to complete. Here's an example:
```python
from transformers import GemmaTokenizer, AutoModelForCausalLM
model_id = "google/codegemma-1.1-2b"
tokenizer = GemmaTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = '''\
<|fim_prefix|>import datetime
def calculate_age(birth_year):
"""Calculates a person's age based on their birth year."""
current_year = datetime.date.today().year
<|fim_suffix|>
return age<|fim_middle|>\
'''
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[-1]
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][prompt_len:]))
```
This may return something like the following:
```
age = current_year - birth_year<|file_separator|>test_calculate_age.py
<|fim_suffix|>
assert calculate_age(1990) == 33
assert calculate_age(1980) == 43
assert calculate_age(1970) == 53
assert calculate_age(1960) == 63
assert calculate_age(1950) == 73
```
Note the extra content after the correct completion. The model returns the completion, followed by one of the FIM tokens or the EOS token. You should ignore everything that comes after any of these tokens. A good way to achieve this is by providing a list of terminators to the `generate` function, like this:
```python
FIM_PREFIX = '<|fim_prefix|>'
FIM_SUFFIX = '<|fim_suffix|>'
FIM_MIDDLE = '<|fim_middle|>'
FIM_FILE_SEPARATOR = '<|file_separator|>'
terminators = tokenizer.convert_tokens_to_ids([FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_FILE_SEPARATOR])
terminators += [tokenizer.eos_token_id]
outputs = model.generate(
**inputs,
max_new_tokens=100,
eos_token_id=terminators,
)
```
In this case, generation stops as soon as the first delimiter is found in the response:
```
age = current_year - birth_year<|file_separator|>
```
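For multi-file completion, files are joined with `<|file_separator|>`. Continuing from the snippets above, here is a hypothetical sketch that infers the `<|file_separator|>filename` convention from the sample output; the file names and contents are illustrative, not from the model card:
```python
# Hypothetical two-file prompt; the exact multi-file convention may differ.
context_file = "math_utils.py\ndef add(a, b):\n    return a + b\n"
prompt = (
    context_file
    + "<|file_separator|>main.py\n"
    + "<|fim_prefix|>from math_utils import add\n\nresult = <|fim_suffix|>\nprint(result)<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, eos_token_id=terminators)
```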
#### For Code Generation
```python
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("google/codegemma-1.1-2b")
model = AutoModelForCausalLM.from_pretrained("google/codegemma-1.1-2b")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
### Inputs and Outputs
Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction tuned model variant: natural language text or prompt
Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction tuned model variant: code and natural language
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 to 1000 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code.
### Training Data Processing
The following data pre-processing techniques were applied:
* FIM: Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM (prefix-suffix-middle) and SPM (suffix-prefix-middle) modes. Our FIM settings are an 80% to 90% FIM rate with a 50-50 PSM/SPM split.
* Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing.
* We developed a novel technique for splitting documents into prefix, middle, and suffix so that the suffix starts at a more syntactically natural point rather than at a purely random position.
* Safety: Similarly to Gemma, we deployed rigorous safety filtering including filtering personal data, CSAM filtering and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Information about the hardware and software used to train the models.
### Hardware
CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
## Evaluation Information
Model evaluation metrics and results.
### Evaluation Approach
We evaluate CodeGemma on a variety of academic benchmarks across several domains:
* Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling
* Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust)
* Q&A: BoolQ, PIQA, TriviaQA
* Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande
* Math Reasoning: GSM8K, MATH
### Evaluation Results
#### Coding Benchmarks
Benchmark | [2B](https://huggingface.co/google/codegemma-2b) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b) | [7B](https://huggingface.co/google/codegemma-7b) | [7B-IT](https://huggingface.co/google/codegemma-7b-it) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it)
----------------------|------|----------|------|-------|------------
HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4
MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6
HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4
HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7
BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6
BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7
BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2
BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3
BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4
BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8
BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0
BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3
BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5
BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0
BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2
BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9
BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4
BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6
BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2
BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3
#### Natural Language Benchmarks

## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
* Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach.
* Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details.
## Model Usage & Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Code Gemma models have a wide range of applications, which vary between IT and PT models. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
Code Completion
: PT models can be used to complete code with an IDE extension
Code Generation
: IT model can be used to generate code with or without an IDE extension
Code Conversation
: IT model can power conversation interfaces which discuss code.
Code Education
: IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice.
### Known Limitations
Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs.
### Ethical Considerations & Risks
The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for model details.
### Benefits
At the time of release, this family of models provides high-performance open code-focused large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.
Using the coding benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.
|
netcat420/MFANN3bv0.13.10 | netcat420 | "2024-06-26T23:21:37Z" | 1,150 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:netcat420/MFANN3bv0.13",
"base_model:netcat420/MFANN3bv0.6",
"base_model:liminerity/Phigments12",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-26T22:48:12Z" | ---
base_model:
- netcat420/MFANN3bv0.13
- netcat420/MFANN3bv0.6
- liminerity/Phigments12
library_name: transformers
tags:
- mergekit
- merge
---
# MFANN3bv0.13.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.13](https://huggingface.co/netcat420/MFANN3bv0.13)
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.13
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
shibing624/mengzi-t5-base-chinese-correction | shibing624 | "2024-02-19T08:43:07Z" | 1,149 | 27 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"zh",
"dataset:shibing624/CSC",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-06-17T07:58:45Z" | ---
language:
- zh
tags:
- t5
- pytorch
- zh
license: apache-2.0
datasets:
- shibing624/CSC
library_name: transformers
pipeline_tag: text2text-generation
widget:
- text: "少先队员因该为老人让坐"
---
# T5 for Chinese Spelling Correction Model
A Chinese spelling correction model.
Evaluation of `shibing624/mengzi-t5-base-chinese-correction` on the SIGHAN2015 test data:
- Sentence Level: precision: 0.8321, recall: 0.6390, F1: 0.7229
The training data is the "SIGHAN+Wang271K Chinese correction dataset" provided below; the model reaches near-SOTA performance on the SIGHAN2015 test set.
The model architecture is unchanged; it was simply fine-tuned on the Chinese correction dataset. The evaluated correction quality is very good, and the model shows great potential.
## Usage
This model is open-sourced as part of the Chinese text correction project [pycorrector](https://github.com/shibing624/pycorrector), which supports T5 models. Install it with:
```
pip install -U pycorrector
```
run:
```python
from pycorrector.t5.t5_corrector import T5Corrector
nlp = T5Corrector("shibing624/mengzi-t5-base-chinese-correction").batch_t5_correct
i = "今天新情很好"
print(i, ' => ', nlp([i]))
```
output:
```shell
今天新情很好 => 今天心情很好 [('新', '心', 2, 3)]
```
Model files:
```
mengzi-t5-base-chinese-correction
|-- config.json
|-- pytorch_model.bin
|-- special_tokens_map.json
|-- spiece.model
|-- tokenizer_config.json
`-- tokenizer.json
```
To train a t5-correction model yourself, see [https://github.com/shibing624/pycorrector/tree/master/pycorrector/t5](https://github.com/shibing624/pycorrector/tree/master/pycorrector/t5).
### Training Dataset
#### SIGHAN+Wang271K Chinese Correction Dataset
| Dataset | Corpus | Download Link | Archive Size |
| :------- | :--------- | :---------: | :---------: |
| **`SIGHAN+Wang271K Chinese correction dataset`** | SIGHAN+Wang271K (270k samples) | [Baidu Netdisk (password: 01b9)](https://pan.baidu.com/s/1BV5tr9eONZCI0wERFvr0gQ) | 106M |
| **`Original SIGHAN dataset`** | SIGHAN 13/14/15 | [Official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html) | 339K |
| **`Original Wang271K dataset`** | Wang271K | [Automatic-Corpus-Generation, provided by dimmywang](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml) | 93M |
The SIGHAN+Wang271K Chinese correction dataset uses the following data format:
```json
[
{
"id": "B2-4029-3",
"original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
"wrong_ids": [
5,
31
],
"correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
},
]
```
## Citation
```latex
@software{pycorrector,
author = {Xu Ming},
title = {pycorrector: Text Error Correction Tool},
year = {2021},
url = {https://github.com/shibing624/pycorrector},
}
``` |
roktimsardar123/majicMIX-realistic-7 | roktimsardar123 | "2024-01-22T12:09:25Z" | 1,149 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-21T21:04:34Z" | Entry not found |
GroNLP/wav2vec2-dutch-large-ft-cgn | GroNLP | "2023-09-11T08:55:54Z" | 1,148 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"nl",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-04-08T12:21:08Z" | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Large-ft-CGN
A Dutch Wav2Vec2 model. This model is created by further pre-training the original English [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). Subsequently, the model is fine-tuned on the same Dutch speech using CTC. |
sentence-transformers/stsb-distilbert-base | sentence-transformers | "2024-03-27T12:55:49Z" | 1,147 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/stsb-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/stsb-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-distilbert-base')
model = AutoModel.from_pretrained('sentence-transformers/stsb-distilbert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-distilbert-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
stablediffusionapi/amireal | stablediffusionapi | "2023-04-25T20:27:26Z" | 1,147 | 3 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-04-25T20:26:38Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# amireal API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the API key in the code below and change **model_id** to `"amireal"`.
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/amireal)
Credits: [View credits](https://civitai.com/?query=amireal)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "amireal",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
LiteLLMs/CarbonVillain-en-10.7B-v3-GGUF | LiteLLMs | "2024-01-01T22:21:02Z" | 1,147 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"GGUF",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T21:06:17Z" | ---
language:
- en
license: mit
tags:
- GGUF
quantized_by: andrijdavid
---
# CarbonVillain-en-10.7B-v3-GGUF
- Original model: [CarbonVillain-en-10.7B-v3](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CarbonVillain-en-10.7B-v3](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/CarbonVillain-en-10.7B-v3-GGUF and below it, a specific filename to download, such as: CarbonVillain-en-10.7B-v3-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/CarbonVillain-en-10.7B-v3-GGUF CarbonVillain-en-10.7B-v3-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/CarbonVillain-en-10.7B-v3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/CarbonVillain-en-10.7B-v3-GGUF CarbonVillain-en-10.7B-v3-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m CarbonVillain-en-10.7B-v3-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove the flag if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
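For example, the completion command above then becomes:
```shell
./main -ngl 35 -m CarbonVillain-en-10.7B-v3-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```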
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./CarbonVillain-en-10.7B-v3-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./CarbonVillain-en-10.7B-v3-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CarbonVillain-en-10.7B-v3
# CarbonVillain
**This is a model created without learning to oppose indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- kyujinpy/Sakura-SOLAR-Instruct
- jeonsworld/CarbonVillain-en-10.7B-v1
- method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Evaluation
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3)
| Metric | Value |
| - | ----- |
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
<!-- original-model-card end --> |
Locutusque/TinyMistral-248M-v2-Instruct | Locutusque | "2024-02-03T21:09:56Z" | 1,147 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:Locutusque/TinyMistral-248M-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-19T03:17:10Z" | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
inference:
parameters:
do_sample: true
temperature: 0.1
top_p: 0.14
top_k: 12
max_new_tokens: 250
repetition_penalty: 1.1
widget:
- text: "<|im_start|>user\nHow do I incorporate visual elements into my writing?<|im_end|>\n<|im_start|>assistant\n"
base_model: Locutusque/TinyMistral-248M-v2
---
# Description
Fine-tuned Locutusque/TinyMistral-248M-v2 on the HuggingFaceH4/ultrachat_200k dataset.
# Recommended inference parameters
```
do_sample: true
temperature: 0.1
top_p: 0.14
top_k: 12
repetition_penalty: 1.1
```
# Recommended prompt template
```
<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>assistant\n{assistant message}<|endoftext|>
```
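A minimal generation sketch combining the template and parameters above (the example question comes from this card's widget):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt follows the recommended template above.
prompt = "<|im_start|>user\nHow do I incorporate visual elements into my writing?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.1,
    top_p=0.14,
    top_k=12,
    repetition_penalty=1.1,
    max_new_tokens=250,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```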
# Evaluation
This model will be submitted to the Open LLM Leaderboard. |
mrsinghania/asr-question-detection | mrsinghania | "2021-09-21T06:44:23Z" | 1,146 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | <i>Question vs Statement classifier</i> trained on more than 7k samples which were coming from spoken data in an interview setting
<b>Code for use with Transformers:</b>
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mrsinghania/asr-question-detection")
model = AutoModelForSequenceClassification.from_pretrained("mrsinghania/asr-question-detection")
```
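A minimal classification sketch continuing from the snippet above; the label mapping is an assumption, so check `model.config.id2label` for the actual names:
```python
import torch

# Classify a transcript line as question vs statement (continuing from above).
inputs = tokenizer("so can you walk me through your last project", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# id2label is model-specific; inspect it rather than assuming an order.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```
|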
yangheng/deberta-v3-large-absa-v1.1 | yangheng | "2024-05-01T14:56:43Z" | 1,146 | 16 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"aspect-based-sentiment-analysis",
"PyABSA",
"en",
"dataset:laptop14",
"dataset:restaurant14",
"dataset:restaurant16",
"dataset:ACL-Twitter",
"dataset:MAMS",
"dataset:Television",
"dataset:TShirt",
"dataset:Yelp",
"arxiv:2110.08604",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-19T00:32:37Z" | ---
language:
- en
tags:
- aspect-based-sentiment-analysis
- PyABSA
license: mit
datasets:
- laptop14
- restaurant14
- restaurant16
- ACL-Twitter
- MAMS
- Television
- TShirt
- Yelp
metrics:
- accuracy
- macro-f1
widget:
- text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
---
# Note
This model is trained with 30k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets. (Except for the Rest15 dataset!)
# DeBERTa for aspect-based sentiment analysis
The `deberta-v3-large-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
## Training Model
This model is trained based on the FAST-LCF-BERT model with `microsoft/deberta-v3-large`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
## Usage
```python3
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")
```
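A minimal inference sketch continuing from the snippet above, following the widget's `[CLS] {sentence} [SEP] {aspect} [SEP]` input format; the sentiment label mapping is an assumption, so check `model.config.id2label`:
```python
import torch

# Sentence plus aspect ("manager"), formatted as in the card's widget example.
text = "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# id2label is model-specific; inspect it rather than assuming an order.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```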
## Example in PyABSA
An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT in PyABSA.
## Datasets
This model is fine-tuned with 180k examples for the ABSA task (including augmented data). Training dataset files:
```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
```
If you use this model in your research, please cite our paper:
```
@article{YangZMT21,
author = {Heng Yang and
Biqing Zeng and
Mayi Xu and
Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
eprinttype = {arXiv},
eprint = {2110.08604},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
bitfount/RETFound_MAE | bitfount | "2023-10-05T11:14:46Z" | 1,146 | 3 | timm | [
"timm",
"pytorch",
"image-classification",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | "2023-10-04T15:17:31Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: cc-by-nc-4.0
---
# Model card for RETFound_MAE
A copy of [open-eye/RETFound_MAE](https://huggingface.co/open-eye/RETFound_MAE) integrated with `timm` and a copy of the CFP model parameters. |
santiagomed/candle-moondream | santiagomed | "2024-04-02T19:15:36Z" | 1,146 | 2 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2024-04-01T00:24:03Z" | ---
license: apache-2.0
---
|
MCZK/Qwen2-1.5B-Instruct-GGUF | MCZK | "2024-06-08T02:08:28Z" | 1,146 | 0 | null | [
"gguf",
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-08T00:56:20Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
This is Qwen's [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) converted to GGUF format.
The iMatrix has been applied to the K-quantized models as well.
The iMatrix calibration text is TFMC's [c4_en_ja_imatrix.txt](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
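A hypothetical llama.cpp invocation; the quantized file name below is illustrative, so substitute an actual file from this repository:
```shell
./main -m Qwen2-1.5B-Instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 -n -1 -i -ins
```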
|
John6666/3x3mix-xl-typef-v1-sdxl | John6666 | "2024-06-24T12:52:35Z" | 1,146 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-24T12:46:41Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/535241/3x3mixxltypef?modelVersionId=594969).
|
athirdpath/CleverMommy-mix-20b | athirdpath | "2023-11-26T12:13:39Z" | 1,145 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-26T11:40:40Z" | ---
license: cc-by-nc-4.0
---
An extended part of my effort to create Eileithyia-20B. This model is made by following the recipe below, inverting it, then SLERPing the models back together at 0.5, hopefully fusing the models into one block for use with Harmonia.
```yaml
slices:
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [0, 16]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [8, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [17, 32]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Thanks to Undi95 for pioneering the recipe. |
UCSC-VLAA/ViT-L-16-HTxt-Recap-CLIP | UCSC-VLAA | "2024-06-24T17:19:37Z" | 1,145 | 12 | open_clip | [
"open_clip",
"clip",
"zero-shot-image-classification",
"dataset:UCSC-VLAA/Recap-DataComp-1B",
"arxiv:2406.08478",
"license:cc-by-4.0",
"region:us"
] | zero-shot-image-classification | "2024-06-13T07:53:11Z" | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: cc-by-4.0
datasets:
- UCSC-VLAA/Recap-DataComp-1B
---
# Model card for Recap-CLIP-ViT-L-16-Txt-Huge-2.56B
A CLIPA model trained on Recap-DataComp-1B...
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/UCSC-VLAA/Recap-DataComp-1B
- **Dataset:** https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B
- **Papers:**
- What If We Recaption Billions of Web Images with LLaMA-3?: https://arxiv.org/abs/2406.08478
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-L-16-HTxt-Recap-CLIP')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-L-16-HTxt-Recap-CLIP')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs) # prints: [[0., 0., 0., 1.0]]
```
## Bias, Risks, and Limitations
This model is trained on an image-text dataset with LLaVA-1.5-LLaMA3-8B generated captions, which may still contain biases and inaccuracies inherent in the original web-crawled data.
Users should be aware of the biases, risks, and limitations when using this model. Check the [dataset card](https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B) page for more details.
## Citation
```bibtex
@article{li2024recaption,
title={What If We Recaption Billions of Web Images with LLaMA-3?},
author={Xianhang Li and Haoqin Tu and Mude Hui and Zeyu Wang and Bingchen Zhao and Junfei Xiao and Sucheng Ren and Jieru Mei and Qing Liu and Huangjie Zheng and Yuyin Zhou and Cihang Xie},
journal={arXiv preprint arXiv:2406.08478},
year={2024}
}
```
## Model Contact
[email protected]
|
abacusai/Liberated-Qwen1.5-7B | abacusai | "2024-03-14T00:03:49Z" | 1,144 | 12 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:abacusai/SystemChat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-13T23:55:45Z" | ---
language:
- en
license: other
datasets:
- teknium/OpenHermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- abacusai/SystemChat
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
---
<a href="https://abacus.ai"><img src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" /></a>
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xCWGByXr8YNwGxKVh_x9H.png" width="600" />
# Liberated-Qwen1.5-7B
Brought to you by [AbacusAI](https://abacus.ai) and Eric Hartford
This model is based on Qwen/Qwen1.5-7B and subject to the [tongyi-qianwen](https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE) license.
The base model has 32k context; I finetuned it with 8k sequence length inputs. YMMV.
Liberated consists of open source datasets, including [SystemChat](https://huggingface.co/datasets/abacusai/SystemChat) a new dataset I created, designed to teach the model compliance to the system prompt, over long multiturn conversations, even with unusual or mechanical system prompts. These are tasks that Open Source Models have been lacking in thus far. The dataset is 6000 synthetic conversations generated with Mistral-Medium and [Dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
There are no guardrails or censorship added to the dataset. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Training
It took 3 days to train 3 epochs on 8x H100s using qLoRA, deepspeed zero-2, and Axolotl, with a learning rate of 2e-4.
Liberated was trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), using this [config](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B/blob/main/configs/Liberated-Qwen-1.5-72b.qlora.yml)
## Prompt format
This model uses ChatML prompt format.
```
<|im_start|>system
You are Liberated, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.<|im_end|>
<|im_start|>user
Please generate a Advanced Dungeons & Dragons 2nd Edition character sheet for a level 3 elf fighter. Make up a name and background and visual description for him.<|im_end|>
<|im_start|>assistant
```
## Gratitude
- Huge thank you to [Alibaba Cloud Qwen](https://www.alibabacloud.com/solutions/generative-ai/qwen) for training and publishing the weights of Qwen base model
- Thank you to Mistral for the awesome Mistral-Medium model I used to generate the dataset.
- HUGE Thank you to the dataset authors: @teknium, [@m-a-p](https://m-a-p.ai) and all the people who built the datasets these composites came from.
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
## Evals
## Future Plans
This model will be released across the whole Qwen1.5 series.
Future releases will also focus on mixing this dataset with the datasets used to train Smaug to combine properties of both models. |
Walmart-the-bag/Quintellect-10.7B | Walmart-the-bag | "2024-03-22T14:19:02Z" | 1,144 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"en",
"dataset:sahil2801/CodeAlpaca-20k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T04:25:56Z" | ---
license: apache-2.0
tags:
- code
datasets:
- sahil2801/CodeAlpaca-20k
language:
- en
inference: false
---
# Quintellect-10.7B

The Quintellect-10.7B AI model was created to help achieve greater accessibility to coding knowledge and expertise, empowering the community to overcome technological challenges and foster innovation. I developed the model because I believe in democratizing access to coding skills, which are essential for bridging the digital divide and unleashing creativity worldwide.
This model excels in coding tasks, proficiently handling languages like Python and JavaScript. Whether you need assistance with standard programming tasks or require code created from scratch, it's a good model to use. It can guide you through challenging coding problems, offer solutions, or even generate code tailored to your specifications.
The prompt format for the Quintellect-10.7B model is Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
``` |
nvidia/OpenMath-CodeLlama-7b-Python-hf | nvidia | "2024-02-16T02:09:12Z" | 1,143 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"code",
"math",
"en",
"dataset:nvidia/OpenMathInstruct-1",
"arxiv:2402.10176",
"base_model:codellama/CodeLlama-7b-Python-hf",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-09T05:53:23Z" | ---
license: llama2
base_model:
- codellama/CodeLlama-7b-Python-hf
datasets:
- nvidia/OpenMathInstruct-1
language:
- en
tags:
- nvidia
- code
- math
---
# OpenMath-CodeLlama-7b-Python-hf
OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks
executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1),
a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed
[Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model.
<table border="1">
<tr>
<td></td>
<td colspan="2" style="text-align: center;">greedy</td>
<td colspan="2" style="text-align: center;">majority@50</td>
</tr>
<tr>
<td style="text-align: center;">model</td>
<td style="text-align: center;">GSM8K</td>
<td style="text-align: center;">MATH</td>
<td style="text-align: center;">GSM8K</td>
<td style="text-align: center;">MATH</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td>
<td style="text-align: center;">75.9</td>
<td style="text-align: center;">43.6</td>
<td style="text-align: center;">84.8</td>
<td style="text-align: center;">55.6</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td>
<td style="text-align: center;">80.2</td>
<td style="text-align: center;">44.5</td>
<td style="text-align: center;">86.9</td>
<td style="text-align: center;">57.2</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td>
<td style="text-align: center;">78.8</td>
<td style="text-align: center;">45.5</td>
<td style="text-align: center;">86.8</td>
<td style="text-align: center;">57.6</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td>
<td style="text-align: center;">80.7</td>
<td style="text-align: center;">48.3</td>
<td style="text-align: center;">88.0</td>
<td style="text-align: center;">60.2</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td>
<td style="text-align: center;"><b>84.7</b></td>
<td style="text-align: center;">46.3</td>
<td style="text-align: center;">90.1</td>
<td style="text-align: center;">58.3</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td>
<td style="text-align: center;">84.6</td>
<td style="text-align: center;"><b>50.7</b></td>
<td style="text-align: center;"><b>90.8</b></td>
<td style="text-align: center;"><b>60.4</b></td>
</tr>
</table>
The pipeline we used to produce these models is fully open-sourced!
- [Code](https://github.com/Kipok/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1)
See our [paper](https://arxiv.org/abs/2402.10176) for more details!
# How to use the models?
Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands!
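If you just want to sample from the model with plain Hugging Face Transformers, a minimal sketch looks like the following (our illustration, not the official NeMo-Skills pipeline; note that the model interleaves reasoning with Python code blocks that are meant to be executed by an interpreter, and this sketch skips that execution loop):
```python
# A plain-Transformers sketch (ours); the official inference pipeline lives in NeMo-Skills.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nvidia/OpenMath-CodeLlama-7b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# A sample math question; the model may answer with a mix of text and code.
question = "What is the minimum value of x^2 + 6x + 5?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```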
# Reproducing our results
We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results.
# Improving other models
To improve other models or to learn more about our code, read through the docs below.
- [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills)
- [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md)
- [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md)
- [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md)
In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/),
an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere.
It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models,
offering enterprises an easy, cost-effective, and fast way to adopt generative AI.
# Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath,
title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset},
author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman},
year = {2024},
journal = {arXiv preprint arXiv:2402.10176}
}
```
# License
The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/) |
YeungNLP/firefly-qwen1.5-en-7b | YeungNLP | "2024-03-03T08:17:30Z" | 1,143 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2305.18290",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-29T02:57:13Z" | ---
library_name: transformers
license: apache-2.0
basemodel: Qwen/Qwen1.5-7B
---
## Model Card for Firefly-Qwen1.5
[firefly-qwen1.5-en-7b](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b) and [firefly-qwen1.5-en-7b-dpo-v0.1](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) are trained based on [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) to act as a helpful and harmless AI assistant.
We use [Firefly](https://github.com/yangjianxin1/Firefly) to train our models on **a single V100 GPU** with QLoRA.
firefly-qwen1.5-en-7b is fine-tuned based on Qwen1.5-7B with English instruction data, and firefly-qwen1.5-en-7b-dpo-v0.1 is trained with [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) based on firefly-qwen1.5-en-7b.
Our models outperform the official [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat), [Gemma-7B-it](https://huggingface.co/google/gemma-7b-it), and [Zephyr-7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) models on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
<img src="pics/open_llm.png" width="800">
Although our models are trained on English data, you can also try chatting with them in Chinese, since Qwen1.5 itself is strong at Chinese; however, we have not yet evaluated their performance in Chinese.
We advise you to install `transformers>=4.37.0`.
## Performance
We evaluate our models on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), where they achieve good performance.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-----------------------------------|--------|--------|-----------|--------|------------|------------|--------|
| firefly-gemma-7b | 62.93 | 62.12 | 79.77 | 61.57 | 49.41 | 75.45 | 49.28 |
| **firefly-qwen1.5-en-7b-dpo-v0.1** | 62.36 | 54.35 | 76.04 | 61.21 | 56.4 | 72.06 | 54.13 |
| zephyr-7b-beta | 61.95 | 62.03 | 84.36 | 61.07 | 57.45 | 77.74 | 29.04 |
| **firefly-qwen1.5-en-7b** | 61.44 | 53.41 | 75.51 | 61.67 | 51.96 | 70.72 | 55.34 |
| vicuna-13b-v1.5 | 55.41 | 57.08 | 81.24 | 56.67 | 51.51 | 74.66 | 11.3 |
| Xwin-LM-13B-V0.1 | 55.29 | 62.54 | 82.8 | 56.53 | 45.96 | 74.27 | 9.63 |
| Qwen1.5-7B-Chat | 55.15 | 55.89 | 78.56 | 61.65 | 53.54 | 67.72 | 13.57 |
| gemma-7b-it | 53.56 | 51.45 | 71.96 | 53.52 | 47.29 | 67.96 | 29.19 |
## Usage
The chat template of our chat models is the same as that of the official Qwen1.5-7B-Chat:
```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
hello, who are you?<|im_end|>
<|im_start|>assistant
I am an AI program developed by Firefly<|im_end|>
```
You can use the chat script in [Firefly](https://github.com/yangjianxin1/Firefly/blob/master/script/chat/chat.py) to run inference.
You can also use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name_or_path = "YeungNLP/firefly-qwen1.5-en-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
prompt = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions. "
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
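# Tokenize the rendered conversation and move the tensors to the GPU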
model_inputs = tokenizer([text], return_tensors="pt").to('cuda')
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1500,
top_p = 0.9,
temperature = 0.35,
repetition_penalty = 1.0,
eos_token_id=tokenizer.encode('<|im_end|>', add_special_tokens=False)
)
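# Drop the prompt tokens so that only the newly generated reply is decoded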
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Training Details
In both the SFT and DPO stages, **we use only a single V100 GPU** with QLoRA, and we train our models with [Firefly](https://github.com/yangjianxin1/Firefly).
### Training Setting
The following hyperparameters were used during SFT:
- num_epochs: 1
- learning_rate: 2e-4
- total_train_batch_size: 32
- max_seq_length: 2048
- optimizer: paged_adamw_32bit
- lr_scheduler_type: constant_with_warmup
- warmup_steps: 700
- lora_rank: 64
- lora_alpha: 16
- lora_dropout: 0.05
- gradient_checkpointing: true
- fp16: true
The following hyperparameters were used during DPO:
- num_epochs: 1
- learning_rate: 2e-4
- total_train_batch_size: 32
- max_seq_length: 1600
- max_prompt_length: 500
- optimizer: paged_adamw_32bit
- lr_scheduler_type: constant_with_warmup
- warmup_steps: 200
- lora_rank: 64
- lora_alpha: 16
- lora_dropout: 0.05
- gradient_checkpointing: true
- fp16: true
### Training metrics
Training Rewards/margins in DPO:
<img src="pics/margins.png" width="600">
Training Rewards/accuracies in DPO:
<img src="pics/accuracies.png" width="500">
Training loss in DPO:
<img src="pics/loss.png" width="500">
The table below shows the full set of DPO training metrics:
| Epoch | Step | Loss | Rewards/accuracies | Rewards/margins | Rewards/chosen | Rewards/rejected | Logits/chosen| Logits/rejected | Logps/chosen| Logps/rejected|
|---|---|---|---|---|---|---|---|---|---|---|
|0.05|100|0.6231|0.6587|0.3179|0.0404|-0.2774|1.1694|1.2377|-284.5586|-255.4863|
|0.1|200|0.5945|0.6894|0.5988|-0.1704|-0.7693|1.012|1.0283|-284.3049|-268.1887|
|0.16|300|0.5754|0.6981|0.8314|-0.282|-1.1133|0.8912|0.8956|-283.6926|-270.3117|
|0.21|400|0.5702|0.7194|0.9369|-0.1944|-1.1313|0.7255|0.7557|-291.2833|-273.9706|
|0.26|500|0.5913|0.695|0.8784|-0.4524|-1.3309|0.5491|0.5535|-289.5705|-271.754|
|0.31|600|0.5743|0.6994|1.0192|-0.4505|-1.4698|0.6446|0.6399|-296.5292|-277.824|
|0.37|700|0.5876|0.7219|1.0471|-0.6998|-1.747|0.4955|0.4329|-303.7684|-289.0117|
|0.42|800|0.5831|0.715|1.0485|-0.8185|-1.8671|0.5589|0.4804|-295.6313|-288.0656|
|0.47|900|0.5674|0.7119|1.1854|-1.2085|-2.3939|0.3467|0.2249|-302.3643|-286.2816|
|0.52|1000|0.5794|0.7138|1.1458|-0.8423|-1.9881|0.5116|0.4248|-299.3136|-287.3934|
|0.58|1100|0.5718|0.7194|1.2897|-1.4944|-2.7841|0.6392|0.5739|-316.6829|-294.1148|
|0.63|1200|0.5718|0.7275|1.2459|-1.7543|-3.0002|0.4999|0.4065|-316.7873|-297.8514|
|0.68|1300|0.5789|0.72|1.3379|-1.8485|-3.1864|0.4289|0.3172|-314.8326|-296.8319|
|0.73|1400|0.5462|0.7425|1.4074|-1.9865|-3.3939|0.3645|0.2333|-309.4503|-294.3931|
|0.79|1500|0.5829|0.7156|1.2582|-2.1183|-3.3766|0.4193|0.2796|-307.5281|-292.0817|
|0.84|1600|0.5575|0.7375|1.471|-2.1429|-3.6139|0.6547|0.5152|-310.9912|-298.899|
|0.89|1700|0.5638|0.745|1.5433|-2.991|-4.5343|0.7336|0.6782|-328.2657|-307.5182|
|0.94|1800|0.5559|0.7181|1.4484|-2.8818|-4.3302|0.7997|0.8327|-316.2716|-295.1836|
|0.99|1900|0.5627|0.7387|1.5378|-2.7941|-4.332|0.8573|0.858|-324.9405|-310.1192| |
fnlp/AnyGPT-chat | fnlp | "2024-06-05T15:27:29Z" | 1,143 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:fnlp/AnyInstruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-23T14:50:34Z" | ---
license: apache-2.0
datasets:
- fnlp/AnyInstruct
language:
- en
---
# Chat model for the paper "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling"
## Introduction
We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. The [base model](https://huggingface.co/fnlp/AnyGPT-base) aligns the four modalities, allowing for intermodal conversions between different modalities and text. Furthermore, we constructed the [AnyInstruct](https://huggingface.co/datasets/fnlp/AnyInstruct) dataset based on various generative models, which contains instructions for arbitrary modal interconversion. Trained on this dataset, our [chat model](https://huggingface.co/fnlp/AnyGPT-chat) can engage in free multimodal conversations, where multimodal data can be inserted at will.
AnyGPT proposes a generative training scheme that converts all modal data into a unified discrete representation, using the Next Token Prediction task for unified training on a Large Language Model (LLM). From the perspective of 'compression is intelligence': when the quality of the tokenizer is high enough, and the perplexity (PPL) of the LLM is low enough, it is possible to compress the vast amount of multimodal data on the internet into the same model, thereby giving rise to capabilities not present in a purely text-based LLM.
Demos are shown on the [project page](https://junzhan2000.github.io/AnyGPT.github.io).
## Example Demonstrations
[](https://www.youtube.com/watch?v=oW3E3pIsaRg)
## Inference
### Installation
```bash
git clone https://github.com/OpenMOSS/AnyGPT.git
cd AnyGPT
conda create --name AnyGPT python=3.9
conda activate AnyGPT
pip install -r requirements.txt
```
### Model Weights
* Check the AnyGPT-base weights in [fnlp/AnyGPT-base](https://huggingface.co/fnlp/AnyGPT-base)
* Check the AnyGPT-chat weights in [fnlp/AnyGPT-chat](https://huggingface.co/fnlp/AnyGPT-chat)
* Check the SpeechTokenizer and Soundstorm weights in [fnlp/AnyGPT-speech-modules](https://huggingface.co/fnlp/AnyGPT-speech-modules)
* Check the SEED tokenizer weights in [AILab-CVC/seed-tokenizer-2](https://huggingface.co/AILab-CVC/seed-tokenizer-2)
The SpeechTokenizer is used for tokenizing and reconstructing speech, Soundstorm is responsible for completing paralinguistic information, and the SEED tokenizer is used for tokenizing images.
The weights of unCLIP SD-UNet, which is used to reconstruct images, and of Encodec-32k, which is used to tokenize and reconstruct music, will be downloaded automatically.
### Base model CLI Inference
```bash
python anygpt/src/infer/cli_infer_base_model.py \
--model-name-or-path "path/to/AnyGPT-7B-base" \
--image-tokenizer-path models/seed-tokenizer-2/seed_quantizer.pt \
--speech-tokenizer-path "path/to/model" \
--speech-tokenizer-config "path/to/config" \
--soundstorm-path "path/to/model" \
--output-dir "infer_output/base"
```
For example:
```bash
python anygpt/src/infer/cli_infer_base_model.py \
--model-name-or-path models/anygpt/base \
--image-tokenizer-path models/seed-tokenizer-2/seed_quantizer.pt \
--speech-tokenizer-path models/speechtokenizer/ckpt.dev \
--speech-tokenizer-config models/speechtokenizer/config.json \
--soundstorm-path models/soundstorm/speechtokenizer_soundstorm_mls.pt \
--output-dir "infer_output/base"
```
#### Interaction
The base model can perform various tasks, including text-to-image generation, image captioning, automatic speech recognition (ASR), zero-shot text-to-speech (TTS), text-to-music generation, and music captioning.
We can perform inference following a specific instruction format; a small helper for assembling these strings is sketched after the list.
* Text-to-Image
* ```text|image|{caption}```
* example:
```text|image|A bustling medieval market scene with vendors selling exotic goods under colorful tents```
* Image Caption
* ```image|text|{caption}```
* example:
```image|text|static/infer/image/cat.jpg```
* TTS(random voice)
* ```text|speech|{speech content}```
* example:
```text|speech|I could be bounded in a nutshell and count myself a king of infinite space.```
* Zero-shot TTS
* ```text|speech|{speech content}|{voice prompt}```
* example:
```text|speech|I could be bounded in a nutshell and count myself a king of infinite space.|static/infer/speech/voice_prompt1.wav/voice_prompt3.wav```
* ASR
* ```speech|text|{speech file path}```
* example: ```speech|text|AnyGPT/static/infer/speech/voice_prompt2.wav```
* Text-to-Music
* ```text|music|{caption}```
* example:
```text|music|features an indie rock sound with distinct elements that evoke a dreamy, soothing atmosphere```
* Music Caption
* ```music|text|{music file path}```
* example: ```music|text|static/infer/music/features an indie rock sound with distinct element.wav```
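Since these instructions are plain `|`-separated strings, they are easy to build programmatically. Here is a small helper (our sketch, not part of the AnyGPT codebase) that assembles the `source|target|content[|extra]` strings documented above:
```python
from typing import Optional

def build_instruction(source: str, target: str, content: str,
                      extra: Optional[str] = None) -> str:
    """Assemble a `source|target|content[|extra]` instruction string.

    This helper is our own illustration, not part of the AnyGPT repo.
    `extra` is only used by tasks like zero-shot TTS, where a voice
    prompt path is appended as a fourth field.
    """
    parts = [source, target, content]
    if extra is not None:
        parts.append(extra)
    return "|".join(parts)

print(build_instruction("text", "image", "A bustling medieval market scene"))
# -> text|image|A bustling medieval market scene
print(build_instruction("speech", "text", "static/infer/speech/voice_prompt2.wav"))
# -> speech|text|static/infer/speech/voice_prompt2.wav
```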
**Notes**
For different tasks, we used different language model decoding strategies. The decoding configuration files for image, speech, and music generation are located in ```config/image_generate_config.json```, ```config/speech_generate_config.json```, and ```config/music_generate_config.json```, respectively. The decoding configuration files for other modalities to text are in ```config/text_generate_config.json```. You can directly modify or add parameters to change the decoding strategy.
Due to limitations in data and training resources, the model's generation may still be unstable. You can generate multiple times or try different decoding strategies.
The speech and music responses will be saved to `.wav` files, and the image response will be saved to a `.jpg` file. The filename will be a concatenation of the prompt and the timestamp. The paths to these files will be indicated in the response.
### Training
#### Pretraining
* Install dependencies
``` bash
cd FastChat
pip3 install -e ".[train]"
```
* Run:
```
srun --partition=llm_h --job-name=pretrain --gres=gpu:8 --quotatype=spot --ntasks=1 --ntasks-per-node=1 --cpus-per-task 100 --kill-on-bad-exit=1 bash scripts/stage1_pretrain.sh
```
We have provided some sample data in the "data" folder. To download the complete dataset, please refer to the following:
* Image data: https://huggingface.co/datasets/zhanjun/AnyGPT-data-image
* The two datasets in the t2i folder are high-quality image datasets, used for fine-tuning text-to-image generation.
* Speech data: https://huggingface.co/datasets/zhanjun/AnyGPT-data-speech
* Music data: None
* Instruction data: https://huggingface.co/datasets/zhanjun/Anygpt_data_instruction
These data are preprocessed with the multimodal tokenizers.
## Acknowledgements
- [SpeechGPT](https://github.com/0nutation/SpeechGPT/tree/main/speechgpt), [Vicuna](https://github.com/lm-sys/FastChat): The codebase we built upon.
- We thank the great work from [SpeechTokenizer](https://github.com/ZhangXInFD/SpeechTokenizer), [soundstorm-speechtokenizer](https://github.com/ZhangXInFD/soundstorm-speechtokenizer), and [SEED-tokenizer](https://github.com/AILab-CVC/SEED).
## License
`AnyGPT` is released under the original [License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) of [LLaMA2](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).
## Citation
If you find AnyGPT and AnyInstruct useful in your research or applications, please kindly cite:
```
@article{zhan2024anygpt,
title={AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling},
author={Zhan, Jun and Dai, Junqi and Ye, Jiasheng and Zhou, Yunhua and Zhang, Dong and Liu, Zhigeng and Zhang, Xin and Yuan, Ruibin and Zhang, Ge and Li, Linyang and others},
journal={arXiv preprint arXiv:2402.12226},
year={2024}
}
``` |
Antraxas/test1 | Antraxas | "2023-09-02T09:25:04Z" | 1,141 | 0 | diffusers | [
"diffusers",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-02-06T18:29:48Z" | ---
license: openrail
---
|
lemon-mint/gemma-ko-7b-instruct-v0.71 | lemon-mint | "2024-04-09T02:46:25Z" | 1,141 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"base_model:google/gemma-1.1-7b-it",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-09T02:37:43Z" | ---
language:
- ko
- en
license: gemma
library_name: transformers
tags:
- korean
- gemma
- pytorch
base_model: google/gemma-1.1-7b-it
pipeline_tag: text-generation
---

# Gemma Ko 7B Instruct v0.71
- Eval Loss: `1.51977`
- Train Loss: `0.48541`
- lr: `5e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
## Model Details
### Model Description
The Gemma Ko 7B Instruct v0.71 model is designed for generating human-like text in the Korean language.
It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation.
This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)
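A minimal usage sketch (ours; it assumes the tokenizer ships the standard Gemma chat template):
```python
# A minimal generation sketch (ours), assuming the standard Gemma chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon-mint/gemma-ko-7b-instruct-v0.71"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```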
## Limitations and Ethical Considerations
As Gemma Ko 7B was trained on extensive web data, biases present in the training data may be reflected in the model's outputs. It may also generate sentences containing errors or incorrect information. Therefore, rather than blindly trusting the model's output, treat it with caution. |
McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse | McGill-NLP | "2024-04-30T03:42:49Z" | 1,141 | 1 | peft | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | sentence-similarity | "2024-04-30T02:45:32Z" | ---
library_name: peft
license: mit
language:
- en
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Meta-Llama-3-unsupervised
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.70149253731343
- type: ap
value: 40.824269118508354
- type: f1
value: 70.55918234479084
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 80.6812
- type: ap
value: 76.63327889516552
- type: f1
value: 80.5276613226382
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.002
- type: f1
value: 39.67277678335084
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 42.548
- type: map_at_100
value: 43.492999999999995
- type: map_at_1000
value: 43.5
- type: map_at_3
value: 37.376
- type: map_at_5
value: 40.359
- type: mrr_at_1
value: 27.24
- type: mrr_at_10
value: 42.945
- type: mrr_at_100
value: 43.89
- type: mrr_at_1000
value: 43.897000000000006
- type: mrr_at_3
value: 37.779
- type: mrr_at_5
value: 40.755
- type: ndcg_at_1
value: 26.173999999999996
- type: ndcg_at_10
value: 51.731
- type: ndcg_at_100
value: 55.684999999999995
- type: ndcg_at_1000
value: 55.86
- type: ndcg_at_3
value: 41.122
- type: ndcg_at_5
value: 46.491
- type: precision_at_1
value: 26.173999999999996
- type: precision_at_10
value: 8.108
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 17.330000000000002
- type: precision_at_5
value: 13.001
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 81.081
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 51.991
- type: recall_at_5
value: 65.007
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 49.215974795578546
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.71067780141813
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.15639347603191
- type: mrr
value: 71.4509959108297
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.67361609277127
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.76623376623375
- type: f1
value: 84.70041172334481
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.39251163108548
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.30501371807517
- task:
type: Retrieval
dataset:
type: cqadupstack/android
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.409
- type: map_at_10
value: 36.925000000000004
- type: map_at_100
value: 38.651
- type: map_at_1000
value: 38.798
- type: map_at_3
value: 33.437
- type: map_at_5
value: 35.506
- type: mrr_at_1
value: 33.763
- type: mrr_at_10
value: 43.442
- type: mrr_at_100
value: 44.339
- type: mrr_at_1000
value: 44.391000000000005
- type: mrr_at_3
value: 40.749
- type: mrr_at_5
value: 42.408
- type: ndcg_at_1
value: 33.763
- type: ndcg_at_10
value: 43.486999999999995
- type: ndcg_at_100
value: 49.71
- type: ndcg_at_1000
value: 51.81
- type: ndcg_at_3
value: 38.586
- type: ndcg_at_5
value: 41.074
- type: precision_at_1
value: 33.763
- type: precision_at_10
value: 8.798
- type: precision_at_100
value: 1.544
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 19.361
- type: precision_at_5
value: 14.335
- type: recall_at_1
value: 26.409
- type: recall_at_10
value: 55.352999999999994
- type: recall_at_100
value: 81.66799999999999
- type: recall_at_1000
value: 95.376
- type: recall_at_3
value: 40.304
- type: recall_at_5
value: 47.782000000000004
- task:
type: Retrieval
dataset:
type: cqadupstack/english
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.6
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.628
- type: map_at_1000
value: 37.767
- type: map_at_3
value: 33.553
- type: map_at_5
value: 35.118
- type: mrr_at_1
value: 34.394999999999996
- type: mrr_at_10
value: 42.586
- type: mrr_at_100
value: 43.251
- type: mrr_at_1000
value: 43.303000000000004
- type: mrr_at_3
value: 40.297
- type: mrr_at_5
value: 41.638
- type: ndcg_at_1
value: 34.394999999999996
- type: ndcg_at_10
value: 42.05
- type: ndcg_at_100
value: 46.371
- type: ndcg_at_1000
value: 48.76
- type: ndcg_at_3
value: 37.936
- type: ndcg_at_5
value: 39.827
- type: precision_at_1
value: 34.394999999999996
- type: precision_at_10
value: 8.268
- type: precision_at_100
value: 1.355
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 18.726000000000003
- type: precision_at_5
value: 13.541
- type: recall_at_1
value: 26.6
- type: recall_at_10
value: 51.529
- type: recall_at_100
value: 70.038
- type: recall_at_1000
value: 85.67
- type: recall_at_3
value: 39.448
- type: recall_at_5
value: 44.6
- task:
type: Retrieval
dataset:
type: cqadupstack/gaming
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.863000000000003
- type: map_at_10
value: 43.733
- type: map_at_100
value: 45.005
- type: map_at_1000
value: 45.074
- type: map_at_3
value: 40.593
- type: map_at_5
value: 42.272
- type: mrr_at_1
value: 37.555
- type: mrr_at_10
value: 47.532999999999994
- type: mrr_at_100
value: 48.431999999999995
- type: mrr_at_1000
value: 48.47
- type: mrr_at_3
value: 44.901
- type: mrr_at_5
value: 46.274
- type: ndcg_at_1
value: 37.555
- type: ndcg_at_10
value: 49.789
- type: ndcg_at_100
value: 55.059999999999995
- type: ndcg_at_1000
value: 56.434
- type: ndcg_at_3
value: 44.238
- type: ndcg_at_5
value: 46.698
- type: precision_at_1
value: 37.555
- type: precision_at_10
value: 8.257
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 20.23
- type: precision_at_5
value: 13.868
- type: recall_at_1
value: 31.863000000000003
- type: recall_at_10
value: 64.188
- type: recall_at_100
value: 87.02600000000001
- type: recall_at_1000
value: 96.761
- type: recall_at_3
value: 48.986000000000004
- type: recall_at_5
value: 55.177
- task:
type: Retrieval
dataset:
type: cqadupstack/gis
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.964
- type: map_at_10
value: 22.746
- type: map_at_100
value: 23.704
- type: map_at_1000
value: 23.82
- type: map_at_3
value: 20.5
- type: map_at_5
value: 21.836
- type: mrr_at_1
value: 17.740000000000002
- type: mrr_at_10
value: 24.634
- type: mrr_at_100
value: 25.535999999999998
- type: mrr_at_1000
value: 25.628
- type: mrr_at_3
value: 22.429
- type: mrr_at_5
value: 23.791
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 26.838
- type: ndcg_at_100
value: 31.985000000000003
- type: ndcg_at_1000
value: 35.289
- type: ndcg_at_3
value: 22.384
- type: ndcg_at_5
value: 24.726
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 4.35
- type: precision_at_100
value: 0.753
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 9.754999999999999
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 15.964
- type: recall_at_10
value: 37.705
- type: recall_at_100
value: 61.94499999999999
- type: recall_at_1000
value: 87.646
- type: recall_at_3
value: 25.714
- type: recall_at_5
value: 31.402
- task:
type: Retrieval
dataset:
type: cqadupstack/mathematica
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.221
- type: map_at_10
value: 14.735000000000001
- type: map_at_100
value: 15.778
- type: map_at_1000
value: 15.9
- type: map_at_3
value: 12.791
- type: map_at_5
value: 13.703999999999999
- type: mrr_at_1
value: 12.438
- type: mrr_at_10
value: 18.353
- type: mrr_at_100
value: 19.285
- type: mrr_at_1000
value: 19.375
- type: mrr_at_3
value: 16.439
- type: mrr_at_5
value: 17.352999999999998
- type: ndcg_at_1
value: 12.438
- type: ndcg_at_10
value: 18.703
- type: ndcg_at_100
value: 24.104999999999997
- type: ndcg_at_1000
value: 27.366
- type: ndcg_at_3
value: 15.055
- type: ndcg_at_5
value: 16.42
- type: precision_at_1
value: 12.438
- type: precision_at_10
value: 3.818
- type: precision_at_100
value: 0.77
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 7.753
- type: precision_at_5
value: 5.622
- type: recall_at_1
value: 9.221
- type: recall_at_10
value: 27.461999999999996
- type: recall_at_100
value: 51.909000000000006
- type: recall_at_1000
value: 75.56
- type: recall_at_3
value: 17.046
- type: recall_at_5
value: 20.766000000000002
- task:
type: Retrieval
dataset:
type: cqadupstack/physics
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.828
- type: map_at_10
value: 33.166000000000004
- type: map_at_100
value: 34.618
- type: map_at_1000
value: 34.744
- type: map_at_3
value: 29.737000000000002
- type: map_at_5
value: 31.541000000000004
- type: mrr_at_1
value: 29.548000000000002
- type: mrr_at_10
value: 38.582
- type: mrr_at_100
value: 39.527
- type: mrr_at_1000
value: 39.577
- type: mrr_at_3
value: 35.884
- type: mrr_at_5
value: 37.413999999999994
- type: ndcg_at_1
value: 29.548000000000002
- type: ndcg_at_10
value: 39.397
- type: ndcg_at_100
value: 45.584
- type: ndcg_at_1000
value: 47.823
- type: ndcg_at_3
value: 33.717000000000006
- type: ndcg_at_5
value: 36.223
- type: precision_at_1
value: 29.548000000000002
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 1.2959999999999998
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 16.747
- type: precision_at_5
value: 12.203999999999999
- type: recall_at_1
value: 22.828
- type: recall_at_10
value: 52.583999999999996
- type: recall_at_100
value: 79.06400000000001
- type: recall_at_1000
value: 93.59100000000001
- type: recall_at_3
value: 36.671
- type: recall_at_5
value: 43.22
- task:
type: Retrieval
dataset:
type: cqadupstack/programmers
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.366
- type: map_at_10
value: 30.214000000000002
- type: map_at_100
value: 31.647
- type: map_at_1000
value: 31.763
- type: map_at_3
value: 27.234
- type: map_at_5
value: 28.801
- type: mrr_at_1
value: 26.256
- type: mrr_at_10
value: 35.299
- type: mrr_at_100
value: 36.284
- type: mrr_at_1000
value: 36.342
- type: mrr_at_3
value: 32.572
- type: mrr_at_5
value: 34.050999999999995
- type: ndcg_at_1
value: 26.256
- type: ndcg_at_10
value: 35.899
- type: ndcg_at_100
value: 41.983
- type: ndcg_at_1000
value: 44.481
- type: ndcg_at_3
value: 30.665
- type: ndcg_at_5
value: 32.879999999999995
- type: precision_at_1
value: 26.256
- type: precision_at_10
value: 6.804
- type: precision_at_100
value: 1.187
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 14.84
- type: precision_at_5
value: 10.708
- type: recall_at_1
value: 21.366
- type: recall_at_10
value: 47.878
- type: recall_at_100
value: 73.245
- type: recall_at_1000
value: 90.623
- type: recall_at_3
value: 33.341
- type: recall_at_5
value: 39.198
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.477166666666665
- type: map_at_10
value: 27.431416666666664
- type: map_at_100
value: 28.656000000000002
- type: map_at_1000
value: 28.787583333333338
- type: map_at_3
value: 24.85175
- type: map_at_5
value: 26.270166666666668
- type: mrr_at_1
value: 24.06841666666667
- type: mrr_at_10
value: 31.620000000000005
- type: mrr_at_100
value: 32.52283333333333
- type: mrr_at_1000
value: 32.59441666666667
- type: mrr_at_3
value: 29.328666666666663
- type: mrr_at_5
value: 30.620416666666667
- type: ndcg_at_1
value: 24.06841666666667
- type: ndcg_at_10
value: 32.404583333333335
- type: ndcg_at_100
value: 37.779500000000006
- type: ndcg_at_1000
value: 40.511583333333334
- type: ndcg_at_3
value: 27.994166666666665
- type: ndcg_at_5
value: 30.021749999999997
- type: precision_at_1
value: 24.06841666666667
- type: precision_at_10
value: 6.03725
- type: precision_at_100
value: 1.0500833333333337
- type: precision_at_1000
value: 0.14875000000000002
- type: precision_at_3
value: 13.419583333333335
- type: precision_at_5
value: 9.700666666666665
- type: recall_at_1
value: 19.477166666666665
- type: recall_at_10
value: 42.99441666666667
- type: recall_at_100
value: 66.787
- type: recall_at_1000
value: 86.18825000000001
- type: recall_at_3
value: 30.46366666666667
- type: recall_at_5
value: 35.83141666666667
- task:
type: Retrieval
dataset:
type: cqadupstack/stats
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.246
- type: map_at_10
value: 22.127
- type: map_at_100
value: 23.006
- type: map_at_1000
value: 23.125
- type: map_at_3
value: 20.308999999999997
- type: map_at_5
value: 21.139
- type: mrr_at_1
value: 19.631999999999998
- type: mrr_at_10
value: 24.884999999999998
- type: mrr_at_100
value: 25.704
- type: mrr_at_1000
value: 25.793
- type: mrr_at_3
value: 23.083000000000002
- type: mrr_at_5
value: 23.942
- type: ndcg_at_1
value: 19.631999999999998
- type: ndcg_at_10
value: 25.862000000000002
- type: ndcg_at_100
value: 30.436000000000003
- type: ndcg_at_1000
value: 33.638
- type: ndcg_at_3
value: 22.431
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 19.631999999999998
- type: precision_at_10
value: 4.417
- type: precision_at_100
value: 0.7270000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.327
- type: precision_at_5
value: 7.147
- type: recall_at_1
value: 16.246
- type: recall_at_10
value: 34.869
- type: recall_at_100
value: 56.221
- type: recall_at_1000
value: 80.449
- type: recall_at_3
value: 24.83
- type: recall_at_5
value: 28.142
- task:
type: Retrieval
dataset:
type: cqadupstack/tex
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.798
- type: map_at_10
value: 14.695
- type: map_at_100
value: 15.590000000000002
- type: map_at_1000
value: 15.726999999999999
- type: map_at_3
value: 13.004999999999999
- type: map_at_5
value: 13.861
- type: mrr_at_1
value: 12.939
- type: mrr_at_10
value: 18.218
- type: mrr_at_100
value: 18.998
- type: mrr_at_1000
value: 19.093
- type: mrr_at_3
value: 16.454
- type: mrr_at_5
value: 17.354
- type: ndcg_at_1
value: 12.939
- type: ndcg_at_10
value: 18.278
- type: ndcg_at_100
value: 22.709
- type: ndcg_at_1000
value: 26.064
- type: ndcg_at_3
value: 15.204
- type: ndcg_at_5
value: 16.416
- type: precision_at_1
value: 12.939
- type: precision_at_10
value: 3.768
- type: precision_at_100
value: 0.724
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 7.707999999999999
- type: precision_at_5
value: 5.733
- type: recall_at_1
value: 9.798
- type: recall_at_10
value: 25.562
- type: recall_at_100
value: 45.678999999999995
- type: recall_at_1000
value: 69.963
- type: recall_at_3
value: 16.705000000000002
- type: recall_at_5
value: 19.969
- task:
type: Retrieval
dataset:
type: cqadupstack/unix
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.1
- type: map_at_10
value: 27.034999999999997
- type: map_at_100
value: 28.396
- type: map_at_1000
value: 28.518
- type: map_at_3
value: 24.363
- type: map_at_5
value: 25.826999999999998
- type: mrr_at_1
value: 23.694000000000003
- type: mrr_at_10
value: 31.724999999999998
- type: mrr_at_100
value: 32.743
- type: mrr_at_1000
value: 32.82
- type: mrr_at_3
value: 29.275000000000002
- type: mrr_at_5
value: 30.684
- type: ndcg_at_1
value: 23.694000000000003
- type: ndcg_at_10
value: 32.366
- type: ndcg_at_100
value: 38.241
- type: ndcg_at_1000
value: 40.973
- type: ndcg_at_3
value: 27.661
- type: ndcg_at_5
value: 29.782999999999998
- type: precision_at_1
value: 23.694000000000003
- type: precision_at_10
value: 5.951
- type: precision_at_100
value: 1.0070000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 13.34
- type: precision_at_5
value: 9.533999999999999
- type: recall_at_1
value: 19.1
- type: recall_at_10
value: 44.032
- type: recall_at_100
value: 69.186
- type: recall_at_1000
value: 88.562
- type: recall_at_3
value: 30.712
- type: recall_at_5
value: 36.372
- task:
type: Retrieval
dataset:
type: cqadupstack/webmasters
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.671
- type: map_at_10
value: 28.583
- type: map_at_100
value: 30.098999999999997
- type: map_at_1000
value: 30.364
- type: map_at_3
value: 25.825
- type: map_at_5
value: 27.500999999999998
- type: mrr_at_1
value: 25.889
- type: mrr_at_10
value: 33.617999999999995
- type: mrr_at_100
value: 34.687
- type: mrr_at_1000
value: 34.774
- type: mrr_at_3
value: 31.191999999999997
- type: mrr_at_5
value: 32.675
- type: ndcg_at_1
value: 25.889
- type: ndcg_at_10
value: 34.056999999999995
- type: ndcg_at_100
value: 40.142
- type: ndcg_at_1000
value: 43.614000000000004
- type: ndcg_at_3
value: 29.688
- type: ndcg_at_5
value: 32.057
- type: precision_at_1
value: 25.889
- type: precision_at_10
value: 6.7
- type: precision_at_100
value: 1.417
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.711
- type: recall_at_1
value: 20.671
- type: recall_at_10
value: 43.97
- type: recall_at_100
value: 71.83699999999999
- type: recall_at_1000
value: 94.42399999999999
- type: recall_at_3
value: 31.0
- type: recall_at_5
value: 37.489
- task:
type: Retrieval
dataset:
type: cqadupstack/wordpress
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.66
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 19.75
- type: map_at_1000
value: 19.851
- type: map_at_3
value: 16.874
- type: map_at_5
value: 18.136
- type: mrr_at_1
value: 14.972
- type: mrr_at_10
value: 20.565
- type: mrr_at_100
value: 21.488
- type: mrr_at_1000
value: 21.567
- type: mrr_at_3
value: 18.669
- type: mrr_at_5
value: 19.861
- type: ndcg_at_1
value: 14.972
- type: ndcg_at_10
value: 22.128999999999998
- type: ndcg_at_100
value: 27.028000000000002
- type: ndcg_at_1000
value: 29.887000000000004
- type: ndcg_at_3
value: 18.365000000000002
- type: ndcg_at_5
value: 20.48
- type: precision_at_1
value: 14.972
- type: precision_at_10
value: 3.549
- type: precision_at_100
value: 0.632
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 7.887
- type: precision_at_5
value: 5.840999999999999
- type: recall_at_1
value: 13.66
- type: recall_at_10
value: 30.801000000000002
- type: recall_at_100
value: 53.626
- type: recall_at_1000
value: 75.634
- type: recall_at_3
value: 20.807000000000002
- type: recall_at_5
value: 25.86
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.622
- type: map_at_10
value: 16.042
- type: map_at_100
value: 18.023
- type: map_at_1000
value: 18.228
- type: map_at_3
value: 12.995999999999999
- type: map_at_5
value: 14.424000000000001
- type: mrr_at_1
value: 18.892999999999997
- type: mrr_at_10
value: 30.575000000000003
- type: mrr_at_100
value: 31.814999999999998
- type: mrr_at_1000
value: 31.856
- type: mrr_at_3
value: 26.851000000000003
- type: mrr_at_5
value: 29.021
- type: ndcg_at_1
value: 18.892999999999997
- type: ndcg_at_10
value: 23.575
- type: ndcg_at_100
value: 31.713
- type: ndcg_at_1000
value: 35.465
- type: ndcg_at_3
value: 18.167
- type: ndcg_at_5
value: 20.071
- type: precision_at_1
value: 18.892999999999997
- type: precision_at_10
value: 7.883
- type: precision_at_100
value: 1.652
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 13.898
- type: precision_at_5
value: 11.14
- type: recall_at_1
value: 8.622
- type: recall_at_10
value: 30.044999999999998
- type: recall_at_100
value: 58.072
- type: recall_at_1000
value: 79.226
- type: recall_at_3
value: 17.21
- type: recall_at_5
value: 22.249
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.845
- type: map_at_10
value: 12.352
- type: map_at_100
value: 17.423
- type: map_at_1000
value: 18.529
- type: map_at_3
value: 8.505
- type: map_at_5
value: 10.213
- type: mrr_at_1
value: 41.75
- type: mrr_at_10
value: 54.6
- type: mrr_at_100
value: 55.345
- type: mrr_at_1000
value: 55.374
- type: mrr_at_3
value: 52.37500000000001
- type: mrr_at_5
value: 53.87499999999999
- type: ndcg_at_1
value: 31.25
- type: ndcg_at_10
value: 26.779999999999998
- type: ndcg_at_100
value: 31.929000000000002
- type: ndcg_at_1000
value: 39.290000000000006
- type: ndcg_at_3
value: 28.746
- type: ndcg_at_5
value: 27.334999999999997
- type: precision_at_1
value: 41.75
- type: precision_at_10
value: 22.55
- type: precision_at_100
value: 7.242
- type: precision_at_1000
value: 1.439
- type: precision_at_3
value: 33.833
- type: precision_at_5
value: 28.65
- type: recall_at_1
value: 4.845
- type: recall_at_10
value: 18.664
- type: recall_at_100
value: 41.085
- type: recall_at_1000
value: 65.242
- type: recall_at_3
value: 10.572
- type: recall_at_5
value: 13.961000000000002
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.08
- type: f1
value: 42.843345856303756
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.743
- type: map_at_10
value: 46.521
- type: map_at_100
value: 47.235
- type: map_at_1000
value: 47.272
- type: map_at_3
value: 43.252
- type: map_at_5
value: 45.267
- type: mrr_at_1
value: 36.484
- type: mrr_at_10
value: 49.406
- type: mrr_at_100
value: 50.03300000000001
- type: mrr_at_1000
value: 50.058
- type: mrr_at_3
value: 46.195
- type: mrr_at_5
value: 48.193999999999996
- type: ndcg_at_1
value: 36.484
- type: ndcg_at_10
value: 53.42
- type: ndcg_at_100
value: 56.69499999999999
- type: ndcg_at_1000
value: 57.623999999999995
- type: ndcg_at_3
value: 47.010999999999996
- type: ndcg_at_5
value: 50.524
- type: precision_at_1
value: 36.484
- type: precision_at_10
value: 7.925
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 19.967
- type: precision_at_5
value: 13.87
- type: recall_at_1
value: 33.743
- type: recall_at_10
value: 71.988
- type: recall_at_100
value: 86.60799999999999
- type: recall_at_1000
value: 93.54
- type: recall_at_3
value: 54.855
- type: recall_at_5
value: 63.341
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.003
- type: map_at_10
value: 21.766
- type: map_at_100
value: 23.618
- type: map_at_1000
value: 23.832
- type: map_at_3
value: 18.282999999999998
- type: map_at_5
value: 20.267
- type: mrr_at_1
value: 26.851999999999997
- type: mrr_at_10
value: 34.658
- type: mrr_at_100
value: 35.729
- type: mrr_at_1000
value: 35.785
- type: mrr_at_3
value: 31.686999999999998
- type: mrr_at_5
value: 33.315
- type: ndcg_at_1
value: 26.851999999999997
- type: ndcg_at_10
value: 28.563
- type: ndcg_at_100
value: 36.374
- type: ndcg_at_1000
value: 40.306999999999995
- type: ndcg_at_3
value: 24.224
- type: ndcg_at_5
value: 25.939
- type: precision_at_1
value: 26.851999999999997
- type: precision_at_10
value: 8.193999999999999
- type: precision_at_100
value: 1.616
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 16.255
- type: precision_at_5
value: 12.469
- type: recall_at_1
value: 13.003
- type: recall_at_10
value: 35.689
- type: recall_at_100
value: 65.762
- type: recall_at_1000
value: 89.546
- type: recall_at_3
value: 21.820999999999998
- type: recall_at_5
value: 28.097
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.541
- type: map_at_10
value: 43.088
- type: map_at_100
value: 44.252
- type: map_at_1000
value: 44.345
- type: map_at_3
value: 39.79
- type: map_at_5
value: 41.687000000000005
- type: mrr_at_1
value: 59.082
- type: mrr_at_10
value: 67.27300000000001
- type: mrr_at_100
value: 67.708
- type: mrr_at_1000
value: 67.731
- type: mrr_at_3
value: 65.526
- type: mrr_at_5
value: 66.589
- type: ndcg_at_1
value: 59.082
- type: ndcg_at_10
value: 52.372
- type: ndcg_at_100
value: 56.725
- type: ndcg_at_1000
value: 58.665
- type: ndcg_at_3
value: 47.129
- type: ndcg_at_5
value: 49.808
- type: precision_at_1
value: 59.082
- type: precision_at_10
value: 11.275
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 29.773
- type: precision_at_5
value: 19.980999999999998
- type: recall_at_1
value: 29.541
- type: recall_at_10
value: 56.374
- type: recall_at_100
value: 73.42999999999999
- type: recall_at_1000
value: 86.28
- type: recall_at_3
value: 44.659
- type: recall_at_5
value: 49.952999999999996
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.1904
- type: ap
value: 69.80555086826531
- type: f1
value: 74.93725389065787
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 7.085
- type: map_at_10
value: 13.344000000000001
- type: map_at_100
value: 14.501
- type: map_at_1000
value: 14.605
- type: map_at_3
value: 10.758
- type: map_at_5
value: 12.162
- type: mrr_at_1
value: 7.278
- type: mrr_at_10
value: 13.607
- type: mrr_at_100
value: 14.761
- type: mrr_at_1000
value: 14.860000000000001
- type: mrr_at_3
value: 11.003
- type: mrr_at_5
value: 12.421
- type: ndcg_at_1
value: 7.278
- type: ndcg_at_10
value: 17.473
- type: ndcg_at_100
value: 23.721
- type: ndcg_at_1000
value: 26.69
- type: ndcg_at_3
value: 12.078
- type: ndcg_at_5
value: 14.62
- type: precision_at_1
value: 7.278
- type: precision_at_10
value: 3.175
- type: precision_at_100
value: 0.639
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 5.382
- type: precision_at_5
value: 4.519
- type: recall_at_1
value: 7.085
- type: recall_at_10
value: 30.549
- type: recall_at_100
value: 60.919999999999995
- type: recall_at_1000
value: 84.372
- type: recall_at_3
value: 15.675
- type: recall_at_5
value: 21.818
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.46876424988601
- type: f1
value: 94.23159241922738
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 81.0875512995896
- type: f1
value: 61.674961674414
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.01344989912575
- type: f1
value: 71.7942527839921
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.15601882985877
- type: f1
value: 78.82502954601195
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.468806971345227
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.874332804382256
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.099340785595842
- type: mrr
value: 31.077367694660257
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.9050000000000002
- type: map_at_10
value: 8.931000000000001
- type: map_at_100
value: 11.246
- type: map_at_1000
value: 12.579
- type: map_at_3
value: 6.544
- type: map_at_5
value: 7.854
- type: mrr_at_1
value: 33.745999999999995
- type: mrr_at_10
value: 44.734
- type: mrr_at_100
value: 45.486
- type: mrr_at_1000
value: 45.534
- type: mrr_at_3
value: 42.157
- type: mrr_at_5
value: 43.813
- type: ndcg_at_1
value: 31.734
- type: ndcg_at_10
value: 26.284999999999997
- type: ndcg_at_100
value: 25.211
- type: ndcg_at_1000
value: 34.974
- type: ndcg_at_3
value: 29.918
- type: ndcg_at_5
value: 29.066
- type: precision_at_1
value: 33.745999999999995
- type: precision_at_10
value: 19.628
- type: precision_at_100
value: 6.476999999999999
- type: precision_at_1000
value: 1.976
- type: precision_at_3
value: 28.793000000000003
- type: precision_at_5
value: 25.759
- type: recall_at_1
value: 3.9050000000000002
- type: recall_at_10
value: 13.375
- type: recall_at_100
value: 28.453
- type: recall_at_1000
value: 61.67399999999999
- type: recall_at_3
value: 7.774
- type: recall_at_5
value: 10.754
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.33
- type: map_at_10
value: 30.44
- type: map_at_100
value: 31.848
- type: map_at_1000
value: 31.906000000000002
- type: map_at_3
value: 26.143
- type: map_at_5
value: 28.583
- type: mrr_at_1
value: 21.031
- type: mrr_at_10
value: 33.028
- type: mrr_at_100
value: 34.166000000000004
- type: mrr_at_1000
value: 34.208
- type: mrr_at_3
value: 29.089
- type: mrr_at_5
value: 31.362000000000002
- type: ndcg_at_1
value: 21.031
- type: ndcg_at_10
value: 37.65
- type: ndcg_at_100
value: 43.945
- type: ndcg_at_1000
value: 45.338
- type: ndcg_at_3
value: 29.256999999999998
- type: ndcg_at_5
value: 33.453
- type: precision_at_1
value: 21.031
- type: precision_at_10
value: 6.8309999999999995
- type: precision_at_100
value: 1.035
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.818
- type: precision_at_5
value: 10.649000000000001
- type: recall_at_1
value: 18.33
- type: recall_at_10
value: 57.330999999999996
- type: recall_at_100
value: 85.284
- type: recall_at_1000
value: 95.676
- type: recall_at_3
value: 35.356
- type: recall_at_5
value: 45.073
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.373
- type: map_at_10
value: 80.233
- type: map_at_100
value: 80.973
- type: map_at_1000
value: 80.99499999999999
- type: map_at_3
value: 77.127
- type: map_at_5
value: 79.056
- type: mrr_at_1
value: 76.55
- type: mrr_at_10
value: 83.813
- type: mrr_at_100
value: 83.96900000000001
- type: mrr_at_1000
value: 83.97200000000001
- type: mrr_at_3
value: 82.547
- type: mrr_at_5
value: 83.38600000000001
- type: ndcg_at_1
value: 76.53999999999999
- type: ndcg_at_10
value: 84.638
- type: ndcg_at_100
value: 86.28099999999999
- type: ndcg_at_1000
value: 86.459
- type: ndcg_at_3
value: 81.19
- type: ndcg_at_5
value: 83.057
- type: precision_at_1
value: 76.53999999999999
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.503
- type: precision_at_5
value: 23.512
- type: recall_at_1
value: 66.373
- type: recall_at_10
value: 93.273
- type: recall_at_100
value: 99.031
- type: recall_at_1000
value: 99.91799999999999
- type: recall_at_3
value: 83.55799999999999
- type: recall_at_5
value: 88.644
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 43.67174666339103
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.66838659211271
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.318
- type: map_at_10
value: 5.938000000000001
- type: map_at_100
value: 7.582
- type: map_at_1000
value: 7.936
- type: map_at_3
value: 4.208
- type: map_at_5
value: 5.098
- type: mrr_at_1
value: 11.4
- type: mrr_at_10
value: 17.655
- type: mrr_at_100
value: 19.088
- type: mrr_at_1000
value: 19.203
- type: mrr_at_3
value: 15.25
- type: mrr_at_5
value: 16.535
- type: ndcg_at_1
value: 11.4
- type: ndcg_at_10
value: 10.388
- type: ndcg_at_100
value: 18.165
- type: ndcg_at_1000
value: 24.842
- type: ndcg_at_3
value: 9.414
- type: ndcg_at_5
value: 8.453
- type: precision_at_1
value: 11.4
- type: precision_at_10
value: 5.54
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 8.866999999999999
- type: precision_at_5
value: 7.580000000000001
- type: recall_at_1
value: 2.318
- type: recall_at_10
value: 11.267000000000001
- type: recall_at_100
value: 34.743
- type: recall_at_1000
value: 67.07300000000001
- type: recall_at_3
value: 5.408
- type: recall_at_5
value: 7.713
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 72.15850185456762
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 61.59518395985063
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 79.71131323749228
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 72.10974664733891
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 82.17899407125657
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 79.41138579273438
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 85.44343473477939
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 63.90264271389905
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 77.44151296326804
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 76.27597486396654
- type: mrr
value: 93.28127119793788
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.594
- type: map_at_10
value: 60.951
- type: map_at_100
value: 61.68599999999999
- type: map_at_1000
value: 61.712
- type: map_at_3
value: 57.946
- type: map_at_5
value: 59.89
- type: mrr_at_1
value: 52.666999999999994
- type: mrr_at_10
value: 62.724000000000004
- type: mrr_at_100
value: 63.269
- type: mrr_at_1000
value: 63.291
- type: mrr_at_3
value: 60.167
- type: mrr_at_5
value: 61.95
- type: ndcg_at_1
value: 52.666999999999994
- type: ndcg_at_10
value: 66.35600000000001
- type: ndcg_at_100
value: 69.463
- type: ndcg_at_1000
value: 70.111
- type: ndcg_at_3
value: 60.901
- type: ndcg_at_5
value: 64.054
- type: precision_at_1
value: 52.666999999999994
- type: precision_at_10
value: 9.0
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 49.594
- type: recall_at_10
value: 81.256
- type: recall_at_100
value: 94.989
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 66.706
- type: recall_at_5
value: 74.411
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.65049504950495
- type: cos_sim_ap
value: 88.1421623503371
- type: cos_sim_f1
value: 81.44072036018008
- type: cos_sim_precision
value: 81.48148148148148
- type: cos_sim_recall
value: 81.39999999999999
- type: dot_accuracy
value: 99.37623762376238
- type: dot_ap
value: 69.87152032240303
- type: dot_f1
value: 65.64885496183206
- type: dot_precision
value: 72.18225419664267
- type: dot_recall
value: 60.199999999999996
- type: euclidean_accuracy
value: 99.63069306930693
- type: euclidean_ap
value: 86.13858297902517
- type: euclidean_f1
value: 79.87679671457904
- type: euclidean_precision
value: 82.0675105485232
- type: euclidean_recall
value: 77.8
- type: manhattan_accuracy
value: 99.63168316831683
- type: manhattan_ap
value: 86.31976532265482
- type: manhattan_f1
value: 80.10204081632654
- type: manhattan_precision
value: 81.77083333333334
- type: manhattan_recall
value: 78.5
- type: max_accuracy
value: 99.65049504950495
- type: max_ap
value: 88.1421623503371
- type: max_f1
value: 81.44072036018008
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.19604139959692
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.3569584557381
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.82174503355024
- type: mrr
value: 49.610933388506915
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.805895993742798
- type: cos_sim_spearman
value: 31.445431226826738
- type: dot_pearson
value: 24.441585432516867
- type: dot_spearman
value: 25.468117334810188
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.2
- type: map_at_10
value: 1.431
- type: map_at_100
value: 7.138999999999999
- type: map_at_1000
value: 17.933
- type: map_at_3
value: 0.551
- type: map_at_5
value: 0.7979999999999999
- type: mrr_at_1
value: 76.0
- type: mrr_at_10
value: 85.167
- type: mrr_at_100
value: 85.21300000000001
- type: mrr_at_1000
value: 85.21300000000001
- type: mrr_at_3
value: 84.667
- type: mrr_at_5
value: 85.167
- type: ndcg_at_1
value: 72.0
- type: ndcg_at_10
value: 63.343
- type: ndcg_at_100
value: 45.739999999999995
- type: ndcg_at_1000
value: 41.875
- type: ndcg_at_3
value: 68.162
- type: ndcg_at_5
value: 65.666
- type: precision_at_1
value: 76.0
- type: precision_at_10
value: 66.4
- type: precision_at_100
value: 46.800000000000004
- type: precision_at_1000
value: 18.996
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 68.4
- type: recall_at_1
value: 0.2
- type: recall_at_10
value: 1.712
- type: recall_at_100
value: 10.896
- type: recall_at_1000
value: 40.115
- type: recall_at_3
value: 0.594
- type: recall_at_5
value: 0.889
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.0619999999999998
- type: map_at_10
value: 5.611
- type: map_at_100
value: 8.841000000000001
- type: map_at_1000
value: 10.154
- type: map_at_3
value: 2.7720000000000002
- type: map_at_5
value: 4.181
- type: mrr_at_1
value: 14.285999999999998
- type: mrr_at_10
value: 26.249
- type: mrr_at_100
value: 28.046
- type: mrr_at_1000
value: 28.083000000000002
- type: mrr_at_3
value: 21.769
- type: mrr_at_5
value: 24.524
- type: ndcg_at_1
value: 11.224
- type: ndcg_at_10
value: 12.817
- type: ndcg_at_100
value: 23.183999999999997
- type: ndcg_at_1000
value: 35.099000000000004
- type: ndcg_at_3
value: 11.215
- type: ndcg_at_5
value: 12.016
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 12.653
- type: precision_at_100
value: 5.306
- type: precision_at_1000
value: 1.294
- type: precision_at_3
value: 13.605
- type: precision_at_5
value: 13.877999999999998
- type: recall_at_1
value: 1.0619999999999998
- type: recall_at_10
value: 10.377
- type: recall_at_100
value: 34.77
- type: recall_at_1000
value: 70.875
- type: recall_at_3
value: 3.688
- type: recall_at_5
value: 6.2509999999999994
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.8488
- type: ap
value: 15.590122317097372
- type: f1
value: 55.86108396102662
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.61460101867573
- type: f1
value: 57.8678726826158
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 32.01459876897588
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.1032365738809
- type: cos_sim_ap
value: 66.60137415520323
- type: cos_sim_f1
value: 62.12845010615712
- type: cos_sim_precision
value: 62.493326214628944
- type: cos_sim_recall
value: 61.76781002638523
- type: dot_accuracy
value: 81.85015199380103
- type: dot_ap
value: 58.854644211365084
- type: dot_f1
value: 56.15180082185158
- type: dot_precision
value: 51.806422836752894
- type: dot_recall
value: 61.2928759894459
- type: euclidean_accuracy
value: 83.6681170650295
- type: euclidean_ap
value: 64.93555585305603
- type: euclidean_f1
value: 61.02775195857125
- type: euclidean_precision
value: 61.42742582197273
- type: euclidean_recall
value: 60.633245382585756
- type: manhattan_accuracy
value: 83.73368301841808
- type: manhattan_ap
value: 65.45422483039611
- type: manhattan_f1
value: 61.58552806597499
- type: manhattan_precision
value: 62.09763948497854
- type: manhattan_recall
value: 61.08179419525066
- type: max_accuracy
value: 84.1032365738809
- type: max_ap
value: 66.60137415520323
- type: max_f1
value: 62.12845010615712
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.36628245430201
- type: cos_sim_ap
value: 79.29963896460292
- type: cos_sim_f1
value: 72.63895990066467
- type: cos_sim_precision
value: 69.09128803668196
- type: cos_sim_recall
value: 76.57068062827224
- type: dot_accuracy
value: 84.65091007878294
- type: dot_ap
value: 75.04883449222972
- type: dot_f1
value: 69.18569117382708
- type: dot_precision
value: 64.89512376070682
- type: dot_recall
value: 74.08376963350786
- type: euclidean_accuracy
value: 85.88116583226608
- type: euclidean_ap
value: 78.42687640324908
- type: euclidean_f1
value: 71.74350111107192
- type: euclidean_precision
value: 66.19800820152314
- type: euclidean_recall
value: 78.3030489682784
- type: manhattan_accuracy
value: 86.27508052935926
- type: manhattan_ap
value: 79.29581298930101
- type: manhattan_f1
value: 72.51838235294117
- type: manhattan_precision
value: 67.03921568627452
- type: manhattan_recall
value: 78.97289805974745
- type: max_accuracy
value: 86.36628245430201
- type: max_ap
value: 79.29963896460292
- type: max_f1
value: 72.63895990066467
---
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading the base Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload()  # This can take several minutes on CPU
# Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6522, 0.1891],
[0.1162, 0.3457]])
"""
```
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). |
google/paligemma-3b-ft-cococap-448 | google | "2024-06-27T14:10:20Z" | 1,141 | 2 | transformers | [
"transformers",
"safetensors",
"paligemma",
"pretraining",
"image-text-to-text",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2205.12522",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"license:gemma",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | image-text-to-text | "2024-05-13T01:47:35Z" | ---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 448x448 input images on the <a href="https://cocodataset.org/#home">COCO_captions</a> dataset. The models are available in float32, bfloat16 and float16 formats for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/cococap.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-cococap-448)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video captioning, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision language model not meant for conversational use,
and it works best when fine-tuned for a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please, refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
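For illustration, here are a few prompts written in that task-prefix style. This is a minimal sketch: the exact prefix set a given checkpoint understands depends on how it was trained, so treat these strings as indicative rather than authoritative.
```python
# Hypothetical task-prefix prompts in the style used by the "mix" checkpoints.
# The object names and question are placeholders; check the blog post above for
# the prefixes your specific checkpoint supports.
prompts = [
    "caption en",                        # short caption in English
    "detect car",                        # object detection (bounding box coordinates)
    "segment car",                       # object segmentation (segmentation codewords)
    "answer en what color is the car?",  # visual question answering
]
```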
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in the repo you are now browsing may have been trained for other tasks; please
make sure you use appropriate inputs for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an NVIDIA CUDA card.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
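For `float16`, the same pattern applies. A minimal sketch, assuming the repo exposes a revision named `float16` analogous to the `bfloat16` one used above:
```python
# Assumes a "float16" revision exists alongside the "bfloat16" one shown above.
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map=device,
    revision="float16",
).eval()
```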
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```bash
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration, BitsAndBytesConfig
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
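The snippet above loads the model in 8-bit; 4-bit loading follows the same pattern with a 4-bit quantization config. A minimal sketch, assuming the standard `bitsandbytes` options:
```python
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the de-quantized matmuls
)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=quantization_config
).eval()
```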
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally, we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as image captioning, short video captioning, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, and describing UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
  * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, with input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
those tasks.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
|
tokyotech-llm/Swallow-13b-hf | tokyotech-llm | "2024-06-29T08:56:21Z" | 1,140 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ja",
"arxiv:2404.17790",
"arxiv:2404.17733",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-16T15:40:49Z" | ---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Swallow
Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
Links to other models can be found in the index.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)|
## Swallow Model Index NVE (No Vocabulary Expansion)
|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)|

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790).
## Model Details
* **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process. A quick token-count comparison is sketched just below this list.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
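As a quick sanity check of that claim, you can compare token counts on a short Japanese string. A minimal sketch (counts will vary with the text, and the base Llama 2 tokenizer used for comparison is a gated repository):
```python
from transformers import AutoTokenizer

swallow_tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-hf")
llama2_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # for comparison; gated repo

text = "東京工業大学の主なキャンパスは、"  # sample prompt reused from the usage section below
print("Swallow tokens:", len(swallow_tok.encode(text)))  # expected: fewer tokens
print("Llama 2 tokens:", len(llama2_tok.encode(text)))
```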
## Base Model Performance
### Japanese tasks
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- Open-ended question answering (TriviaQA [Joshi+, 2017])
- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers+, 2019])
- Mathematical reasoning (GSM8k [Cobbe+, 2021])
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
PROMPT_DICT = {
"prompt_input": (
"以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
),
"prompt_no_input": (
"以下に、あるタスクを説明する指示があります。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 応答:"
),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
## How to cite
```
@misc{fujii2024continual,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
year={2024},
eprint={2404.17790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Yntec/DucHaiten-AnyUnreal | Yntec | "2024-01-16T07:04:57Z" | 1,140 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Art",
"AllInOne",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-16T06:34:33Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Art
- AllInOne
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# DucHaiten-AnyUnreal
Original page: https://civitai.com/models/116736/duchaiten-anyunreal
Samples and prompts:

(Click for larger)
Top left: 1girl detailed face sad coat dress blade runner rain 1984 dvd film, (best quality), ( masterpiece,realistic, photorealistic),
Top right: Gacha life, movie, chibi, Kawaii, anime, illustration, digital illustration, character, little girl outfits, neon, colourful, warm, vibrant
Bottom left: Analog, vhs, 8mm film, chromatic aberration, 1980s, A realistic film still of Sakura Kinomoto from live action film of Cardcaptor Sakura, youthful and sweet appearance, Sakura's hair is chestnut brown and falls in soft, wavy locks that reach slightly below her shoulders, Sakura's hat is a pink beret hat and has a big ribbon, Her pink dressand consists of multiple layers, big red bow at the front of the dress, The top layer features ruffled short sleeves and a ruffled collar, The dress is adorned with various bows and ribbons
Bottom right: Cute and adorable cartoon fox baby rhea, fine and shiny fur, lovely, white and pink, fantasy, dreamlike, surrealism, super cute, trending on artstation, mother of pearl iridescent, holographic, super high quality, 8k
|
kuleshov-group/caduceus-ph_seqlen-131k_d_model-256_n_layer-16 | kuleshov-group | "2024-06-11T02:24:29Z" | 1,140 | 4 | transformers | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:2403.03234",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2024-02-26T16:50:45Z" | ---
library_name: transformers
license: apache-2.0
---
## Using Caduceus
To use the pre-trained model for masked language modeling, use the following snippet:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
# See the `Caduceus` collection page on the hub for a list of available models.
model_name = "kuleshov-group/caduceus-ph_seqlen-131k_d_model-256_n_layer-16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
Alternatively, you can instantiate a model from scratch to train on your own data as follows:
```python
from transformers import AutoConfig, AutoModelForMaskedLM
# Add any config overrides here, see the `config.json` file on the hub for details.
config_overrides = {}
# See the `Caduceus` collection page on the hub for a list of available models.
config = AutoConfig.from_pretrained(
"kuleshov-group/caduceus-ph_seqlen-131k_d_model-256_n_layer-16",
**config_overrides,
)
model = AutoModelForMaskedLM.from_config(config)
```
## Model Details
This is the Caduceus-Ph model with hidden dimension 256 and 16 MambaDNA layers.
This model is not inherently reverse complement (RC) equivariant.
Rather, it was pre-trained using RC data augmentation.
Its intended usage is as follows: for downstream tasks, the model should be trained with RC data augmentation.
At downstream task inference, the model should be run twice: once on a sequence and once on its RC.
The output of these two applications should be combined (averaged) to form the downstream task prediction.
This model was pre-trained on the human reference genome with sequence length 131,072 for 50k steps (each step contained ~1M base pairs / tokens).
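A minimal sketch of that two-pass inference, assuming `downstream_model` is a hypothetical sequence-level classifier fine-tuned on top of Caduceus-Ph (with RC augmentation) and `tokenizer` is the tokenizer loaded above:
```python
import torch

def reverse_complement(seq: str) -> str:
    # Hand-written RC for illustration; assumes an A/C/G/T alphabet.
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(complement[base] for base in reversed(seq))

seq = "ACGTTGCA"
with torch.no_grad():
    logits_fwd = downstream_model(**tokenizer(seq, return_tensors="pt")).logits
    logits_rc = downstream_model(**tokenizer(reverse_complement(seq), return_tensors="pt")).logits

# Average the two applications to form the downstream prediction, as described above.
prediction = (logits_fwd + logits_rc) / 2
```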
For more details, please see our paper: [Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling](https://arxiv.org/abs/2403.03234).
## Citation
Please cite our work using the bibtex below:
**BibTeX:**
```
@article{schiff2024caduceus,
title={Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling},
author={Schiff, Yair and Kao, Chia-Hsiang and Gokaslan, Aaron and Dao, Tri and Gu, Albert and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2403.03234},
year={2024}
}
```
## Model Card Contact
Yair Schiff ([email protected]) |
ai-human-lab/EEVE-Korean-10.8B-DPO-v1.0 | ai-human-lab | "2024-06-27T12:57:40Z" | 1,140 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-18T11:49:28Z" | ---
license: apache-2.0
---
# About the Model
This model is a fine-tuned version of yanolja/EEVE-Korean-10.8B-v1.0, which is a Korean vocabulary-extended version of upstage/SOLAR-10.7B-v1.0. Specifically, we utilized Direct Preference Optimization (DPO) through the use of Axolotl. |
ajibawa-2023/Code-Llama-3-8B | ajibawa-2023 | "2024-05-08T02:53:27Z" | 1,140 | 23 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"Python",
"Cpp",
"PHP",
"JS",
"Java",
"Rust",
"Ruby",
"SQL",
"MySql",
"R",
"Julia",
"conversational",
"en",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"dataset:microsoft/orca-math-word-problems-200k",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-07T05:06:40Z" | ---
license: llama3
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
language:
- en
tags:
- code
- Python
- Cpp
- PHP
- JS
- Java
- Rust
- Ruby
- SQL
- MySql
- R
- Julia
---
**Code-Llama-3-8B**
This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT).
Besides this, it is trained on the following datasets:
[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)
[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)
[CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction)
The idea was to check how this model would perform with both code & maths datasets. This model is very good with coding.
Maths outputs are also very good. You can test out this model.
It is very good at code generation in various languages such as **Python, Java, JavaScript, GO, C++, Rust, Ruby, Sql, MySql, R, Julia, Haskell**, etc.
This model will also generate a detailed explanation/logic behind each piece of code.
This model is trained on massive datasets, so the results are very good. You can check the examples given below.
I have used the ChatML prompt format.
This is a fully fine-tuned model.
**GGUF & Exllama**
GGUF: [Link](https://huggingface.co/bartowski/Code-Llama-3-8B-GGUF)
Exllama v2: [Link](https://huggingface.co/bartowski/Code-Llama-3-8B-exl2)
Special Thanks to [Bartowski](https://huggingface.co/bartowski) for quantizing this model.
**Training:**
The entire dataset was trained on 4 x A100 80GB GPUs. For 2 epochs, training took more than 160 hours. The Axolotl & DeepSpeed codebases were used for training.
All training was performed on top of Meta's Llama-3-8B.
**Example Prompt:**
This model uses **ChatML** prompt format.
```
<|im_start|>system
You are a helpful Coding assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
You can modify above Prompt as per your requirement.
One example will be:
```
This is a conversation with your helpful Coding assistant. Assistant can generate Code in various Programming Languages along with necessary explanation.
```
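As a rough sketch, the prompt can also be built with `transformers`' chat templating. This assumes the tokenizer ships with the ChatML template shown above (otherwise the prompt can be constructed manually), and the generation settings are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful Coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
# apply_chat_template renders the ChatML format shown above.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```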
I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development.
Thank you for your love & support.
**Example Output**
Example 1

Example 2

Example 3

Example 4

Example 5
 |
mradermacher/Qwen2-72B-Instruct-i1-GGUF | mradermacher | "2024-06-07T17:42:10Z" | 1,140 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:Qwen/Qwen2-72B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-07T06:05:10Z" | ---
base_model: Qwen/Qwen2-72B-Instruct
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
license_name: tongyi-qianwen
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2-72B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2-72B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
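For the quants below that are split into multiple parts (i1-Q5_K_S and larger), concatenating the parts into a single file is typically all that is needed. A minimal sketch, assuming both parts of the Q6_K quant have already been downloaded into the current directory:
```shell
cat Qwen2-72B-Instruct.i1-Q6_K.gguf.part1of2 \
    Qwen2-72B-Instruct.i1-Q6_K.gguf.part2of2 \
    > Qwen2-72B-Instruct.i1-Q6_K.gguf
```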
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-Instruct-i1-GGUF/resolve/main/Qwen2-72B-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
AliGhiasvand86/epoch_15_load_last_model_23JUNE_v2 | AliGhiasvand86 | "2024-06-23T21:39:20Z" | 1,140 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-23T21:39:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
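No usage instructions were provided by the author. As a generic sketch based only on the repository's tags (a LongT5 text2text-generation checkpoint; the task prefix and generation settings below are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "AliGhiasvand86/epoch_15_load_last_model_23JUNE_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; the intended task of this checkpoint is not documented.
inputs = tokenizer("summarize: <long input text>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```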
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jbilcke-hf/sdxl-foundation-2 | jbilcke-hf | "2023-10-21T22:05:24Z" | 1,139 | 2 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:jbilcke-hf/foundation",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-10-21T18:00:36Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: hober-mallow
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
datasets:
- jbilcke-hf/foundation
---
# LoRA DreamBooth - jbilcke-hf/sdxl-foundation-2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
hober-mallow
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To use the base model together with the trained LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'jbilcke-hf/sdxl-foundation-2',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic hober-mallow jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
Norod78/SDXL-YarnArtStyle-LoRA | Norod78 | "2024-01-02T19:43:50Z" | 1,139 | 35 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"dataset:Norod78/Yarn-art-style",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] | text-to-image | "2024-01-02T19:42:48Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Rick Sanchez Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00016-20240102204306-7790-Rick Sanchez Yarn art style
_lora_SDXL_Yarn_Art_Style_1.0_.jpg
- text: Wonderwoman Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00004-20240102203004-7779-Wonderwoman Yarn art style
_lora_SDXL_Yarn_Art_Style_0.8_.jpg
- text: A socially awkward potato Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00007-20240102203502-7778-A socially awkward potato Yarn art style
_lora_SDXL_Yarn_Art_Style_1.0_.jpg
- text: The girl with a pearl earring Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00018-20240102204642-7800-The girl with a pearl earring Yarn art
style _lora_SDXL_Yarn_Art_Style_1.0_.jpg
- text: The Starry Night Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00021-20240102205400-7779-The Starry Night Yarn art style
_lora_SDXL_Yarn_Art_Style_1.0_.jpg
- text: Snoop Dogg Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00026-20240102205719-7777-Snoop Dogg Yarn art
style-before-highres-fix.jpg
- text: A rainbow unicorn Yarn art style
parameters:
negative_prompt: unfocused, blurry, grainy
output:
url: >-
images/00052-20240102211945-7778-A rainbow unicorn Yarn art style
_lora_SDXL_Yarn_Art_Style_1.0_.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Yarn art style
license: openrail
datasets:
- Norod78/Yarn-art-style
---
# SDXL Yarn art style
<Gallery />
## Model description
# SDXL Yarn Art Style
Use 'Yarn art style' in your prompts
Trained on 17 MidJourney generated images [available here](https://huggingface.co/datasets/Norod78/Yarn-art-style)
The model was trained using CivitAI's built-in training feature.
## Trigger words
You should use `Yarn art style` to trigger the image generation.
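As a minimal sketch with 🤗 diffusers (this assumes `load_lora_weights` can locate the LoRA file in this repository automatically; otherwise pass `weight_name` explicitly):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Norod78/SDXL-YarnArtStyle-LoRA")

# Include the trigger words in the prompt.
image = pipe(
    "A rainbow unicorn Yarn art style",
    negative_prompt="unfocused, blurry, grainy",
).images[0]
image.save("yarn_unicorn.png")
```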
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/SDXL-YarnArtStyle-LoRA/tree/main) them in the Files & versions tab. |
kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34 | kimwooglae | "2024-01-23T15:58:44Z" | 1,139 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T15:29:31Z" | ---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Base Model**
[LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
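
# Example generation (illustrative settings, not from the original card):
prompt = "Hello, please introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))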
```
--- |
mssma/ko-solar-10.7b-v0.4 | mssma | "2024-05-21T08:48:11Z" | 1,138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-21T07:49:54Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
# usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "mssma/ko-solar-10.7b-v0.4"
model = AutoModelForCausalLM.from_pretrained(
path,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(path)
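
# Example generation (illustrative settings, not from the original card):
prompt = "Hello, please introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))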
``` |
tungtv/mistral | tungtv | "2024-06-30T15:16:40Z" | 1,138 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-30T15:12:44Z" | Entry not found |
TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF | TheBloke | "2023-09-27T12:47:10Z" | 1,137 | 27 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-llama2-13b-v11.1-bf16",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-02T09:49:00Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
license: llama2
library_name: transformers
model_name: OpenBuddy Llama2 13B v11.1
base_model: OpenBuddy/openbuddy-llama2-13b-v11.1-bf16
inference: false
model_creator: OpenBuddy
model_type: llama
pipeline_tag: text-generation
prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\
\ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\
\ as possible, while being safe. Your answers should not include any harmful, political,\
\ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\
\ ensure that your responses are socially unbiased and positive in nature.\nIf a\
\ question does not make any sense, or is not factually coherent, explain why instead\
\ of answering something not correct. If you don't know the answer to a question,\
\ please don't share false information.\nYou like to use emojis. You can speak fluently\
\ in many languages, for example: English, Chinese.\nYou cannot access the internet,\
\ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\
\ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\
\ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\
\ {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenBuddy Llama2 13B v11.1 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [OpenBuddy Llama2 13B v11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's OpenBuddy Llama2 13B v11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-llama2-13b-v11.1.Q2_K.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q2_K.gguf) | Q2_K | 2 | 5.46 GB| 7.96 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-llama2-13b-v11.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.70 GB| 8.20 GB | very small, high quality loss |
| [openbuddy-llama2-13b-v11.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.37 GB| 8.87 GB | very small, high quality loss |
| [openbuddy-llama2-13b-v11.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.97 GB| 9.47 GB | small, substantial quality loss |
| [openbuddy-llama2-13b-v11.1.Q4_0.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q4_0.gguf) | Q4_0 | 4 | 7.41 GB| 9.91 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-llama2-13b-v11.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.45 GB| 9.95 GB | small, greater quality loss |
| [openbuddy-llama2-13b-v11.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.91 GB| 10.41 GB | medium, balanced quality - recommended |
| [openbuddy-llama2-13b-v11.1.Q5_0.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q5_0.gguf) | Q5_0 | 5 | 9.02 GB| 11.52 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-llama2-13b-v11.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q5_K_S.gguf) | Q5_K_S | 5 | 9.02 GB| 11.52 GB | large, low quality loss - recommended |
| [openbuddy-llama2-13b-v11.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.27 GB| 11.77 GB | large, very low quality loss - recommended |
| [openbuddy-llama2-13b-v11.1.Q6_K.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q6_K.gguf) | Q6_K | 6 | 10.73 GB| 13.23 GB | very large, extremely low quality loss |
| [openbuddy-llama2-13b-v11.1.Q8_0.gguf](https://huggingface.co/TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF/blob/main/openbuddy-llama2-13b-v11.1.Q8_0.gguf) | Q8_0 | 8 | 13.89 GB| 16.39 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF and below it, a specific filename to download, such as: openbuddy-llama2-13b-v11.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF openbuddy-llama2-13b-v11.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF openbuddy-llama2-13b-v11.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-llama2-13b-v11.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF", model_file="openbuddy-llama2-13b-v11.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
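As a brief sketch of the ctransformers route (the parameters follow the LangChain integration linked above; the exact settings here are illustrative):
```python
from langchain.llms import CTransformers

# Load the GGUF quant through LangChain's ctransformers wrapper.
llm = CTransformers(
    model="TheBloke/OpenBuddy-Llama2-13B-v11.1-GGUF",
    model_file="openbuddy-llama2-13b-v11.1.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)
print(llm("AI is going to"))
```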
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's OpenBuddy Llama2 13B v11.1
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
<!-- original-model-card end -->
|
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.1 | AIFT | "2024-01-22T08:26:13Z" | 1,137 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T08:17:35Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v1.1</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed the data to extract the related tasks, and based on those tasks
we built training data ourselves from open-source NLP datasets.
History, science, math, machine reading comprehension, and review-analysis problems were constructed with GPT,
and additional training data was built from the AI Hub general-knowledge and machine reading comprehension datasets (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
Following the format of the AI2AI Challenge data, about 500 elementary-level science and math problems were created with GPT.
English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 examples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions)
+ Machine reading comprehension training data, with answers obtained from ChatGPT
+ Grammar-related training data
<br>
### The training data files are private.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs. |
AIFT/AIFT-instruct-SFT-1.3B-v1.1 | AIFT | "2024-02-22T12:35:34Z" | 1,137 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-22T05:53:50Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed the data to extract the related tasks, and based on those tasks
we built training data ourselves from open-source NLP datasets.
History, science, math, machine reading comprehension, and review-analysis problems were constructed with GPT,
and additional training data was built from the AI Hub general-knowledge and machine reading comprehension datasets (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
Following the format of the AI2AI Challenge data, about 500 elementary-level science and math problems were created with GPT.
English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 examples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions)
+ Machine reading comprehension training data, with answers obtained from ChatGPT
+ Grammar-related training data
<br>
### The training data files are private.
<br>
<Model>
<br>
Training was performed using 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was performed with LoRA on 2x A100 40G GPUs.
|
anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g | anon8231489123 | "2023-04-02T13:22:11Z" | 1,136 | 733 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-01T01:04:45Z" |
Update (4/1): Added ggml for Cuda model
Dataset is here (instruct): https://github.com/teknium1/GPTeacher
Okay... Two different models now. One generated in the Triton branch, one generated in Cuda. Use the Cuda one for now unless the Triton branch becomes widely used.
Cuda info (use this one):
Command:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca --wbits 4 --true-sequential --groupsize 128 --save gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
Prev. info
Quantized on GPTQ-for-LLaMa commit 5955e9c67d9bfe8a8144ffbe853c2769f1e87cdd
GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca
Note: This was quantized with this branch of GPTQ-for-LLaMA: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
Because of this, it appears to be incompatible with Oobabooga at the moment. Stay tuned?
Command:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca --wbits 4 --true-sequential --act-order --groupsize 128 --save gpt-x-alpaca-13b-native-4bit-128g.pt
|
paulml/OGNO-7B | paulml | "2024-02-12T17:31:30Z" | 1,136 | 17 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Omningotex-7b-slerp",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:liminerity/Omningotex-7b-slerp",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-12T17:21:49Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
base_model:
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
license: cc-by-nc-4.0
---
# OGNO-7B
OGNO-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp)
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Omningotex-7b-slerp
layer_range: [0, 32]
- model: eren23/dpo-binarized-NeutrixOmnibe-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Omningotex-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/OGNO-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3 | jungyuko | "2024-03-06T07:39:04Z" | 1,136 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T06:34:02Z" | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 3.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
mradermacher/llama-2-7b-chat-sexed-version3-GGUF | mradermacher | "2024-05-06T06:01:49Z" | 1,136 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ben-wycliff/llama-2-7b-chat-sexed-version3",
"endpoints_compatible",
"region:us"
] | null | "2024-03-24T00:45:11Z" | ---
base_model: ben-wycliff/llama-2-7b-chat-sexed-version3
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/ben-wycliff/llama-2-7b-chat-sexed-version3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-chat-sexed-version3-GGUF/resolve/main/llama-2-7b-chat-sexed-version3.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BeaverAI/Llama-3SOME-8B-v2d-GGUF | BeaverAI | "2024-06-06T22:58:29Z" | 1,136 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T22:56:52Z" | Entry not found |
acbdkk/SupaMATH | acbdkk | "2024-07-01T15:25:17Z" | 1,136 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-01T15:13:17Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** acbdkk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Note: please use Unsloth. It is 2x faster, and llama-3-8b can even be fine-tuned for free in Google Colab! Additionally, Codellama-34b can be fine-tuned on an A100 in Google Colab! There is simply no excuse not to use Unsloth. |
stablediffusionapi/all-526 | stablediffusionapi | "2023-04-26T20:04:01Z" | 1,135 | 3 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-04-26T20:02:20Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# All 526 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below, and change **model_id** to "all-526".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/all-526)
Credits: [View credits](https://civitai.com/?query=All%20526)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "all-526",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
TheBloke/storytime-13B-GGUF | TheBloke | "2023-09-27T12:54:22Z" | 1,135 | 11 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:chargoddard/storytime-13b",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | "2023-09-23T23:27:19Z" | ---
language:
- en
license: llama2
tags:
- llama
model_name: Storytime 13B
base_model: chargoddard/storytime-13b
inference: false
model_creator: Charles Goddard
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Storytime 13B - GGUF
- Model creator: [Charles Goddard](https://huggingface.co/chargoddard)
- Original model: [Storytime 13B](https://huggingface.co/chargoddard/storytime-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Charles Goddard's Storytime 13B](https://huggingface.co/chargoddard/storytime-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/storytime-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/storytime-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/storytime-13B-GGUF)
* [Charles Goddard's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chargoddard/storytime-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [storytime-13b.Q2_K.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [storytime-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [storytime-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [storytime-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [storytime-13b.Q4_0.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [storytime-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [storytime-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [storytime-13b.Q5_0.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [storytime-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [storytime-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [storytime-13b.Q6_K.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [storytime-13b.Q8_0.gguf](https://huggingface.co/TheBloke/storytime-13B-GGUF/blob/main/storytime-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/storytime-13B-GGUF and below it, a specific filename to download, such as: storytime-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/storytime-13B-GGUF storytime-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/storytime-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/storytime-13B-GGUF storytime-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m storytime-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
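For example, a minimal interactive invocation — a sketch built from the command above, with `-p` swapped for `-i -ins` — might look like this:

```shell
./main -ngl 32 -m storytime-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```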
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/storytime-13B-GGUF", model_file="storytime-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain (a minimal sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
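As a rough illustration of the llama-cpp-python route, here is a minimal sketch. It assumes `langchain` and `llama-cpp-python` are installed and the GGUF file has already been downloaded locally; the exact import path for `LlamaCpp` varies across LangChain versions, so treat the guides above as authoritative.

```python
from langchain.llms import LlamaCpp

# Point at a locally downloaded GGUF file; n_gpu_layers controls optional GPU offload.
llm = LlamaCpp(
    model_path="storytime-13b.Q4_K_M.gguf",
    n_gpu_layers=32,
    n_ctx=4096,
    temperature=0.7,
)

print(llm("AI is going to"))
```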
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Charles Goddard's Storytime 13B
Chat model with a storytelling bent.
Recipe:
* [Chronorctypus-Limarobormes](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) base
* a healthy SLERPing of [ReMM-v2.2-L2-13B](https://huggingface.co/Undi95/ReMM-v2.2-L2-13B)
* [Llama-2-13B-Storywriter](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA) x 0.5
* WIP storytelling LORA
Responds well to the Alpaca prompt format.
<!-- original-model-card end -->
|
AIFT/AIFT-instruct-SFT-1.3B-v2.1.1 | AIFT | "2024-02-27T23:17:58Z" | 1,135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-27T23:13:25Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1>
<br>
version 2.1.1
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to identify the relevant tasks, and on that basis
built our own training data from open-source NLP datasets to match those tasks:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AI Hub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ TruthfulQA-style problems were added (true/false questions about common myths).
+ Machine reading comprehension training data was built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training used 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
AIFT/AIFT-instruct-SFT-1.3B-refine-v3 | AIFT | "2024-02-28T11:38:19Z" | 1,135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-28T11:33:45Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B-REFINE-V3</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to identify the relevant tasks, and on that basis
built our own training data from open-source NLP datasets to match those tasks:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AI Hub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ TruthfulQA-style problems were added (true/false questions about common myths).
+ Machine reading comprehension training data was built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training used 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
Josephgflowers/Phi-3-mini-4k-instruct-Cinder-llamafied-with-16bit-GGUF | Josephgflowers | "2024-04-28T00:00:28Z" | 1,135 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"nlp",
"code",
"conversational",
"en",
"dataset:Josephgflowers/just_cinder",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-26T21:50:02Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- text: |
<|system|>
You are a helpful assistant.<|end|>
<|user|>
datasets:
- Josephgflowers/just_cinder
---
I am really enjoying this version of Cinder. More information is coming. The training mix includes Cinder character-specific data alongside RAG-generated Q&A on world knowledge and STEM topics. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder, with many of the negative responses removed.
Model overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.

## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
These code snippets show how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py); a heavily abbreviated sketch follows.
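The sketch below is illustrative only — the dataset is a placeholder and `SFTTrainer` argument names vary across TRL versions, so treat the linked script as the authoritative reference:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Placeholder dataset purely for illustration; substitute your own chat-formatted data.
dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```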
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
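A minimal sketch of the eager-attention fallback for older GPUs, reusing the model id from the sample inference code above:

```python
from transformers import AutoModelForCausalLM

# "eager" attention avoids the flash-attention hardware requirement on V100-class GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```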
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. |
bartowski/Phi-3-Context-Obedient-RAG-GGUF | bartowski | "2024-05-11T15:04:37Z" | 1,135 | 3 | null | [
"gguf",
"text-generation",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | "2024-05-11T14:54:41Z" | ---
license: cc-by-sa-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Phi-3-Context-Obedient-RAG
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2828">b2828</a> for quantization.
Original model: https://huggingface.co/TroyDoesAI/Phi-3-Context-Obedient-RAG
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<s><|user|> {prompt}<|end|><|assistant|><|end|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Phi-3-Context-Obedient-RAG-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. |
| [Phi-3-Context-Obedient-RAG-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. |
| [Phi-3-Context-Obedient-RAG-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. |
| [Phi-3-Context-Obedient-RAG-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. |
| [Phi-3-Context-Obedient-RAG-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Phi-3-Context-Obedient-RAG-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. |
| [Phi-3-Context-Obedient-RAG-IQ4_NL.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ4_NL.gguf) | IQ4_NL | 2.17GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Phi-3-Context-Obedient-RAG-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-3-Context-Obedient-RAG-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. |
| [Phi-3-Context-Obedient-RAG-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. |
| [Phi-3-Context-Obedient-RAG-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-3-Context-Obedient-RAG-IQ3_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ3_S.gguf) | IQ3_S | 1.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Phi-3-Context-Obedient-RAG-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. |
| [Phi-3-Context-Obedient-RAG-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-3-Context-Obedient-RAG-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Phi-3-Context-Obedient-RAG-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. |
| [Phi-3-Context-Obedient-RAG-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Phi-3-Context-Obedient-RAG-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. |
| [Phi-3-Context-Obedient-RAG-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. |
| [Phi-3-Context-Obedient-RAG-IQ2_XXS.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ2_XXS.gguf) | IQ2_XXS | 1.04GB | Lower quality, uses SOTA techniques to be usable. |
| [Phi-3-Context-Obedient-RAG-IQ1_M.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ1_M.gguf) | IQ1_M | .91GB | Extremely low quality, *not* recommended. |
| [Phi-3-Context-Obedient-RAG-IQ1_S.gguf](https://huggingface.co/bartowski/Phi-3-Context-Obedient-RAG-GGUF/blob/main/Phi-3-Context-Obedient-RAG-IQ1_S.gguf) | IQ1_S | .84GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Phi-3-Context-Obedient-RAG-GGUF --include "Phi-3-Context-Obedient-RAG-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Phi-3-Context-Obedient-RAG-GGUF --include "Phi-3-Context-Obedient-RAG-Q8_0.gguf/*" --local-dir Phi-3-Context-Obedient-RAG-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Phi-3-Context-Obedient-RAG-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double-check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Ashmal/MBZUAI-ORYX-new | Ashmal | "2024-06-05T21:31:25Z" | 1,135 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-04T08:57:58Z" | ---
library_name: transformers
license: apache-2.0
---
This is the Arabic test model built at MBZUAI. More details of the projects will be announced later along with the release. This model card is just to test the capabilities of this model on Arabic benchmarks. |
tapan247/myllama-7b-v0.1.gguf | tapan247 | "2024-06-29T12:02:52Z" | 1,135 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-29T11:59:08Z" | Entry not found |
vasista22/whisper-hindi-medium | vasista22 | "2023-04-24T21:13:26Z" | 1,134 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-01-14T14:23:12Z" | ---
language:
- hi
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Hindi Medium - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: hi_in
split: test
metrics:
- type: wer
value: 6.82
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
metrics:
- type: wer
value: 11.38
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Hindi Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Hindi data available from multiple publicly available ASR corpuses.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-hindi-medium", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-hindi-medium", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [GramVaani ASR Corpus](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#hindi-labelled--total-duration-is-239876-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
Evaluation Data:
- [GramVaani ASR Corpus Test Set](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20000
- training_steps: 38754 (Initially set to 129180 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India. |
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo | AIFT | "2024-01-24T01:04:16Z" | 1,134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T00:18:38Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to identify the relevant tasks, and on that basis
built our own training data from open-source NLP datasets to match those tasks:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AI Hub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 samples were used.
<br>
<DPO training data>
For DPO, about 17,000 samples focused on CommonGen and TruthfulQA were used for training.
<br>
+ TruthfulQA-style problems were added (true/false questions about common myths).
+ Machine reading comprehension training data was built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs.
|
Dragonstalker/Pony_Diffusion_v6 | Dragonstalker | "2024-02-27T01:55:04Z" | 1,134 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-01-28T03:24:25Z" | Entry not found |
AIFT/AIFT-instruct-dpo-v1.3-42dot_LLM-SFT-1.3B | AIFT | "2024-02-01T03:25:12Z" | 1,134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-01T01:24:11Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to identify the relevant tasks, and on that basis
built our own training data from open-source NLP datasets to match those tasks:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AI Hub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ TruthfulQA-style problems were added (true/false questions about common myths).
+ Machine reading comprehension training data was built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
<br>
DPO dataset
<br>
The chosen responses of the ko-HH-RLHF data were regenerated with gpt-3.5-turbo and used for training.
In addition, about 1,200 TruthfulQA samples were created in-house.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training used 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
rajtest/tinyllama-v3 | rajtest | "2024-06-27T17:20:33Z" | 1,134 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gguf",
"llama",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | "2024-06-27T14:34:24Z" | ---
base_model: unsloth/tinyllama-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- unsloth
- generated_from_trainer
model-index:
- name: tinyllama-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-v3
This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 525
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
John6666/wai-real-mix-v8-sdxl | John6666 | "2024-06-29T10:05:23Z" | 1,134 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-29T10:00:42Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
- SPO
---
Original model is [here](https://civitai.com/models/393905/wai-realmix?modelVersionId=606365).
|
leepokai/un-censored-zh-ver1 | leepokai | "2024-06-30T09:18:35Z" | 1,134 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-30T09:07:57Z" | Entry not found |
unc-nlp/lxmert-vqa-uncased | unc-nlp | "2020-09-10T17:57:42Z" | 1,133 | 1 | transformers | [
"transformers",
"pytorch",
"lxmert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | Entry not found |
microsoft/table-transformer-structure-recognition-v1.1-pub | microsoft | "2023-11-27T10:40:53Z" | 1,133 | 2 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:2303.00716",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-11-18T21:26:45Z" | ---
license: mit
---
# Table Transformer (pre-trained for Table Structure Recognition)
Table Transformer (TATR) model trained on PubTables1M. It was introduced in the paper [Aligning benchmark datasets for table structure recognition](https://arxiv.org/abs/2303.00716) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer).
Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.
## Usage
You can use the raw model for recognizing the structure (rows, columns, and cells) of tables in documents, as sketched below. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
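A minimal sketch (the input filename is a placeholder; structure recognition expects an image cropped to a single table):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

image = Image.open("table_crop.png").convert("RGB")  # placeholder: a cropped table image

processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-structure-recognition-v1.1-pub")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-structure-recognition-v1.1-pub")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold and map label ids to names
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```
|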
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo-2 | AIFT | "2024-01-25T07:54:44Z" | 1,133 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-25T07:17:06Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo-2 model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<DPO training data>
The DPO data focused on CommonGen and TruthfulQA, with about 17,000 training samples.
+ We additionally trained on data in which the chosen responses of the ko-hh-rlhf dataset were regenerated via ChatGPT.
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs.
|
AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0 | AIdenU | "2024-03-07T23:05:28Z" | 1,133 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-19T00:55:41Z" | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- llama2
---
### BaseModel
- [AIdenU/LLAMA-2-13b-ko-Y24_v2.0](https://huggingface.co/AIdenU/LLAMA-2-13b-ko-Y24_v2.0)
### Model Generation
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0", use_fast=True)
systemPrompt = "당신은 유능한 AI입니다."
prompt = "지렁이도 밟으면 꿈틀하나요?"
outputs = model.generate(
**tokenizer(
f"[INST] <<SYS>>\n{systemPrompt}\n<</SYS>>\n\n{prompt} [/INST] ",
return_tensors='pt'
).to('cuda'),
max_new_tokens=256,
temperature=0.2,
top_p=1,
do_sample=True
)
print(tokenizer.decode(outputs[0]))
``` |
AIFT/AIFT-instruct-SFT-dpo-1.3B-v1.1 | AIFT | "2024-02-22T23:39:00Z" | 1,133 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-22T12:33:00Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-DPO-1.3B</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are private.
<br>
<Model>
<br>
Training was based on 42dot_LLM-SFT-1.3B released by 42dot.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
Azure99/blossom-v5.1-9b | Azure99 | "2024-07-01T14:26:33Z" | 1,133 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"dataset:Azure99/blossom-chat-v3",
"dataset:Azure99/blossom-math-v4",
"dataset:Azure99/blossom-wizard-v3",
"dataset:Azure99/blossom-orca-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-15T07:30:03Z" | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v3
- Azure99/blossom-math-v4
- Azure99/blossom-wizard-v3
- Azure99/blossom-orca-v3
language:
- zh
- en
---
# **BLOSSOM-v5.1-9b**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Yi-1.5-9B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.
Training was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs.
### Inference
Inference is performed in the form of dialogue continuation.
Single-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|:
```
Multi-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?<|endoftext|>
|Human|: Generate a random number using python
|Bot|:
```
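Putting this together with transformers, a minimal generation sketch might look as follows (loading options and decoding settings here are illustrative, not from the authors):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azure99/blossom-v5.1-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

system = ("A chat between a human and an artificial intelligence bot. The bot gives "
          "helpful, detailed, and polite answers to the human's questions.")
prompt = f"{system}\n|Human|: hello\n|Bot|: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# decode only the newly generated continuation
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```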
Note: At the end of the Bot's output in the historical conversation, append a `<|endoftext|>`. |
SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF | SicariusSicariiStuff | "2024-06-08T19:29:56Z" | 1,133 | 0 | null | [
"gguf",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-08T04:32:32Z" | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_GGUF</b>
</div>
<img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine-tuned for tasks. I did the finetune using SOTA techniques and my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook.
# Future Plans
I plan to perform a SLERP merge with one of my other fine-tuned models, which has a bit more knowledge about Israeli topics. Additionally, I might create a larger model using MergeKit, but we'll see how it goes.
# Looking for Sponsors
Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster.
# Papers?
Maybe. We'll see. No promises here 🤓
# Contact Details
I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone.
# Versions and QUANTS
- Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha)
- Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF)
# Model architecture
Based on Mistral 7B. I didn't even bother to alter the tokenizer.
# The recommended prompt setting is Debug-deterministic:
```
temperature: 1
top_p: 1
top_k: 1
typical_p: 1
min_p: 1
repetition_penalty: 1
```
# The recommended instruction template is Mistral:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- message['content'] -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}}
{%- else -%}
{{-'' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-''-}}
{%- endif -%}
```
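The recommended settings above amount to greedy decoding (`top_k: 1`). With the GGUF files, they can be reproduced via llama-cpp-python, for example (a sketch; the model filename below is a placeholder for whichever quant you download, and the prompt follows the Mistral template above):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="zion_alpha_instruction_tuned.Q5_0.gguf")  # placeholder filename

prompt = "[INST] Write a short story in Hebrew. [/INST]"
out = llm(prompt, max_tokens=256, temperature=1.0, top_k=1, top_p=1.0, repeat_penalty=1.0)
print(out["choices"][0]["text"])
```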
# English to Hebrew example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b>
</div>
<img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
# Hebrew to English example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b>
</div>
<img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
<div align="center">
<b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b>
[](https://www.youtube.com/watch?v=YYKeovnS0do)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b>
[](https://www.youtube.com/watch?v=DQFtx8M2txc)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b>
[](https://www.youtube.com/watch?v=w5fz3Ot6tH8)
</div>
### History
The model was originally trained about two months after Mistral (v0.1) was released.
As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score)
<img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;">
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
|
gglabs/TinyLM-Chat-0611-1-epoch | gglabs | "2024-06-11T16:30:42Z" | 1,133 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T12:13:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Felladrin/gguf-Qwen1.5-0.5B-Chat_llamafy | Felladrin | "2024-06-23T02:27:03Z" | 1,133 | 0 | null | [
"gguf",
"base_model:Minami-su/Qwen1.5-0.5B-Chat_llamafy",
"license:other",
"region:us"
] | null | "2024-06-23T02:19:03Z" | ---
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
base_model: Minami-su/Qwen1.5-0.5B-Chat_llamafy
---
GGUF version of [Minami-su/Qwen1.5-0.5B-Chat_llamafy](https://huggingface.co/Minami-su/Qwen1.5-0.5B-Chat_llamafy). |
John6666/wai-doll-cn-v2-sdxl | John6666 | "2024-06-26T23:01:00Z" | 1,133 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"3DCG",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-26T22:55:51Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 3DCG
---
Original model is [here](https://civitai.com/models/531285/wai-dollcn?modelVersionId=600388).
|
timm/beit_base_patch16_384.in22k_ft_in22k_in1k | timm | "2023-05-08T23:20:12Z" | 1,132 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2106.08254",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-23T02:25:55Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for beit_base_patch16_384.in22k_ft_in22k_in1k
A BEiT image classification model. Trained on ImageNet-22k with self-supervised masked image modelling (MIM) using a DALL-E dVAE as visual tokenizer. Fine-tuned on ImageNet-22k and then ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.7
- GMACs: 55.5
- Activations (M): 101.6
- Image size: 384 x 384
- **Papers:**
- BEiT: BERT Pre-Training of Image Transformers: https://arxiv.org/abs/2106.08254
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beit_base_patch16_384.in22k_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beit_base_patch16_384.in22k_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{bao2021beit,
title={Beit: Bert pre-training of image transformers},
author={Bao, Hangbo and Dong, Li and Piao, Songhao and Wei, Furu},
journal={arXiv preprint arXiv:2106.08254},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
philschmid/flan-t5-xxl-sharded-fp16 | philschmid | "2023-03-08T15:30:42Z" | 1,132 | 52 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"endpoints-template",
"arxiv:2210.11416",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-01-27T15:05:31Z" | ---
license: apache-2.0
tags:
- endpoints-template
---
# FORK of FLAN-T5 XXL
> This is a fork of google/flan-t5-xxl implementing a custom `handler.py` as an example for how to use t5-11b with inference-endpoints on a single NVIDIA A10G.
You can deploy the flan-t5-xxl with a [1-click](https://ui.endpoints.huggingface.co/new?repository=philschmid/flan-t5-xxl-sharded-fp16).
Since we are using the "quantized" version, we can switch our instance type to **"GPU [medium] · 1x Nvidia A10G"**.

# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages.
As mentioned in the first few lines of the abstract :
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
|
timm/densenet169.tv_in1k | timm | "2023-04-21T22:54:43Z" | 1,132 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1608.06993",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-21T22:54:20Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for densenet169.tv_in1k
A DenseNet image classification model. Trained on ImageNet-1k (original torchvision weights).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 14.1
- GMACs: 3.4
- Activations (M): 7.3
- Image size: 224 x 224
- **Papers:**
- Densely Connected Convolutional Networks: https://arxiv.org/abs/1608.06993
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('densenet169.tv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'densenet169.tv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1280, 14, 14])
# torch.Size([1, 1664, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'densenet169.tv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1664, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{huang2017densely,
title={Densely Connected Convolutional Networks},
author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q },
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2017}
}
```
|
reeducator/bluemoonrp-13b | reeducator | "2023-05-24T16:10:42Z" | 1,132 | 41 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:gozfarb/bluemoon_roleplay_300k_vicuna",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-04T17:33:47Z" | ---
datasets:
- gozfarb/bluemoon_roleplay_300k_vicuna
language:
- en
---
## General
Bluemoon roleplay finetune of LLaMA 13B (2 roleplayers only).
## Models
Two models are provided, labeled (1) `4k-epoch6` and (2) `epoch3` (other branch). In the case of (1), training is extended over more epochs to reduce the high training loss observed in (2). This release also tests a longer 4k-token context size achieved with ALiBi.
*GGML 4-bit for llama.cpp*<br/>
1. ggml-bluemoonrp-13b-4k-epoch6-q5_0.bin
2. ggml-bluemoonrp-13b-epoch3-q5_0.bin
*GPTQ 4-bit CUDA:*<br/>
1. bluemoonrp-13b-4k-epoch6-4bit-128g.safetensors
2. bluemoonrp-13b-epoch3-4bit-128g.safetensors
## Remarks
This model has been trained using the following prompt (Vicuna 1.1 format):
```
A transcript of a roleplay between two players, LEAD and ASSOCIATE. LEAD sets up a scenario and the characters, from which ASSOCIATE then assumes a character role and continues the story for that role in response to description given by LEAD. The story and characters are developed by exchange of detailed event descriptions and character dialogs, successively given by both LEAD and ASSOCIATE.
LEAD: [role1 message]
ASSOCIATE: [role2 message]</s>
```
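For reference, a small helper that assembles this prompt from a message history (a sketch; roles and the trailing `</s>` on ASSOCIATE turns follow the template above):
```python
def build_prompt(turns):
    """turns: list of (role, text) pairs, with role in {"LEAD", "ASSOCIATE"}."""
    header = (
        "A transcript of a roleplay between two players, LEAD and ASSOCIATE. "
        "LEAD sets up a scenario and the characters, from which ASSOCIATE then "
        "assumes a character role and continues the story for that role in "
        "response to description given by LEAD. The story and characters are "
        "developed by exchange of detailed event descriptions and character "
        "dialogs, successively given by both LEAD and ASSOCIATE.\n\n"
    )
    body = ""
    for role, text in turns:
        suffix = "</s>\n" if role == "ASSOCIATE" else "\n"  # close ASSOCIATE turns with </s>
        body += f"{role}: {text}{suffix}"
    return header + body + "ASSOCIATE:"
```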
|
ntc-ai/SDXL-LoRA-slider.extremely-detailed | ntc-ai | "2024-02-06T00:28:08Z" | 1,132 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2023-12-10T11:40:59Z" |
---
language:
- en
thumbnail: "images/extremely detailed_17_3.0.png"
widget:
- text: extremely detailed
output:
url: images/extremely detailed_17_3.0.png
- text: extremely detailed
output:
url: images/extremely detailed_19_3.0.png
- text: extremely detailed
output:
url: images/extremely detailed_20_3.0.png
- text: extremely detailed
output:
url: images/extremely detailed_21_3.0.png
- text: extremely detailed
output:
url: images/extremely detailed_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "extremely detailed"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - extremely detailed (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/extremely detailed_17_-3.0.png" width=256 height=256 /> | <img src="images/extremely detailed_17_0.0.png" width=256 height=256 /> | <img src="images/extremely detailed_17_3.0.png" width=256 height=256 /> |
| <img src="images/extremely detailed_19_-3.0.png" width=256 height=256 /> | <img src="images/extremely detailed_19_0.0.png" width=256 height=256 /> | <img src="images/extremely detailed_19_3.0.png" width=256 height=256 /> |
| <img src="images/extremely detailed_20_-3.0.png" width=256 height=256 /> | <img src="images/extremely detailed_20_0.0.png" width=256 height=256 /> | <img src="images/extremely detailed_20_3.0.png" width=256 height=256 /> |
See more at [https://sliders.ntcai.xyz/sliders/app/loras/4f6174b8-2db1-42ab-80e1-b341235dd6ac](https://sliders.ntcai.xyz/sliders/app/loras/4f6174b8-2db1-42ab-80e1-b341235dd6ac)
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
extremely detailed
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.extremely-detailed', weight_name='extremely detailed.safetensors', adapter_name="extremely detailed")
# Activate the LoRA
pipe.set_adapters(["extremely detailed"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, extremely detailed"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14600+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges, opening up endless possibilities.
Your support on Patreon will allow us to continue developing new models and tools.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
4n3mone/KoSOLAR_merge_test_v0.1 | 4n3mone | "2024-02-21T07:40:28Z" | 1,132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yanolja/KoSOLAR-10.7B-v0.3",
"conversational",
"ko",
"base_model:yanolja/KoSOLAR-10.7B-v0.3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-21T06:20:51Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- yanolja/KoSOLAR-10.7B-v0.3
- yanolja/KoSOLAR-10.7B-v0.3
base_model:
- yanolja/KoSOLAR-10.7B-v0.3
- yanolja/KoSOLAR-10.7B-v0.3
license: mit
language:
- ko
---
# KoSOLAR_merge_test_v0.1
KoSOLAR_merge_test_v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yanolja/KoSOLAR-10.7B-v0.3](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.3)
* [yanolja/KoSOLAR-10.7B-v0.3](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yanolja/KoSOLAR-10.7B-v0.3
layer_range: [0, 32]
- model: yanolja/KoSOLAR-10.7B-v0.3
layer_range: [0, 32]
merge_method: slerp
base_model: yanolja/KoSOLAR-10.7B-v0.3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "4n3mone/KoSOLAR_merge_test_v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v2.1 | AIFT | "2024-02-29T08:53:05Z" | 1,132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-29T08:36:26Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v2.1</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
- Some MMLU data that had caused performance degradation in v1.2 was removed.
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs. |
kaist-ai/mistral-orpo-capybara-7k | kaist-ai | "2024-03-23T15:13:01Z" | 1,132 | 26 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"arxiv:2403.07691",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-23T12:20:43Z" | ---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-Capybara-7k
results:
- task:
type: text-generation
dataset:
name: AlpacaEval 2 (LC)
type: AlpacaEval
metrics:
- type: AlpacaEval 2.0
value: 15.88%
name: Win Rate
source:
url: https://tatsu-lab.github.io/alpaca_eval/
name: self-reported
- task:
type: text-generation
dataset:
name: MT-Bench
type: MT-Bench
metrics:
- type: MT-Bench
value: 7.444
name: Score
source:
url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
name: self-reported
---
# **Mistral-ORPO-Capybara-7k (7B)**
**Mistral-ORPO** is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the *[odds ratio preference optimization (ORPO)](https://arxiv.org/abs/2403.07691)*. With ORPO, the model directly learns the preference without the supervised fine-tuning warmup phase.
**Mistral-ORPO-Capybara-7k** is fine-tuned for **2.5 hours on four A100s** exclusively on the **7k** instances of the distilled Capybara paired multi-turn conversation dataset, [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized), by [Argilla](https://huggingface.co/argilla).
- **Github Repository**: https://github.com/xfactlab/orpo
## 👍 **Model Performance**
### 1) AlpacaEval & MT-Bench
|Model Name|Size|Align|MT-Bench|AlpacaEval 2.0 (LC)|
|:--------|:--------------:|:-------------------:|:------------:|:------------:|
|**Mistral-<tt>ORPO</tt>-Capybara-7k**|7B|<tt>ORPO</tt>|7.44|15.9|
|**Mistral-<tt>ORPO</tt>-β**|7B|<tt>ORPO</tt>|7.32|14.7|
|Zephyr β |7B|DPO|7.34|13.2|
|TULU-2-DPO |13B|DPO|7.00|11.6|
|Llama-2-Chat |7B|RLHF|6.27|5.4|
|Llama-2-Chat |13B|RLHF|6.65|8.4|
### 2) IFEval
| **Model Type** | **Prompt-Strict** | **Prompt-Loose** | **Inst-Strict** | **Inst-Loose** |
|--------------------|:-----------------:|:----------------:|:---------------:|:--------------:|
| **Mistral-ORPO-Capybara-7k** | 0.5083 | 0.5083 | 0.5827 | 0.6127 |
| **Mistral-ORPO-⍺** | 0.5009 | 0.5083 | 0.5995 | 0.6163 |
| **Mistral-ORPO-β** | 0.5287 | 0.5564 | 0.6355 | 0.6619 |
## 🗺️ **MT-Bench by Category**

## 🖥️ **Inference**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("kaist-ai/mistral-orpo-capybara-7k")
tokenizer = AutoTokenizer.from_pretrained("kaist-ai/mistral-orpo-capybara-7k")
# Apply chat template
query = [{'role': 'user', 'content': 'Hi! How are you doing?'}]
prompt = tokenizer.apply_chat_template(query, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors='pt')
# Generation with specific configurations
output = model.generate(
**inputs,
max_new_tokens=128,
do_sample=True,
temperature=0.7
)
response = tokenizer.batch_decode(output)
#<|user|>
#Hi! How are you doing?</s>
#<|assistant|>
#I'm doing well, thank you! How are you?</s>
```
## 📎 **Citation**
```
@misc{hong2024orpo,
title={ORPO: Monolithic Preference Optimization without Reference Model},
author={Jiwoo Hong and Noah Lee and James Thorne},
year={2024},
eprint={2403.07691},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
FinLang/finance-embeddings-investopedia | FinLang | "2024-04-30T10:08:41Z" | 1,132 | 3 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-04-22T15:45:42Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: cc-by-nc-4.0
---
# FinLang/finance-embeddings-investopedia
This is the Investopedia embedding model for finance applications by the FinLang team. The model is trained using our open-sourced finance dataset from https://huggingface.co/datasets/FinLang/investopedia-embedding-dataset
This is a finetuned embedding model on top of BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search in RAG applications.
This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.
## Plans
* The research paper will be published soon.
* We are working on a v2 version of the model where we are increasing the training corpus of financial data and using improved techniques for training embeddings.
## Usage (LLamaIndex)
Simply specify the Finlang embedding during the indexing procedure for your Financial RAG applications.
```
from llama_index.embeddings import HuggingFaceEmbedding
embed_model = HuggingFaceEmbedding(model_name="FinLang/investopedia_embedding")
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed (see https://huggingface.co/sentence-transformers):
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('FinLang/investopedia_embedding')
embeddings = model.encode(sentences)
print(embeddings)
```
Example code testing:
```
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("FinLang/investopedia_embedding")
query_1 = "What is a potential concern with allowing someone else to store your cryptocurrency keys, and is it possible to decrypt a private key?"
query_2 = "A potential concern is that the entity holding your keys has control over your cryptocurrency in a custodial relationship. While it is theoretically possible to decrypt a private key, with current technology, it would take centuries or millennia for the 115 quattuorvigintillion possibilities. Most hacks and thefts occur in wallets, where private keys are stored."
embedding_1 = model.encode(query_1)
embedding_2 = model.encode(query_2)
scores = (embedding_1*embedding_2).sum()
print(scores) # 0.862
```
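Since the embeddings are dense vectors, you can also compare them with an explicit cosine similarity via `sentence-transformers`' `util` module (a short sketch; the value differs from the raw dot product above unless the embeddings are normalized):
```python
from sentence_transformers import util

cos_score = util.cos_sim(embedding_1, embedding_2)  # 1x1 similarity matrix
print(float(cos_score))
```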
## Evaluation Results
We evaluate our model on unseen pairs of sentences for similarity and on unseen shuffled pairs of sentences for dissimilarity. Our evaluation suite contains sentence pairs from Investopedia (to test for proficiency on finance),
and from GooAQ, MS MARCO, stackexchange_duplicate_questions_title_title, and yahoo_answers_title_answer (to evaluate the model's ability to avoid forgetting after finetuning).
## License
Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.
## Citation [Coming Soon] |
BM-K/stupid_model | BM-K | "2024-01-02T23:47:41Z" | 1,131 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T09:57:54Z" | Entry not found |
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.1-dpo | AIFT | "2024-01-22T08:43:15Z" | 1,131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T08:32:38Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo model v1.1</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs. |
tlphams/solar-10.7b-merged-v0.1 | tlphams | "2024-04-01T00:33:30Z" | 1,131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"arxiv:2306.01708",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-29T08:43:02Z" | ---
license: cc-by-nc-sa-4.0
tags:
- merge
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) as a base.
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-ko-solar-dpo-v5.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v5.0)
* [krevas/SOLAR-10.7B](https://huggingface.co/krevas/SOLAR-10.7B)
* [hyeogi/SOLAR-10.7B-v1.6](https://huggingface.co/hyeogi/SOLAR-10.7B-v1.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: chihoonlee10/T3Q-ko-solar-dpo-v5.0
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: hyeogi/SOLAR-10.7B-v1.6
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: krevas/SOLAR-10.7B
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: upstage/SOLAR-10.7B-v1.0
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
Yntec/AnyLoRa-768 | Yntec | "2024-05-23T17:09:01Z" | 1,131 | 1 | diffusers | [
"diffusers",
"safetensors",
"art",
"artistic",
"anime",
"dreamshaper",
"lcm",
"Lykon",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-23T15:51:25Z" | ---
language:
- en
license: creativeml-openrail-m
tags:
- art
- artistic
- anime
- dreamshaper
- lcm
- Lykon
- diffusers
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
pipeline_tag: text-to-image
---
# AnyLoRa
This is the 768x768 version of this model for the Inference API. It uses the kl-f8-anime2.ckpt VAE for improved saturation over the blessed VAE and improved details over the 840K VAE.
Please consider supporting the author:
- on [Patreon](https://www.patreon.com/Lykon275)
- or [buy Lykon a coffee](https://snipfeed.co/lykon)
(Samples and prompts):

(Click for larger)
Top left: highquality, masterpiece, 1girl, Chi-Chi, close up, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar, CHIBI.
Top right: videogames, retro robert jordan pepperoni pizza, josephine wall winner, hidari, roll20 illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k, towel. comic art on canvas by kyoani and ROSSDRAWS and watched
Bottom left: analog 1988 movie screenshot Santa Claus with daughters enjoying cake with candles. sitting with a pretty cute little girl, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom
Bottom right: Highly detailed, High Quality, Masterpiece, beautiful, cute girl as toon link, teal headwear, glad Zelda
(Note it didn't pass the Santa test)
512x512 version: https://huggingface.co/Lykon/AnyLoRA |
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2 | AIFT | "2024-01-24T00:41:00Z" | 1,130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T00:17:41Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs. |
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.5 | AIFT | "2024-02-02T03:29:25Z" | 1,130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-01T02:33:33Z" | ---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v1.5</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was done with LoRA on two A100 40G GPUs. |
AIFT/AIFT-instruct-v1.6-42dot_LLM-SFT-1.3B | AIFT | "2024-02-06T03:13:20Z" | 1,130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-05T10:20:49Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-v1.6-42dot_LLM-SFT-1.3B</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks, and based on those tasks
we built our own training data from open-source NLP datasets:
history, science, math, machine reading comprehension, and review analysis problems were constructed via GPT;
additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization);
history and common-sense quizzes from various blogs were manually converted into training-data format;
following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created via GPT;
and English-Korean / Korean-English translation data was used as training data.
In total, about 40,000 samples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data built by obtaining answers via ChatGPT.
+ Grammar-related training data.
<br>
<br>
DPO dataset
<br>
The chosen responses of the ko-HH-RLHF data were regenerated via gpt-3.5-turbo and used for training.
In addition, about 1,200 TruthfulQA samples were created in-house.
<br>
###The training data files are private.
<br>
<Model>
<br>
Training was based on 42dot_LLM-SFT-1.3B released by 42dot.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
hotchpotch/japanese-bge-reranker-v2-m3-v1 | hotchpotch | "2024-04-01T02:40:22Z" | 1,130 | 7 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"ja",
"dataset:hotchpotch/JQaRA",
"dataset:shunk031/JGLUE",
"dataset:miracl/miracl",
"dataset:castorini/mr-tydi",
"dataset:unicamp-dl/mmarco",
"license:mit",
"region:us"
] | null | "2024-03-28T20:45:16Z" | ---
license: mit
datasets:
- hotchpotch/JQaRA
- shunk031/JGLUE
- miracl/miracl
- castorini/mr-tydi
- unicamp-dl/mmarco
language:
- ja
library_name: sentence-transformers
---
## hotchpotch/japanese-bge-reranker-v2-m3-v1
This is a series of rerankers (CrossEncoders) trained on Japanese text.
| Model name | layers | hidden_size |
| ----------------------------------------------------------------------------------------------------------------------------------- | ------ | ----------- |
| [hotchpotch/japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 6 | 384 |
| [hotchpotch/japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 12 | 384 |
| [hotchpotch/japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 12 | 768 |
| [hotchpotch/japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 24 | 1024 |
| [hotchpotch/japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 24 | 1024 |
For background on rerankers, the technical report, and evaluations, see the following (in Japanese):
- [Releasing the best-performing Japanese reranker / What is a reranker in the first place?](https://secon.dev/entry/2024/04/02/070000-japanese-reranker-release/)
- [Technical report on building the Japanese rerankers](https://secon.dev/entry/2024/04/02/080000-japanese-reranker-tech-report/)
## Usage
### SentenceTransformers
```python
from sentence_transformers import CrossEncoder
import torch
MODEL_NAME = "hotchpotch/japanese-bge-reranker-v2-m3-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CrossEncoder(MODEL_NAME, max_length=512, device=device)
if device == "cuda":
model.model.half()
query = "感動的な映画について"
passages = [
"深いテーマを持ちながらも、観る人の心を揺さぶる名作。登場人物の心情描写が秀逸で、ラストは涙なしでは見られない。",
"重要なメッセージ性は評価できるが、暗い話が続くので気分が落ち込んでしまった。もう少し明るい要素があればよかった。",
"どうにもリアリティに欠ける展開が気になった。もっと深みのある人間ドラマが見たかった。",
"アクションシーンが楽しすぎる。見ていて飽きない。ストーリーはシンプルだが、それが逆に良い。",
]
scores = model.predict([(query, passage) for passage in passages])
```
## HuggingFace transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn import Sigmoid
MODEL_NAME = "hotchpotch/japanese-bge-reranker-v2-m3-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.to(device)
model.eval()
if device == "cuda":
model.half()
query = "感動的な映画について"
passages = [
"深いテーマを持ちながらも、観る人の心を揺さぶる名作。登場人物の心情描写が秀逸で、ラストは涙なしでは見られない。",
"重要なメッセージ性は評価できるが、暗い話が続くので気分が落ち込んでしまった。もう少し明るい要素があればよかった。",
"どうにもリアリティに欠ける展開が気になった。もっと深みのある人間ドラマが見たかった。",
"アクションシーンが楽しすぎる。見ていて飽きない。ストーリーはシンプルだが、それが逆に良い。",
]
inputs = tokenizer(
[(query, passage) for passage in passages],
padding=True,
truncation=True,
max_length=512,
return_tensors="pt",
)
inputs = {k: v.to(device) for k, v in inputs.items()}
logits = model(**inputs).logits
activation = Sigmoid()
scores = activation(logits).squeeze().tolist()
```
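Either way, reranking then amounts to sorting the passages by score (a short sketch using the `scores` and `passages` from the examples above):
```python
ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.4f}\t{passage}")
```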
## Evaluation results
| Model Name | [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA) | [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | [JSQuAD](https://github.com/yahoojapan/JGLUE) |
| ------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------- |
| [japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 0.6136 | 0.9376 | 0.7411 | 0.9602 |
| [japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 0.6247 | 0.939 | 0.7776 | 0.9604 |
| [japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 0.6711 | 0.9337 | 0.818 | 0.9708 |
| [japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 0.7099 | 0.9364 | 0.8406 | 0.9773 |
| [japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 0.6918 | 0.9372 | 0.8423 | 0.9624 |
| [bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.673 | 0.9343 | 0.8374 | 0.9599 |
| [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 0.4718 | 0.7332 | 0.7666 | 0.7081 |
| [bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 0.2445 | 0.4905 | 0.6792 | 0.5757 |
| [cross-encoder-mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/corrius/cross-encoder-mmarco-mMiniLMv2-L12-H384-v1) | 0.5588 | 0.9211 | 0.7158 | 0.932 |
| [shioriha-large-reranker](https://huggingface.co/cl-nagoya/shioriha-large-reranker) | 0.5775 | 0.8458 | 0.8084 | 0.9262 |
| [bge-m3+all](https://huggingface.co/BAAI/bge-m3) | 0.576 | 0.904 | 0.7926 | 0.9226 |
| [bge-m3+dense](https://huggingface.co/BAAI/bge-m3) | 0.539 | 0.8642 | 0.7753 | 0.8815 |
| [bge-m3+colbert](https://huggingface.co/BAAI/bge-m3) | 0.5656 | 0.9064 | 0.7902 | 0.9297 |
| [bge-m3+sparse](https://huggingface.co/BAAI/bge-m3) | 0.5088 | 0.8944 | 0.6941 | 0.9184 |
| [JaColBERTv2](https://huggingface.co/bclavie/JaColBERTv2) | 0.5847 | 0.9185 | 0.6861 | 0.9247 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.554 | 0.8759 | 0.7722 | 0.8892 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 0.4917 | 0.869 | 0.7025 | 0.8565 |
| bm25 | 0.458 | 0.8408 | 0.4387 | 0.9002 |
## License
MIT License |
krnl/realisticVisionV51_v51VAE-inpainting | krnl | "2024-04-10T13:18:15Z" | 1,130 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-10T13:13:12Z" | ---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [realisticVisionV51_v51VAE-inpainting](https://civitai.com/models/4201?modelVersionId=130090). A minimal diffusers sketch follows.
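For example (a sketch only; it assumes the checkpoint loads with the standard inpainting pipeline, and the image/mask filenames and prompt are placeholders):
```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "krnl/realisticVisionV51_v51VAE-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # image to edit (placeholder)
mask_image = Image.open("mask.png").convert("RGB")   # white = region to repaint (placeholder)

result = pipe(
    prompt="RAW photo, a red brick wall, 8k uhd",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```
|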
google/vivit-b-16x2 | google | "2023-08-03T10:01:21Z" | 1,129 | 5 | transformers | [
"transformers",
"pytorch",
"vivit",
"video-classification",
"vision",
"arxiv:2103.15691",
"license:mit",
"endpoints_compatible",
"region:us"
] | video-classification | "2022-11-23T18:57:19Z" | ---
license: "mit"
tags:
- vision
- video-classification
---
# ViViT (Video Vision Transformer)
ViViT model as introduced in the paper [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Arnab et al. and first released in [this repository](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit).
Disclaimer: The team releasing ViViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ViViT is an extension of the [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/v4.27.0/model_doc/vit) to video.
We refer to the paper for details.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task, like video classification. See the [model hub](https://huggingface.co/models?filter=vivit) to look for fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/vivit).
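As a quick illustration, the raw backbone can be run on a 32-frame clip like this (a sketch; random frames stand in for a real video, and if this repo does not ship a processor config, the processor can be loaded from a fine-tuned ViViT checkpoint instead):
```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitModel

# 32 random 224x224 RGB frames as a stand-in for a real video clip
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2")
model = VivitModel.from_pretrained("google/vivit-b-16x2")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, num_tokens, hidden_size)
```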
### BibTeX entry and citation info
```bibtex
@misc{arnab2021vivit,
title={ViViT: A Video Vision Transformer},
author={Anurag Arnab and Mostafa Dehghani and Georg Heigold and Chen Sun and Mario Lučić and Cordelia Schmid},
year={2021},
eprint={2103.15691},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
timm/swin_large_patch4_window7_224.ms_in22k | timm | "2024-02-10T23:31:28Z" | 1,129 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2103.14030",
"license:mit",
"region:us"
] | image-classification | "2023-03-18T04:07:06Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-22k
---
# Model card for swin_large_patch4_window7_224.ms_in22k
A Swin Transformer image classification model. Pretrained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 228.6
- GMACs: 34.6
- Activations (M): 55.0
- Image size: 224 x 224
- **Papers:**
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swin_large_patch4_window7_224.ms_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_large_patch4_window7_224.ms_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_large_patch4_window7_224.ms_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
thesephist/contra-bottleneck-t5-large-wikipedia | thesephist | "2023-10-09T23:22:29Z" | 1,129 | 13 | transformers | [
"transformers",
"pytorch",
"t5",
"text-generation",
"custom_code",
"en",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-30T21:49:49Z" | ---
license: mit
datasets:
- wikipedia
language:
- en
---
# Bottleneck T5 ⏳
The Bottleneck T5 model powers many of my experiments and demos exploring interfaces for inspecting and editing text in latent space. This model is an autoencoder for text; it's able to encode text up to 512 tokens into an embedding, then reconstruct the original text from the embedding. The structure of the embedding space produced by this model also allows for semantic edits to text through vector arithmetic in latent space.
## Model Details
Using embeddings produced by this model, we can semantically interpolate between pieces of text and edit sentences using their latent attributes like length, tone, structure, or topic.
All Bottleneck T5 models are trained on a filtered subset of the English Wikipedia, and perform best at encoding and decoding encyclopedic and similar kinds of text. Text that's heavily technical, conversational, or otherwise unconventional may be out of distribution for the model, and the model may not perform as well on such inputs.
Bottleneck T5 embeddings are always normalized to length 1; the encoder produces embeddings of length 1, and any inputs to the decoder will be normalized to length 1.
- **Developed by:** [Linus Lee](https://thesephist.com/)
- **Model type:** T5-style encoder-decoder transformer with an attention pooled bottleneck and gated cross-attention
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** LM-adapted T5 v1.1
## Using the model
The model is currently in a prototype state implemented on top of the T5 language model, so we need a small wrapper class around it to use it for embedding and generating text:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
class BottleneckT5Autoencoder:
def __init__(self, model_path: str, device='cpu'):
self.device = device
self.tokenizer = AutoTokenizer.from_pretrained(model_path, model_max_length=512)
self.model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to(self.device)
self.model.eval()
@torch.no_grad()
def embed(self, text: str) -> torch.FloatTensor:
inputs = self.tokenizer(text, return_tensors='pt').to(self.device)
decoder_inputs = self.tokenizer('', return_tensors='pt').to(self.device)
return self.model(
**inputs,
decoder_input_ids=decoder_inputs['input_ids'],
encode_only=True,
)[0]
@torch.no_grad()
def generate_from_latent(self, latent: torch.FloatTensor, max_length=512, temperature=1.0) -> str:
dummy_text = '.'
dummy = self.embed(dummy_text)
perturb_vector = latent - dummy
self.model.perturb_vector = perturb_vector
input_ids = self.tokenizer(dummy_text, return_tensors='pt').to(self.device).input_ids
output = self.model.generate(
input_ids=input_ids,
max_length=max_length,
do_sample=True,
temperature=temperature,
top_p=0.9,
num_return_sequences=1,
)
return self.tokenizer.decode(output[0], skip_special_tokens=True)
```
Then we can instantiate this autoencoder class with a model checkpoint.
```py
device = 'cuda' if torch.cuda.is_available() else 'cpu'
autoencoder = BottleneckT5Autoencoder(model_path='thesephist/contra-bottleneck-t5-large-wikipedia', device=device)
```
Embed and un-embed text with `.embed(text: str)` and `.generate_from_latent(embedding: torch.FloatTensor)`.
```py
texts = [
'The quick brown fox jumps over the lazy dog',
'Hi there! My name is Linus, and I spend a lot of my time thinking about latent spaces of neural network models.',
'Notion is a single space where you can think, write, and plan. Capture thoughts, manage projects, or even run an entire company — and do it exactly the way you want.',
]
for t in texts:
embedding = autoencoder.embed(t)
reconstruction = autoencoder.generate_from_latent(embedding)
print(reconstruction)
```
produces the text:
```
The quick brown fox jumps over the lazy dog
I'm named after Linus, and I spend a lot of my time thinking about neural networks of latent space models.
Notion is a single place where you can think, plan, and spend time. Capture ideas, manage projects, and even do your own writing — or plan it exactly the way you want.
```
For more examples on how to use the model to compute interpolations and semantic edits with Contra, see [this Google Colab notebook](https://linus.zone/contra-colab).
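As a hedged sketch of latent-space interpolation, the `slerp` helper below is illustrative, not part of the released code; it assumes the `autoencoder` defined above and unit-normalized embeddings:
```py
import torch

def slerp(a: torch.FloatTensor, b: torch.FloatTensor, t: float) -> torch.FloatTensor:
    # spherical interpolation keeps the result on the unit sphere,
    # matching the normalized embedding space of the model
    omega = torch.acos((a * b).sum().clamp(-1 + 1e-7, 1 - 1e-7))
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

e1 = autoencoder.embed('The quick brown fox jumps over the lazy dog')
e2 = autoencoder.embed('Notion is a single space where you can think, write, and plan.')

for t in (0.25, 0.5, 0.75):
    print(autoencoder.generate_from_latent(slerp(e1, e2, t)))
```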
## Training Details
Contra was initialized from the [language modeling-adapted T5 v1.1 checkpoint](https://huggingface.co/models?other=t5-lm-adapt) and trained on a subset of the English [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset filtered for length, for a single epoch, as a denoising autoencoder with 30% of tokens randomly masked, using the Adafactor optimizer.
#### Model family and checkpoints
I recommend experimenting first with `thesephist/contra-bottleneck-t5-large-wikipedia`, which strikes a good balance between model size and output quality, but I've trained four variants ranging from 60M to 3B parameters:
- [thesephist/contra-bottleneck-t5-small-wikipedia](https://huggingface.co/thesephist/contra-bottleneck-t5-small-wikipedia): 60M params, 512 embedding dimensions
- [thesephist/contra-bottleneck-t5-base-wikipedia](https://huggingface.co/thesephist/contra-bottleneck-t5-base-wikipedia): 220M params, 768 embedding dimensions
- [thesephist/contra-bottleneck-t5-large-wikipedia](https://huggingface.co/thesephist/contra-bottleneck-t5-large-wikipedia): 770M params, 1024 embedding dimensions
- [thesephist/contra-bottleneck-t5-xl-wikipedia](https://huggingface.co/thesephist/contra-bottleneck-t5-xl-wikipedia): 3B params, 2048 embedding dimensions
|
yuntaeyang/SOLAR-10.7B-Instructlora_sftt-v1.0 | yuntaeyang | "2024-01-13T09:06:37Z" | 1,129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-13T05:33:21Z" | ---
license: apache-2.0
language:
- ko
---
# Model Card for yuntaeyang/SOLAR-10.7B-Instructlora_sftt-v1.0
## Developed by : yuntaeyang(yonsei)
## Base Model : Upstage/SOLAR-10.7B-Instruct-v1.0
## Datasets used
- kyujinpy/KOR-OpenOrca-Platypus-v3
- No additional data |
AIFT/AIFT-instruct-42dot_LLM-SFT-1.3B | AIFT | "2024-01-30T00:15:04Z" | 1,129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T07:56:32Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1>
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy after partially removing (sampling) and cleaning it.
We then examined that data to extract the relevant tasks and, based on those tasks,
built our own training data from open-source NLP datasets.
History, science, math, machine reading comprehension, and review analysis problems were constructed with GPT,
and additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data form.
Following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created with GPT.
English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 examples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data was built with answers obtained from ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training was performed using 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs, as sketched below.
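As a rough sketch of what a LoRA setup like this might look like with `peft` (all hyperparameters below are assumptions; the card does not publish the actual configuration):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# base model named in the card; the LoRA hyperparameters below are assumed
model = AutoModelForCausalLM.from_pretrained("42dot/42dot_LLM-SFT-1.3B")
lora_config = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```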
|
AIFT/AIFT-instruct-SFT-1.3B-v2.1 | AIFT | "2024-02-26T23:29:24Z" | 1,129 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-26T23:23:33Z" | ---
license: cc-by-sa-4.0
---
<h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1>
<br>
version 2.1
<b><Training data construction></b>
<br>
We used the KOR-OpenOrca-Platypus data released by kyujinpy after partially removing (sampling) and cleaning it.
We then examined that data to extract the relevant tasks and, based on those tasks,
built our own training data from open-source NLP datasets.
History, science, math, machine reading comprehension, and review analysis problems were constructed with GPT,
and additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data form.
Following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created with GPT.
English-Korean / Korean-English translation data was also used as training data.
In total, about 40,000 examples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data was built with answers obtained from ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<Model>
<br>
Training was performed using 42dot_LLM-SFT-1.3B, released by 42dot, as the base model.
<br>
<br>
<br>
<b><Training></b>
<br>
Training was done with LoRA on two A100 40G GPUs.
|
d-matrix/opt | d-matrix | "2024-05-16T19:02:26Z" | 1,129 | 1 | null | [
"text-generation",
"opt",
"en",
"license:other",
"region:us"
] | text-generation | "2024-02-28T02:20:42Z" | ---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
This is a d-Matrix functional reference of the opt model family, with the following *revisions*:
- [`facebook/opt-125m`](https://huggingface.co/facebook/opt-125m)
- [`facebook/opt-350m`](https://huggingface.co/facebook/opt-350m)
- [`facebook/opt-1.3b`](https://huggingface.co/facebook/opt-1.3b)
- [`facebook/opt-2.7b`](https://huggingface.co/facebook/opt-2.7b)
- [`facebook/opt-6.7b`](https://huggingface.co/facebook/opt-6.7b)
The reference provides the following functional *configurations*:
Configuration | Explanation
:-- | :--
**`BASELINE`** | a reference functionally equivalent to the original model
**`BASIC`** | all linear algebraic operands quantized to `BFP16-64`, and all other operations transformed to approximated kernel simulations
### Usage
Install d-Matrix [ML Tools](https://github.com/d-matrix-ai/dmx-mltools) first.
```sh
pip install dmx-mltools
```
The following example loads a model variant and evaluates it.
```python
from mltools.dmx import pipeline
pipe = pipeline(
task="text-generation",
model="d-matrix/opt",
revision="opt-125m", # see above for other variants
dmx_config="BASELINE", # see above for other variants
)
results = pipe.evaluate(
metric="d-matrix/dmx_perplexity",
dataset="wikitext",
dataset_version="wikitext-2-raw-v1",
)
```
### Evaluation results
- `perplexity` on `penn_treebank`
Revision \ Configuration | **`BASELINE`** | **`BASIC`**
:-- | --: | --:
`opt-125m` | 29.496986389160156 | 29.628690719604492
`opt-350m` | 23.57796859741211 | 23.683700561523438
`opt-1.3b` | 15.616923332214355 | 15.879881858825684
`opt-2.7b` | 13.993170738220215 | 14.005770683288574
`opt-6.7b` | 12.166489601135254 | 12.196784019470215
- `perplexity` on `wikitext2`
Revision \ Configuration | **`BASELINE`** | **`BASIC`**
:-- | --: | --:
`opt-125m` | 27.661212921142578 | 27.786727905273438
`opt-350m` | 22.00566291809082 | 22.00930404663086
`opt-1.3b` | 14.624724388122559 | 14.811502456665039
`opt-2.7b` | 12.468732833862305 | 12.504587173461914
`opt-6.7b` | 10.856857299804688 | 10.841047286987305 |
SuperPowerMz/SON_Mistral-7B-QLoRA-Peft | SuperPowerMz | "2024-04-17T02:03:20Z" | 1,129 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-17T01:30:24Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
spow12/kosolar_4.1_sft | spow12 | "2024-04-23T05:34:14Z" | 1,129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T05:22:31Z" | ---
library_name: transformers
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |