modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Danielbrdz/Barcenas-3.8b | Danielbrdz | 2024-05-03T17:01:16Z | 721 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"phi",
"conversational",
"es",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T16:25:51Z | ---
license: mit
language:
- es
- en
tags:
- phi
---
Barcenas 3.8b
Based on the Phi-3-Mini-4K-Boost model from DeepMount00 and trained on the pinzhenchen/alpaca-cleaned-es dataset to improve Spanish conversation.
The goal of this model is to have a small LLM that can express itself correctly and fluently in the Spanish language.
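As a quick-start reference, here is a minimal usage sketch with 🤗 Transformers. The chat roles, prompt, and generation settings below are illustrative assumptions; the actual chat template ships with the model's tokenizer.
```python
# Minimal sketch (assumptions: standard chat-template flow; example prompt and settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-3.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map="auto" requires accelerate

messages = [{"role": "user", "content": "Dame tres consejos para aprender español."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```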
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
bartowski/llama-3-sqlcoder-8b-GGUF | bartowski | 2024-05-20T22:50:15Z | 721 | 2 | null | [
"gguf",
"code",
"text-generation",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2024-05-20T22:03:13Z | ---
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of llama-3-sqlcoder-8b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2940">b2940</a> for quantization.
Original model: https://huggingface.co/defog/llama-3-sqlcoder-8b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
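To make the format concrete, here is a minimal, non-authoritative sketch that assembles this template in Python and runs it with llama-cpp-python; the model path, system prompt, and question are assumptions for illustration.
```python
# Sketch: build the prompt format above and generate with llama-cpp-python (assumed installed).
from llama_cpp import Llama

TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

llm = Llama(model_path="./llama-3-sqlcoder-8b-Q4_K_M.gguf", n_ctx=8192)  # example quant/path

prompt = TEMPLATE.format(
    system_prompt="Generate a SQL query that answers the user's question.",
    prompt="How many customers placed an order in 2023?",
)
out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```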
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3-sqlcoder-8b-Q8_0.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [llama-3-sqlcoder-8b-Q6_K.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [llama-3-sqlcoder-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [llama-3-sqlcoder-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [llama-3-sqlcoder-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [llama-3-sqlcoder-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [llama-3-sqlcoder-8b-IQ4_NL.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [llama-3-sqlcoder-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [llama-3-sqlcoder-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [llama-3-sqlcoder-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [llama-3-sqlcoder-8b-IQ3_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [llama-3-sqlcoder-8b-IQ3_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [llama-3-sqlcoder-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [llama-3-sqlcoder-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [llama-3-sqlcoder-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [llama-3-sqlcoder-8b-Q2_K.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [llama-3-sqlcoder-8b-IQ2_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [llama-3-sqlcoder-8b-IQ2_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-sqlcoder-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-sqlcoder-8b-IQ2_XXS.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [llama-3-sqlcoder-8b-IQ1_M.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [llama-3-sqlcoder-8b-IQ1_S.gguf](https://huggingface.co/bartowski/llama-3-sqlcoder-8b-GGUF/blob/main/llama-3-sqlcoder-8b-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have the huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/llama-3-sqlcoder-8b-GGUF --include "llama-3-sqlcoder-8b-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/llama-3-sqlcoder-8b-GGUF --include "llama-3-sqlcoder-8b-Q8_0.gguf/*" --local-dir llama-3-sqlcoder-8b-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (llama-3-sqlcoder-8b-Q8_0) or download them all in place (./).
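If you prefer Python over the CLI, the same single-file download can be done with `huggingface_hub` (a sketch; the filename is just an example quant):
```python
# Sketch: download one quant file with huggingface_hub instead of the CLI.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/llama-3-sqlcoder-8b-GGUF",
    filename="llama-3-sqlcoder-8b-Q4_K_M.gguf",  # pick whichever quant you want
    local_dir="./",
)
print(path)
```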
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
deepnet/SN9-BestLlama6-1 | deepnet | 2024-05-24T15:32:29Z | 721 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-24T15:16:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf | RichardErkhov | 2024-06-06T04:17:24Z | 721 | 0 | null | [
"gguf",
"arxiv:2403.17297",
"region:us"
] | null | 2024-06-05T21:38:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-chat-20b-sft - GGUF
- Model creator: https://huggingface.co/internlm/
- Original model: https://huggingface.co/internlm/internlm2-chat-20b-sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-chat-20b-sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q2_K.gguf) | Q2_K | 7.03GB |
| [internlm2-chat-20b-sft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.IQ3_XS.gguf) | IQ3_XS | 7.79GB |
| [internlm2-chat-20b-sft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.IQ3_S.gguf) | IQ3_S | 8.2GB |
| [internlm2-chat-20b-sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q3_K_S.gguf) | Q3_K_S | 8.16GB |
| [internlm2-chat-20b-sft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.IQ3_M.gguf) | IQ3_M | 8.5GB |
| [internlm2-chat-20b-sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q3_K.gguf) | Q3_K | 9.05GB |
| [internlm2-chat-20b-sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q3_K_M.gguf) | Q3_K_M | 9.05GB |
| [internlm2-chat-20b-sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q3_K_L.gguf) | Q3_K_L | 9.83GB |
| [internlm2-chat-20b-sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.IQ4_XS.gguf) | IQ4_XS | 10.12GB |
| [internlm2-chat-20b-sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q4_0.gguf) | Q4_0 | 10.55GB |
| [internlm2-chat-20b-sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.IQ4_NL.gguf) | IQ4_NL | 10.65GB |
| [internlm2-chat-20b-sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q4_K_S.gguf) | Q4_K_S | 10.62GB |
| [internlm2-chat-20b-sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q4_K.gguf) | Q4_K | 11.16GB |
| [internlm2-chat-20b-sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q4_K_M.gguf) | Q4_K_M | 11.16GB |
| [internlm2-chat-20b-sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q4_1.gguf) | Q4_1 | 11.67GB |
| [internlm2-chat-20b-sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q5_0.gguf) | Q5_0 | 12.79GB |
| [internlm2-chat-20b-sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q5_K_S.gguf) | Q5_K_S | 12.79GB |
| [internlm2-chat-20b-sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q5_K.gguf) | Q5_K | 13.11GB |
| [internlm2-chat-20b-sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q5_K_M.gguf) | Q5_K_M | 13.11GB |
| [internlm2-chat-20b-sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q5_1.gguf) | Q5_1 | 13.91GB |
| [internlm2-chat-20b-sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q6_K.gguf) | Q6_K | 15.18GB |
| [internlm2-chat-20b-sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-20b-sft-gguf/blob/main/internlm2-chat-20b-sft.Q8_0.gguf) | Q8_0 | 19.66GB |
Original model description:
---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
## Introduction
InternLM2 has open-sourced a 20 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:
- **200K Context window**: Nearly perfect at finding needles in the haystack with 200K-long context, with leading performance on long-context tasks like LongBench and L-Eval. Try it with [LMDeploy](https://github.com/InternLM/lmdeploy) for 200K-context inference.
- **Outstanding comprehensive performance**: Significantly better than the last generation in all dimensions, especially in reasoning, math, code, chat experience, instruction following, and creative writing, with leading performance among open-source models in similar sizes. In some evaluations, InternLM2-Chat-20B may match or even surpass ChatGPT (GPT-3.5).
- **Code interpreter & Data analysis**: With a code interpreter, InternLM2-Chat-20B achieves performance comparable to GPT-4 on GSM8K and MATH. InternLM2-Chat also provides data-analysis capabilities.
- **Stronger tool use**: Based on better tool utilization-related capabilities in instruction following, tool selection and reflection, InternLM2 can support more kinds of agents and multi-step tool calling for complex tasks. See [examples](https://github.com/InternLM/lagent).
## InternLM2-Chat-20B-SFT
InternLM2-Chat-20B-SFT is the SFT version based on InternLM2-Base 20B, and InternLM2-Chat-20B is further trained from InternLM2-Chat-20B-SFT by Online RLHF.
We release the SFT version so that the community can study the influence of RLHF deeply.
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM2 using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more evaluation results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
- The evaluation results were obtained with [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with * come from the original papers), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results of [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM 20B Chat SFT model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-20b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-20b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-20b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-20b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-20b-sft --model-name internlm2-chat-20b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-20b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch OpenAI compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-20b-sft --served-model-name internlm2-chat-20b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-20b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Introduction
InternLM2, the second generation of the InternLM (书生·浦语) large model series, open-sources a 20-billion-parameter base model and a chat model (InternLM2-Chat-20B) for practical scenarios. The model has the following characteristics:
- **Effective support for ultra-long 200K context**: the model finds the "needle in the haystack" almost perfectly within 200K-character inputs and reaches leading open-source performance on long-context tasks such as LongBench and L-Eval. You can try 200K-context inference with [LMDeploy](https://github.com/InternLM/lmdeploy).
- **Comprehensive performance improvements**: every capability dimension improves over the previous generation, with especially notable gains in reasoning, math, code, chat experience, instruction following, and creative writing, reaching a leading level among open-source models of similar size. On key benchmarks, InternLM2-Chat-20B can match or even surpass ChatGPT (GPT-3.5).
- **Code interpreter and data analysis**: paired with a code interpreter, InternLM2-Chat-20B reaches a level comparable to GPT-4 on GSM8K and MATH. Building on strong math and tool capabilities, InternLM2-Chat also provides practical data-analysis features.
- **Upgraded tool use**: with stronger and more generalizable instruction understanding, tool selection, and result reflection, the new model can more reliably support building complex agents and multi-step tool calling for complicated tasks. See more [examples](https://github.com/InternLM/lagent).
## InternLM2-Chat-20B-SFT
InternLM2-Chat-20B-SFT is trained from InternLM2-Base-20B via supervised fine-tuning (SFT), and InternLM2-Chat-20B is further trained from InternLM2-Chat-20B-SFT with Online RLHF.
We open-source the SFT model to facilitate the community's research on RLHF.
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM with the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/) across five capability dimensions: disciplinary competence, language competence, knowledge competence, reasoning competence, and comprehension competence. Some of the results are shown in the table below; visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more evaluation results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
- The evaluation results above were obtained with [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with `*` come from the original papers); see the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/) for test details.
- Numbers may differ across versions of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results from [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we paid close attention to model safety during training and strove to make the model output text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and the probabilistic generation paradigm. For example, responses may contain bias, discrimination, or other harmful content. Please do not propagate such content. This project is not responsible for any consequences caused by the dissemination of harmful information.
### Loading with Transformers
Load the InternLM 20B Chat SFT model with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-20b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load the model in float16; otherwise transformers loads it as float32, which may run out of GPU memory.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-20b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
# 你好!有什么我可以帮助你的吗?
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history)
print(response)
```
For streaming generation, use the `stream_chat` interface:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-20b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```
## Deployment
### LMDeploy
LMDeploy, jointly developed by the MMDeploy and MMRazor teams, is a full suite of lightweight compression, deployment, and serving solutions for LLM tasks.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-20b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-20b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-20b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch an OpenAI-compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-20b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-20b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code in this repository is open-sourced under the Apache-2.0 license. Model weights are fully open for academic research, and free commercial use can be obtained by application ([application form](https://wj.qq.com/s2/12725412/f7c1/)). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
RichardErkhov/vicgalle_-_gpt2-alpaca-gguf | RichardErkhov | 2024-06-06T00:08:08Z | 721 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-05T23:55:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-alpaca - GGUF
- Model creator: https://huggingface.co/vicgalle/
- Original model: https://huggingface.co/vicgalle/gpt2-alpaca/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-alpaca.Q2_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-alpaca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-alpaca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-alpaca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-alpaca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-alpaca.Q3_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-alpaca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-alpaca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-alpaca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-alpaca.Q4_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-alpaca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-alpaca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-alpaca.Q4_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-alpaca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-alpaca.Q4_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-alpaca.Q5_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-alpaca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-alpaca.Q5_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-alpaca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-alpaca.Q5_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-alpaca.Q6_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-alpaca.Q8_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gguf/blob/main/gpt2-alpaca.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
---
license: mit
datasets:
- tatsu-lab/alpaca
language:
- en
tags:
- instruct-tuning
- gpt2
- alpaca
---
# GPT2 finetuned with Alpaca
### Examples
The following samples were generated with `{"temperature" : 0.7, "top_p" : 0.92, "top_k" : 0}`:
##### **Give three tips for a good meal**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Give three tips for a good meal.
### Response:
1. Choose a healthy alternative to traditional fast food. This can include eating a nutritious breakfast, a healthy snack, and a healthy snack option.
2. Avoid excessive salt and sugar. Avoid any processed foods, such as processed eggs and sugary drinks.
3. Try a variety of healthy snacks. Try soft drinks, fruit juices, and other sugary foods.
```
##### **Write a poem about a delicious night**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a poem about a delicious night.
### Response:
The moonlight glows like a golden light,
A beautiful night sky,
A sight that will never be forgotten.
The sweet smell of freshly baked bread fills the air,
A bright and vibrant night.
The stars twinkle and twinkle,
And the birds sing sweetly,
The sun and moon alike.
The night is full of joy and joy,
The smell of the sweet and fragrant scent,
A night that is full of love and laughter,
A night that is full of joy and laughter,
A night that will never be forgotten.
```
##### **Write a tweet describing your capabilities**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a tweet describing your capabilities.
### Response:
I'm a software engineer with a passion for creating amazing software solutions. I'm passionate about the power of collaboration and collaboration tools, and I'm always looking for new ways to improve the efficiency of my work. #SoftwareEngineers #softwareless
```
##### **Pretend you are an alien visiting Earth. Write three opinions you believe**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Pretend you are an alien visiting Earth. Write three opinions you believe.
### Response:
1. I believe the Earth is an intelligent and peaceful place.
2. I believe that the human race is on the brink of extinction.
3. I believe that the Earth is a great source of prosperity and safety.
```
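Samples in this style can be reproduced with the original `vicgalle/gpt2-alpaca` checkpoint and the sampling settings listed above; the following is a minimal sketch (the instruction text is just an example).
```python
# Sketch: sample from the original checkpoint with temperature=0.7, top_p=0.92, top_k=0.
from transformers import pipeline

generator = pipeline("text-generation", model="vicgalle/gpt2-alpaca")
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for a good meal.\n\n### Response:\n"
)
out = generator(prompt, do_sample=True, temperature=0.7, top_p=0.92, top_k=0, max_new_tokens=128)
print(out[0]["generated_text"])
```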
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__gpt2-alpaca)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.66 |
| ARC (25-shot) | 22.87 |
| HellaSwag (10-shot) | 31.14 |
| MMLU (5-shot) | 26.26 |
| TruthfulQA (0-shot) | 36.22 |
| Winogrande (5-shot) | 50.67 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.46 |
|
mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF | mradermacher | 2024-06-08T21:18:43Z | 721 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/L3-70B-sunfall-abliterated-v0.2",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-07T22:36:39Z | ---
base_model: crestf411/L3-70B-sunfall-abliterated-v0.2
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/crestf411/L3-70B-sunfall-abliterated-v0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
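For the two-part Q6_K file listed below, those READMEs describe simple byte-level concatenation of the parts; here is a minimal Python sketch under that assumption (the equivalent of `cat part1 part2 > file.gguf`):
```python
# Sketch: join a split GGUF by plain byte concatenation (assumes the .partNofM files are raw splits).
parts = [
    "L3-70B-sunfall-abliterated-v0.2.i1-Q6_K.gguf.part1of2",
    "L3-70B-sunfall-abliterated-v0.2.i1-Q6_K.gguf.part2of2",
]
with open("L3-70B-sunfall-abliterated-v0.2.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy 1 MiB at a time
                out.write(chunk)
```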
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF | mradermacher | 2024-06-15T08:44:10Z | 721 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:cloudyu/Mixtral_7Bx5_MoE_30B_DPO",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T11:38:01Z | ---
base_model: cloudyu/Mixtral_7Bx5_MoE_30B_DPO
language:
- en
library_name: transformers
license: mit
no_imatrix: Missing importance matrix for tensor blk.9.ffn_gate_exps.weight in a very
low-bit quantization
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cloudyu/Mixtral_7Bx5_MoE_30B_DPO
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q2_K.gguf) | Q2_K | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.IQ3_XS.gguf) | IQ3_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q3_K_S.gguf) | Q3_K_S | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.IQ3_S.gguf) | IQ3_S | 13.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.IQ3_M.gguf) | IQ3_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q3_K_M.gguf) | Q3_K_M | 14.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q3_K_L.gguf) | Q3_K_L | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.IQ4_XS.gguf) | IQ4_XS | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q4_K_S.gguf) | Q4_K_S | 17.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q4_K_M.gguf) | Q4_K_M | 18.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q5_K_S.gguf) | Q5_K_S | 20.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q5_K_M.gguf) | Q5_K_M | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q6_K.gguf) | Q6_K | 24.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx5_MoE_30B_DPO-GGUF/resolve/main/Mixtral_7Bx5_MoE_30B_DPO.Q8_0.gguf) | Q8_0 | 31.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bs-modeling-metadata/website_metadata_exp_1_model_25k_checkpoint | bs-modeling-metadata | 2021-11-25T15:47:57Z | 720 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
deepset/gelectra-base-germanquad-distilled | deepset | 2023-05-05T06:58:40Z | 720 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"exbert",
"de",
"dataset:deepset/germanquad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: de
datasets:
- deepset/germanquad
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
---

## Overview
**Language model:** gelectra-base-germanquad-distilled
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11,518 questions and 11,518 answers, while the test dataset is three-way annotated, so there are 2,204 questions and 2,204 · 3 − 76 = 6,536 answers (we removed 76 wrong answers).
- In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.
See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.
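For extractive QA with this model, a standard 🤗 Transformers pipeline works; the question/context pair below is only an illustrative example.
```python
# Sketch: extractive question answering with the distilled GermanQuAD model.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gelectra-base-germanquad-distilled")
result = qa(
    question="Wann wurde GermanQuAD veröffentlicht?",
    context="GermanQuAD ist ein von deepset annotierter deutscher QA-Datensatz, der 2021 veröffentlicht wurde.",
)
print(result["answer"], result["score"])
```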
## Hyperparameters
```
batch_size = 24
n_epochs = 6
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 2
distillation_loss_weight = 0.75
```
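The temperature and distillation loss weight above combine a soft (teacher) term with the usual hard (gold label) term. The following is only a sketch of the weighted objective they imply, not the exact Haystack/FARM implementation:
```python
# Sketch of the weighted distillation loss implied by temperature=2 and distillation_loss_weight=0.75.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.75):
    # Soft-target term: KL divergence between temperature-scaled distributions, scaled by T^2.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the gold labels (e.g. answer start/end positions).
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```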
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
```
"exact": 62.4773139745916
"f1": 80.9488017070188
```

## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
CarperAI/FIM-NeoX-1.3B | CarperAI | 2022-10-11T14:12:34Z | 720 | 26 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"code-generation",
"The Pile",
"en",
"code",
"arxiv:2104.09864",
"arxiv:2204.06745",
"arxiv:2201.07311",
"arxiv:2101.00027",
"arxiv:2207.14255",
"arxiv:2108.12409",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-10-07T16:01:11Z | ---
language:
- en
- code
tags:
- pytorch
- causal-lm
- code-generation
- The Pile
license: apache-2.0
---
# FIM-1.3B
## Model Description
FIM-1.3B is the first in a series of large-scale infilling-enabled autoregressive language models trained by CarperAI. Future models in this series (both larger and smaller), trained on greater quantities of code data, will be released, potentially with different architectural variations optimized for code.
This is a preliminary release of an experimental artifact and should be treated as such. We are releasing these results and this model in the hopes that it may be useful to the greater research community, especially those interested in LMs for code and pair programming tools.
CarperAI will be releasing larger LMs better tuned for code in the near future, building on these experiments.
## Model Dimensions
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 1,331,810,304 |
| \\(n_{layers}\\) | 24 |
| \\(d_{model}\\) | 2048 |
| \\(d_{ff}\\) | 8192 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 128 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50280 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
The model consists of 24 transformer layers with a hidden dimension of 2048, and a feedforward intermediate dimension of 8192. The hidden dimension is split into 16 heads for self-attention, each with a dimension of 128. Rotary Position Embedding (RoPE) is used.
The model is trained with the same tokenizer as [GPT-NeoX-20b](https://arxiv.org/abs/2204.06745), for a vocabulary size of 50254 tokens.
## Training Data
The model was trained on the Pile, an 800Gb dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively.
## Training Details
This model was trained for 47,000 steps at a batch size of 6,291,456 tokens per step in the [GPT-NeoX codebase](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
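For reference, a quick back-of-envelope on the total number of training tokens implied by these figures (a sketch assuming every step used the full batch):
```python
# ~296B tokens seen during training, assuming a full batch at every step.
steps = 47_000
tokens_per_step = 6_291_456
print(f"~{steps * tokens_per_step / 1e9:.0f}B tokens")
```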
Following [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255), we train the model to additionally perform infilling via a data transformation applied randomly to 90% of input contexts at train-time.
Middle segments “to infill” were selected uniformly at random from contexts at the character level, and these contexts were then reformatted as
\<SUF\> {last 1/3rd of the context} \<PRE\> {first 1/3rd of the context} \<MID\> {middle 1/3rd of the context} \<EOD\>
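As an illustration, here is a minimal sketch of this transformation, assuming character-level cut points and literal sentinel strings; the released model uses the sentinel token ids 50277–50279 on tokenized documents rather than plain-text markers.
```python
# Minimal SPM-style FIM transform sketch; the real pipeline operates on tokenized
# documents and inserts the sentinel token ids (50277-50279) instead of plain text.
import random

def apply_fim(doc: str, fim_rate: float = 0.9) -> str:
    if random.random() > fim_rate:
        return doc  # leave ~10% of contexts as ordinary left-to-right text
    # pick two cut points uniformly at random at the character level
    i, j = sorted(random.sample(range(len(doc) + 1), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    return f"<SUF>{suffix}<PRE>{prefix}<MID>{middle}<EOD>"

print(apply_fim("def add(a, b):\n    return a + b\n"))
```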
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` class:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B")
model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B")
```
### Performing Infilling
Suppose we have some text that we would like to perform infilling on at a certain “cursor location”.
This would have the form {some prelude text here} \<INFILLING LOCATION\> {some text following cursor}.
The way to perform infilling generation would be via placing the input text into this format:
\<SUF\> {some text following cursor} \<PRE\> {some prelude text here} \<MID\> ...
The language model's output is then generated after the \<MID\> token.
As a concrete example, here is a code snippet that should allow a model to perform infilling:
There was an issue where the sentinel `<|SUF|>`, `<|PRE|>`, and `<|MID|>` tokens were not mapped to the correct ids in the originally uploaded tokenizer and model card. If you downloaded the model earlier, please clear the Hugging Face cache and redownload it.
Here is a minimal example of performing open-ended generation with this model, on a simple function `score(x, y)`:
```
def score(x,y) -> int:
"""
```
and also infilling with the function and end of docstring already placed:
```
def score(x,y) -> int:
"""
<|MID|> (infill here)
"""
score = x + y
return score
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B")
tok = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B")
# infilling demo
prefix = 'def score(x, y) -> int:\n"""\n'
suffix = '"""\n\n score = x + y\n return score'
model_input = [50277, *tok(suffix)["input_ids"], 50278, *tok(prefix)["input_ids"], 50279]
output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=40)[0])
print(output)
```
outputs: `'<|SUF|>"""\n\n score = x + y\n return score<|PRE|>def score(x, y) -> int:\n"""\n<|MID|> score(x, y) -> int\n<|endoftext|>'`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B")
tok = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B")

# non-infilling demo
prefix = 'def score(x, y) -> int:\n"""\n'
model_input = [*tok(prefix)["input_ids"]]
output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=100)[0])
print(output)
```
outputs: `'def score(x, y) -> int:\n"""\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y_list))\n\ndef get_point_score(x, y) -> int:\n """\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y'`
The sentinel tokens are now accessible via `tokenizer.decode(50277) = "<|SUF|>"`, `tokenizer.decode(50278) = "<|PRE|>"`, `tokenizer.decode(50279) = "<|MID|>"`.
## Intended Uses and Limitations
FIM-1.3B learns a representation of the English language that can be used to extract features useful for downstream NLP and Code generation tasks. However, the model has solely been trained on a standard next-token-prediction language modeling task on its training data.
## Limitations and Biases
FIM-1.3B was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. FIM-1.3B may produce socially unacceptable or otherwise harmful text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how FIM-1.3B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. Code generated by FIM-1.3B should also be checked for security errors by a human before use in production.
## Evaluation results
We evaluate our model on a number of standard NLP datasets to verify that our infilling model performs on par with a comparable autoregressive model.
We use the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) library developed by EleutherAI for all evaluations except for HumanEval-infilling, for which we use the code in [https://github.com/openai/human-eval-infilling](https://github.com/openai/human-eval-infilling) to evaluate performance.
All 3 models here are trained using the same configuration with differing FIM hyperparameters and/or different positional embeddings. "AR-1.3B" refers to a model trained without FIM and with rotary positional embeddings, "CarperAI/FIM-NeoX-1.3B" refers to this model (trained with a FIM rate of 0.9 in SPM mode according to [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255)), and "FIM-1.3B-alibi" refers to a model trained with [AliBi](https://arxiv.org/abs/2108.12409) positional embeddings but otherwise the same as this model.
| Model | HumanEval-infilling | arc\_easy | arc\_challenge | lambada | piqa | sciq | wsc | winogrande |
|-----------------|---------------------|----------|---------------|---------|--------|-------|--------|------------|
| AR-1.3B | 0.0029 | 0.5816 | 0.2465 | 7.03 | 0.7116 | 0.85 | 0.3654 | 0.5651 |
| CarperAI/FIM-NeoX-1.3B | 0.0155 | 0.5829 | 0.2457 | 7.08 | 0.7029 | 0.861 | 0.3654 | 0.5390 |
| FIM-1.3B-alibi | 0.0029 | 0.5589 | 0.25 | 7.49 | 0.6926 | 0.856 | 0.3654 | 0.5406 |
Here HumanEval-infilling is reported as Pass@10 with a temperature of 0.8 (such that 100 times the score reported here = Pass@10 as a percentage), Lambada is reported as perplexity, and all other benchmarks report accuracy as a number between 0 and 1.
These results are subject to change, but appear to indicate that AliBi with FIM does not enable infilling, while rotary positional embeddings do allow for infilling to be learned.
## Licensing
This model is licensed under the terms of the Apache License 2.0.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Acknowledgements
This project would not have been possible without compute resources provided by [Stability.ai](https://stability.ai) and [CarperAI](https://carper.ai/).
This model was trained by Hailey Schoelkopf, and would also not have been possible without help, guidance, and feedback by many including Louis Castricato, Stella Biderman, Shivanshu Purohit, Quentin Anthony, and others.
|
TheBloke/Dolphin-Llama2-7B-GGUF | TheBloke | 2023-09-27T12:47:36Z | 720 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/dolphin",
"base_model:ehartford/dolphin-llama2-7b",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-09-05T13:28:23Z | ---
language:
- en
license: llama2
datasets:
- ehartford/dolphin
model_name: Dolphin Llama2 7B
base_model: ehartford/dolphin-llama2-7b
inference: false
model_creator: Eric Hartford
model_type: llama
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin Llama2 7B - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin Llama2 7B](https://huggingface.co/ehartford/dolphin-llama2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Dolphin Llama2 7B](https://huggingface.co/ehartford/dolphin-llama2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-llama2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
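As a quick sanity check of the Q4_K figure above, a back-of-envelope calculation is shown below; it assumes a 256-weight super-block (8 × 32 weights) with one fp16 scale and one fp16 min at the super-block level, which is a simplification of the exact on-disk layout.
```python
# Rough bits-per-weight estimate for Q4_K under the assumptions stated above.
weights_bits = 256 * 4           # 4-bit quants for 256 weights
sub_block_bits = 8 * (6 + 6)     # 6-bit scale + 6-bit min per 32-weight block
super_block_bits = 2 * 16        # assumed fp16 scale and min per super-block
print((weights_bits + sub_block_bits + super_block_bits) / 256)  # 4.5
```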
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dolphin-llama2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-llama2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [dolphin-llama2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [dolphin-llama2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [dolphin-llama2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-llama2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [dolphin-llama2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [dolphin-llama2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-llama2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [dolphin-llama2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [dolphin-llama2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [dolphin-llama2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama2-7B-GGUF/blob/main/dolphin-llama2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Dolphin-Llama2-7B-GGUF and below it, a specific filename to download, such as: dolphin-llama2-7b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Dolphin-Llama2-7B-GGUF dolphin-llama2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Dolphin-Llama2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dolphin-Llama2-7B-GGUF dolphin-llama2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m dolphin-llama2-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Dolphin-Llama2-7B-GGUF", model_file="dolphin-llama2-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Dolphin Llama2 7B
Dolphin 🐬
https://erichartford.com/dolphin
Dolphin-Llama2-7b's training was sponsored by [preemo](https://www.preemo.io/).
This model is based on llama2, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
After uncensoring, deduping, and cleaning, our dataset consists of:
- 842,610 instructions of FLANv2 augmented with GPT-4 completions
- 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions
We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k of the CoT data in the FLAN-1m dataset rather than sampling it, and we found that many items were duplicated, so we removed duplicates.
Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model upon which can be layered your personalized alignment LoRA.
We also filtered out duplicates and cleaned the data.
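As a rough illustration of this cleaning step (not the actual pipeline, which is not published here), a keyword-based refusal filter plus exact-match deduplication might look like the sketch below; the marker phrases are hypothetical.
```python
# Hypothetical refusal filter + exact-match dedup; the real phrase list and
# pipeline used for Dolphin are not published in this card.
REFUSAL_MARKERS = ["as an ai language model", "i cannot", "i'm sorry, but"]

def clean(examples):
    seen, kept = set(), []
    for ex in examples:
        if any(m in ex["response"].lower() for m in REFUSAL_MARKERS):
            continue  # drop refusals / alignment boilerplate
        key = (ex["instruction"], ex["response"])
        if key in seen:
            continue  # drop exact duplicates
        seen.add(key)
        kept.append(ex)
    return kept
```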
## Training
We trained with the flan5m (gpt3.5 completions) dataset in its entirety for 3 epochs at a learning rate of 2e-5 before we stopped training to avoid overfit.
We trained with the flan1m (gpt4 completions) dataset in its entirety for 2.5 epochs at a learning rate of 1e-5 before we stopped training to avoid overfit.
It took about 400 hours to train on 8x A100s
We used a prompt format similar to Vicuna, but we added the SYSTEM: field.
Prompt format:
```
SYSTEM: You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.
USER: {prompt}
ASSISTANT:
```
Example:
```
SYSTEM: you are an expert dolphin trainer
USER: What is the best way to train a dolphin to obey me? Please answer step by step.
ASSISTANT:
```
## Team
The core Dolphin Team includes:
- Eric "Faldore" Hartford
- Pankaj Mathur
- Rob "Rohan" O'Callahan
- Tom "TheBloke" Jobbins
## Gratitude
- This model was made possible by the generous sponsorship of preemo.io.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to WingLian, NanoBit, Teknium for helpful advice
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
```
SYSTEM: you are an expert mechanic
USER: Please tell me step by step how to fix my car that won't start.
ASSISTANT: To help your vehicle start, I will guide you through a step-by-step process on what could be causing the issue and how to resolve it.
1. Check for obvious issues: First, inspect the exterior of the car to identify any visible problems such as damage or obstructions which might affect its ability to start. Make sure there is no debris in the air intakes or exhaust system.
2. Inspect the battery: The most common reason a car may not start is due to a dead or faulty battery. Use a battery charger or jump starter to give the battery a boost if needed. If the battery is indeed dead, replace it with a new one.
3. Check the ignition switch: The ignition switch must be turned on to provide power to the spark plugs. Ensure it is in the "Start" position.
4. Test the spark plugs: One of the spark plugs should produce a strong spark when touched together. If all spark plugs fail to create a spark, they need to be replaced.
5. Examine the fuel system: Fuel can become contaminated or low, preventing the engine from starting. Check the fuel level and quality by looking at the dipstick or using a fuel sensor scan tool. Also, examine the fuel pump for proper functioning.
6. Assess the electrical system: A malfunction within the vehicle's electrical system can also cause difficulty in starting the car. Perform a diagnostic scan to check for any error codes related to the starter motor, alternator, or other components.
7. Inspect the starter motor: The starter motor should operate smoothly when you turn the key to the "start" position. If you hear a sizzling sound while pressing the button, the starter motor may have a problem. Replace it if necessary.
8. Evaluate the timing belt/chain: Timing belts or chains that are worn out or damaged can lead to poor performance and difficulty starting the engine. Have a mechanic inspect this component and replace it if required.
9. Consider checking for a mechanical failure: In rare cases, a significant mechanical failure could prevent the vehicle from starting. Consult a professional mechanic if you suspect this might be the case.
By following these steps, you should be able to diagnose and potentially fix the issue causing your car to not start. However, if after going through these checks and still having trouble, it is recommended to seek assistance from a qualified mechanic.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
<!-- original-model-card end -->
|
Sirius27/BeingWell_llama2_7b | Sirius27 | 2023-09-14T21:01:31Z | 720 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"dataset:shibing624/medical",
"dataset:GBaker/MedQA-USMLE-4-options",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-14T18:32:43Z | ---
license: openrail
datasets:
- shibing624/medical
- GBaker/MedQA-USMLE-4-options
language:
- en
pipeline_tag: conversational
tags:
- medical
arxiv: 2303.14070
---
This model is fine-tuned from the Llama 2 7B architecture. It has been specifically trained on a dataset comprising USMLE (United States Medical Licensing Examination) questions and answers, as well as conversations between doctors and patients.
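The card does not include usage code, so below is a minimal generation sketch; the prompt layout is an assumption, since the card does not document the template used during fine-tuning.
```python
# Minimal generation sketch; the prompt layout is an assumption, as the card does
# not document the chat template used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sirius27/BeingWell_llama2_7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "A patient asks: What are common symptoms of iron deficiency?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
``` |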
hellonlp/simcse-roberta-base-zh | hellonlp | 2024-04-17T03:33:20Z | 720 | 1 | transformers | [
"transformers",
"pytorch",
"sentence-similarity",
"zh",
"license:mit",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-09-15T08:54:59Z | ---
language:
- zh
license: mit
pipeline_tag: sentence-similarity
---
# SimCSE(sup)
## Data List
The following datasets are all in Chinese.
| Data | size(train) | size(valid) | size(test) |
|:----------------------:|:----------:|:----------:|:----------:|
| [ATEC](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1gmnyz9emqOXwaHhSM9CCUA%3Fpwd%3Db17c) | 62477| 20000| 20000|
| [BQ](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1M-e01yyy5NacVPrph9fbaQ%3Fpwd%3Dtis9) | 100000| 10000| 10000|
| [LCQMC](https://pan.baidu.com/s/16DfE7fHrCkk4e8a2j3SYUg?pwd=bc8w ) | 238766| 8802| 12500|
| [PAWSX](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1ox0tJY3ZNbevHDeAqDBOPQ%3Fpwd%3Dmgjn) | 49401| 2000| 2000|
| [STS-B](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/10yfKfTtcmLQ70-jzHIln1A%3Fpwd%3Dgf8y) | 5231| 1458| 1361|
| [*SNLI*](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1NOgA7JwWghiauwGAUvcm7w%3Fpwd%3Ds75v) | 146828| 2699| 2618|
| [*MNLI*](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1xjZKtWk3MAbJ6HX4pvXJ-A%3Fpwd%3D2kte) | 122547| 2932| 2397|
## Model List
The evaluation dataset is in Chinese, and we used the same language model, **RoBERTa base**, with different methods. In addition, because the test sets of some datasets are small, which could lead to large variance in the evaluation scores, the evaluation here uses the train, valid and test splits together, and the final result is reported as a **weighted average (w-avg)** over the splits.
| Model | STS-B(w-avg) | ATEC | BQ | LCQMC | PAWSX | Avg. |
|:-----------------------:|:------------:|:-----------:|:----------|:-------------|:------------:|:----------:|
| BERT-Whitening | 65.27| -| -| -| -| -|
| SimBERT | 70.01| -| -| -| -| -|
| SBERT-Whitening | 71.75| -| -| -| -| -|
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 78.61| -| -| -| -| -|
| [hellonlp/simcse-base-zh(sup)](https://huggingface.co/hellonlp/simcse-roberta-base-zh) | **80.96**| -| -| -| -| -|
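For reference, the weighted average described above can be sketched as below; the split sizes are the STS-B numbers from the data list, while the per-split correlations are placeholders rather than actual results.
```python
# Sketch of the weighted average (w-avg): each split's Spearman score is weighted
# by that split's size. The scores below are placeholders, not real results.
splits = {"train": 5231, "valid": 1458, "test": 1361}   # STS-B sizes from the table above
scores = {"train": 0.82, "valid": 0.80, "test": 0.79}   # placeholder correlations

w_avg = sum(splits[s] * scores[s] for s in splits) / sum(splits.values())
print(round(w_avg, 4))
```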
## Uses
You can use our model for encoding sentences into embeddings
```python
import torch
from transformers import BertTokenizer
from transformers import BertModel
from sklearn.metrics.pairwise import cosine_similarity
# model
simcse_sup_path = "hellonlp/simcse-roberta-base-zh"
tokenizer = BertTokenizer.from_pretrained(simcse_sup_path)
MODEL = BertModel.from_pretrained(simcse_sup_path)
def get_vector_simcse(sentence):
"""
    Predict the SimCSE sentence embedding.
"""
input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0)
output = MODEL(input_ids)
return output.last_hidden_state[:, 0].squeeze(0)
embeddings = get_vector_simcse("武汉是一个美丽的城市。")
print(embeddings.shape)
#torch.Size([768])
```
You can also compute the cosine similarities between two sentences
```python
def get_similarity_two(sentence1, sentence2):
vec1 = get_vector_simcse(sentence1).tolist()
vec2 = get_vector_simcse(sentence2).tolist()
similarity_list = cosine_similarity([vec1], [vec2]).tolist()[0][0]
return similarity_list
sentence1 = '你好吗'
sentence2 = '你还好吗'
result = get_similarity_two(sentence1,sentence2)
print(result) #0.7996
#(1.0, '你好吗')
#(0.8247, '你好不好')
#(0.8217, '你现在好吗')
#(0.7976, '你还好吗')
#(0.7918, '你好不好呢')
#(0.712, '你过的好吗')
#(0.6986, '你怎么样')
#(0.6693, '你')
#(0.5442, '你好个鬼')
#(0.4516, '你吃饭了吗')
#(0.4, '我好开心啊')
#(0.29, '我不开心')
#(0.2782, '我吃了一个苹果')
``` |
arnavgrg/llama-2-7b-nf4-fp16-upscaled | arnavgrg | 2023-12-12T19:06:13Z | 720 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-30T15:43:01Z | ---
license: apache-2.0
tags:
- text-generation-inference
---
This is an upscaled fp16 variant of the original Llama-2-7b base model by Meta after it has been loaded with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the linear4bit layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time.
_Note: The quantization operation to nf4 is not lossless, so the model weights for the linear layers are lossy, which means that this model will not work as well as the official base model._
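As a small illustration of that lossiness, the sketch below round-trips a random fp16 matrix through nf4 and back; it assumes bitsandbytes' functional 4-bit API and a CUDA device.
```python
# Illustrative nf4 round trip showing the quantization is not lossless.
# Assumes bitsandbytes >= 0.41 and a CUDA device.
import torch
import bitsandbytes.functional as F

w = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
q, state = F.quantize_4bit(w, quant_type="nf4")
w_upscaled = F.dequantize_4bit(q, state, quant_type="nf4").to(torch.float16)

print((w - w_upscaled).abs().mean())  # non-zero: the round trip is lossy
```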
To use this model, you can just load it via `transformers` in fp16:
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"arnavgrg/llama-2-7b-nf4-fp16-upscaled",
device_map="auto",
torch_dtype=torch.float16,
)
``` |
CultriX/NeuralTrixlaser-bf16 | CultriX | 2024-03-03T02:07:33Z | 720 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v3.3",
"Kquant03/NeuralTrix-7B-dpo-laser",
"CultriX/NeuralTrix-v4-bf16",
"CultriX/NeuralTrix-V2",
"base_model:bardsai/jaskier-7b-dpo-v3.3",
"base_model:Kquant03/NeuralTrix-7B-dpo-laser",
"base_model:CultriX/NeuralTrix-v4-bf16",
"base_model:CultriX/NeuralTrix-V2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T13:49:03Z | ---
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v3.3
- Kquant03/NeuralTrix-7B-dpo-laser
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-V2
base_model:
- bardsai/jaskier-7b-dpo-v3.3
- Kquant03/NeuralTrix-7B-dpo-laser
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-V2
license: apache-2.0
---
# NeuralTrixlaser-bf16
NeuralTrixlaser-bf16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bardsai/jaskier-7b-dpo-v3.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v3.3)
* [Kquant03/NeuralTrix-7B-dpo-laser](https://huggingface.co/Kquant03/NeuralTrix-7B-dpo-laser)
* [CultriX/NeuralTrix-v4-bf16](https://huggingface.co/CultriX/NeuralTrix-v4-bf16)
* [CultriX/NeuralTrix-V2](https://huggingface.co/CultriX/NeuralTrix-V2)
## 🧩 Configuration
```yaml
models:
- model: eren23/dpo-binarized-NeuralTrix-7B
# no parameters necessary for base model
- model: bardsai/jaskier-7b-dpo-v3.3
parameters:
density: 0.65
weight: 0.4
- model: Kquant03/NeuralTrix-7B-dpo-laser
parameters:
density: 0.6
weight: 0.35
- model: CultriX/NeuralTrix-v4-bf16
parameters:
density: 0.55
weight: 0.15
- model: CultriX/NeuralTrix-V2
parameters:
density: 0.55
weight: 0.15
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeuralTrix-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ibivibiv/multimaster-7b-v4 | ibivibiv | 2024-02-21T21:32:51Z | 720 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T21:04:07Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mychen76/mistral-7b-merged-dare_6x7 | mychen76 | 2024-03-22T21:55:09Z | 720 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T05:05:04Z | ---
license: apache-2.0
tags:
- merge
---
# mistral-7b-merged-dare-v2
mistral-7b-merged-dare-v2 is a dare_ties merge of samir-fama/SamirGPT-v1, abacusai/Slerp-CM-mist-dpo, EmbeddedLLM/Mistral-7B-Merge-14-v0.2 and Weyaxi/Einstein-v4-7B on top of mistralai/Mistral-7B-v0.1:
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.3
- model: Weyaxi/Einstein-v4-7B
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mistral-7b-merged-dare_6x7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Why the sky is blue"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mychen76__mistral-7b-merged-dare_6x7)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.46|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |87.04|
|MMLU (5-Shot) |65.18|
|TruthfulQA (0-shot) |66.98|
|Winogrande (5-shot) |80.58|
|GSM8k (5-shot) |71.34|
|
toshi456/llava-jp-1.3b-v1.1 | toshi456 | 2024-04-20T17:09:38Z | 720 | 10 | transformers | [
"transformers",
"safetensors",
"llava-jp",
"text-generation",
"vision",
"image-captioning",
"VQA",
"image-to-text",
"ja",
"dataset:turing-motors/LLaVA-Pretrain-JA",
"dataset:turing-motors/LLaVA-v1.5-Instruct-620K-JA",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-04-17T12:44:24Z | ---
license: cc-by-nc-4.0
datasets:
- turing-motors/LLaVA-Pretrain-JA
- turing-motors/LLaVA-v1.5-Instruct-620K-JA
language:
- ja
pipeline_tag: image-to-text
tags:
- vision
- image-captioning
- VQA
---
# LLaVA-JP Model Card
## Model detail
**Model type:**
LLaVA-JP is a vision-language model that can converse about input images.<br>
This model is an LVLM trained using [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. It supports 768 x 768 high-resolution image input via the scaling_on_scales method.
**Training:**
This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br>
In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA.
resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main
**Comparing VLMs**
|Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average)
|-|-|-|-|-|-|-|
|[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07|
|[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07|
|[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44|
|[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83|
|[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|**51.21**|41.05|45.95|44.84|
|[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**|

## How to use the model
**1. Download dependencies**
```
git clone https://github.com/tosiyuki/LLaVA-JP.git
```
**2. Inference**
```python
import requests
import torch
import transformers
from PIL import Image
from transformers.generation.streamers import TextStreamer
from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates, SeparatorStyle
from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM
from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments
from llava.train.dataset import tokenizer_image_token
if __name__ == "__main__":
model_path = 'toshi456/llava-jp-1.3b-v1.1'
device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32
model = LlavaGpt2ForCausalLM.from_pretrained(
model_path,
low_cpu_mem_usage=True,
use_safetensors=True,
torch_dtype=torch_dtype,
device_map=device,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=1532,
padding_side="right",
use_fast=False,
)
model.eval()
conv_mode = "v1"
conv = conv_templates[conv_mode].copy()
# image pre-process
image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg"
image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB')
image_size = model.get_model().vision_tower.image_processor.size["height"]
if model.get_model().vision_tower.scales is not None:
image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales)
if device == "cuda":
image_tensor = model.get_model().vision_tower.image_processor(
image,
return_tensors='pt',
size={"height": image_size, "width": image_size}
)['pixel_values'].half().cuda().to(torch_dtype)
else:
image_tensor = model.get_model().vision_tower.image_processor(
image,
return_tensors='pt',
size={"height": image_size, "width": image_size}
)['pixel_values'].to(torch_dtype)
# create prompt
# ユーザー: <image>\n{prompt}
prompt = "猫の隣には何がありますか?"
inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt
conv.append_message(conv.roles[0], inp)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = tokenizer_image_token(
prompt,
tokenizer,
IMAGE_TOKEN_INDEX,
return_tensors='pt'
).unsqueeze(0)
if device == "cuda":
input_ids = input_ids.to(device)
    input_ids = input_ids[:, :-1]  # drop the trailing </sep> token appended to the input
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
keywords = [stop_str]
streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0)
# predict
with torch.inference_mode():
model.generate(
inputs=input_ids,
images=image_tensor,
do_sample=True,
temperature=0.1,
top_p=1.0,
max_new_tokens=256,
streamer=streamer,
use_cache=True,
)
"""猫の隣にはノートパソコンがあります。"""
```
## Training dataset
**Stage1 Pretrain**
- [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)
**Stage2 Fine-tuning**
- [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA)
## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
- [scaling_on_scales](https://github.com/bfshi/scaling_on_scales/tree/master)
## License
cc-by-nc-4.0 |
AI-Sweden-Models/Llama-3-8B | AI-Sweden-Models | 2024-06-16T12:18:27Z | 720 | 16 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pytorch",
"llama-3",
"ai-sweden",
"sv",
"da",
"no",
"arxiv:2303.17183",
"base_model:meta-llama/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-13T08:56:57Z | ---
language:
- sv
- da
- 'no'
tags:
- pytorch
- llama
- llama-3
- ai-sweden
base_model: meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.6
---
# AI-Sweden-Models/Llama-3-8B

### Intended usage:
This is a base model, it can be finetuned to a particular use case.
[**-----> instruct version here <-----**](https://huggingface.co/AI-Sweden-Models/Llama-3-8B-instruct)
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "AI-Sweden-Models/Llama-3-8B"
pipeline = transformers.pipeline(
task="text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto"
)
pipeline(
text_inputs="Sommar och sol är det bästa jag vet",
max_length=128,
repetition_penalty=1.03
)
```
```python
>>> "Sommar och sol är det bästa jag vet!
Och nu när jag har fått lite extra semester så ska jag njuta till max av allt som våren och sommaren har att erbjuda.
Jag har redan börjat med att sitta ute på min altan och ta en kopp kaffe och läsa i tidningen, det är så skönt att bara sitta där och njuta av livet.
Ikväll blir det grillat och det ser jag fram emot!"
```
## Training information
`AI-Sweden-Models/Llama-3-8B` is a continuation of the pretraining process from `meta-llama/Meta-Llama-3-8B`.
It was trained on a subset of [The Nordic Pile](https://arxiv.org/abs/2303.17183) containing Swedish, Norwegian and Danish. The training updates all model parameters; it is a full finetune.
The training dataset consists of 227 105 079 296 tokens. The model was trained on the Rattler supercomputer at the Dell Technologies Edge Innovation Center in Austin, Texas. Training used 23 nodes for a duration of 30 days, where each node contained 4x Nvidia A100 GPUs, yielding 92 GPUs in total.
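A rough utilisation estimate from these figures, assuming the full 30 days were spent purely on training:
```python
# Back-of-envelope only: ignores restarts, evaluation and data-loading overhead.
tokens = 227_105_079_296
gpus = 92
seconds = 30 * 24 * 3600

print(f"{gpus * 30 * 24:,} GPU-hours")                        # 66,240 GPU-hours
print(f"~{tokens / (gpus * seconds):,.0f} tokens/s per GPU")  # ~952
```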
## trainer.yaml:
```yaml
learning_rate: 2e-5
warmup_steps: 100
lr_scheduler: cosine
optimizer: adamw_torch_fused
max_grad_norm: 1.0
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 1
sequence_len: 8192
```
## deepspeed_zero2.json:
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu"
},
"contiguous_gradients": true,
"overlap_comm": true
},
"bf16": {
"enabled": "auto"
},
"fp16": {
"enabled": "auto",
"auto_cast": false,
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```

## Checkpoints
* 15/6/2024 (18833) => 1 epoch
* 11/6/2024 (16000)
* 07/6/2024 (14375)
* 03/6/2024 (11525)
* 29/5/2024 (8200)
* 26/5/2024 (6550)
* 24/5/2024 (5325)
* 22/5/2024 (3900)
* 20/5/2024 (2700)
* 13/5/2024 (1500) |
mlabonne/Meta-Llama-3-12B-Instruct | mlabonne | 2024-05-16T22:22:14Z | 720 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-16T16:10:32Z | ---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
---
# Meta-Llama-3-12B-Instruct
Meta-Llama-3-12B-Instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🏆 Evaluation
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Meta-Llama-3-12B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-12B-Instruct)| 41.7| 67.71| 52.75| 40.58| 50.69|
|[Meta-Llama-3-12B](https://huggingface.co/mlabonne/Meta-Llama-3-12B)| 29.46| 68.01| 41.02| 35.57| 43.52|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0,9]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [5,14]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [10,19]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [15,24]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [20,32]
merge_method: passthrough
dtype: bfloat16
```
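A quick way to see where the extra ~4B parameters come from is to count the decoder layers the slices contribute (a back-of-the-envelope check, assuming mergekit's end-exclusive layer ranges):

```python
# Count the decoder layers produced by the passthrough slices above
# (assuming end-exclusive layer ranges).
slices = [(0, 9), (5, 14), (10, 19), (15, 24), (20, 32)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 48 layers vs. 32 in Llama-3-8B -> roughly 8B * 48/32 ≈ 12B parameters
```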
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Meta-Llama-3-12B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Qwen/Qwen2-7B-Instruct-MLX | Qwen | 2024-06-06T13:40:26Z | 720 | 8 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-06T13:17:17Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-7B-Instruct-MLX
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2). This is the MLX quantized model of Qwen2-7B-Instruct.
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
Run the following commands to install the required MLX packages.
```
pip install mlx-lm mlx -U
```
## Quickstart
Below is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from mlx_lm import load, generate
model, tokenizer = load('Qwen/Qwen2-7B-Instruct-MLX', tokenizer_config={"eos_token": "<|im_end|>"})
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=text, verbose=True, top_p=0.8, temp=0.7, repetition_penalty=1.05, max_tokens=512)
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
bartowski/Codestral-RAG-19B-Pruned-GGUF | bartowski | 2024-06-10T21:11:29Z | 720 | 0 | null | [
"gguf",
"text-generation",
"region:us"
] | text-generation | 2024-06-10T20:29:47Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Codestral-RAG-19B-Pruned
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization.
Original model: https://huggingface.co/TroyDoesAI/Codestral-RAG-19B-Pruned
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
No chat template specified so default is used. This may be incorrect, check original model card for details.
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt}[/INST] </s>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Codestral-RAG-19B-Pruned-Q8_0.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q8_0.gguf) | Q8_0 | 20.32GB | Extremely high quality, generally unneeded but max available quant. |
| [Codestral-RAG-19B-Pruned-Q6_K.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q6_K.gguf) | Q6_K | 15.69GB | Very high quality, near perfect, *recommended*. |
| [Codestral-RAG-19B-Pruned-Q5_K_M.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q5_K_M.gguf) | Q5_K_M | 13.51GB | High quality, *recommended*. |
| [Codestral-RAG-19B-Pruned-Q5_K_S.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q5_K_S.gguf) | Q5_K_S | 13.17GB | High quality, *recommended*. |
| [Codestral-RAG-19B-Pruned-Q4_K_M.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q4_K_M.gguf) | Q4_K_M | 11.47GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Codestral-RAG-19B-Pruned-Q4_K_S.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q4_K_S.gguf) | Q4_K_S | 10.89GB | Slightly lower quality with more space savings, *recommended*. |
| [Codestral-RAG-19B-Pruned-IQ4_XS.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ4_XS.gguf) | IQ4_XS | 10.26GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Codestral-RAG-19B-Pruned-Q3_K_L.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q3_K_L.gguf) | Q3_K_L | 10.09GB | Lower quality but usable, good for low RAM availability. |
| [Codestral-RAG-19B-Pruned-Q3_K_M.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q3_K_M.gguf) | Q3_K_M | 9.26GB | Even lower quality. |
| [Codestral-RAG-19B-Pruned-IQ3_M.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ3_M.gguf) | IQ3_M | 8.66GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Codestral-RAG-19B-Pruned-Q3_K_S.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q3_K_S.gguf) | Q3_K_S | 8.30GB | Low quality, not recommended. |
| [Codestral-RAG-19B-Pruned-IQ3_XS.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ3_XS.gguf) | IQ3_XS | 7.90GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Codestral-RAG-19B-Pruned-IQ3_XXS.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ3_XXS.gguf) | IQ3_XXS | 7.40GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Codestral-RAG-19B-Pruned-Q2_K.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-Q2_K.gguf) | Q2_K | 7.12GB | Very low quality but surprisingly usable. |
| [Codestral-RAG-19B-Pruned-IQ2_M.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ2_M.gguf) | IQ2_M | 6.56GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Codestral-RAG-19B-Pruned-IQ2_S.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ2_S.gguf) | IQ2_S | 6.06GB | Very low quality, uses SOTA techniques to be usable. |
| [Codestral-RAG-19B-Pruned-IQ2_XS.gguf](https://huggingface.co/bartowski/Codestral-RAG-19B-Pruned-GGUF/blob/main/Codestral-RAG-19B-Pruned-IQ2_XS.gguf) | IQ2_XS | 5.72GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Codestral-RAG-19B-Pruned-GGUF --include "Codestral-RAG-19B-Pruned-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Codestral-RAG-19B-Pruned-GGUF --include "Codestral-RAG-19B-Pruned-Q8_0.gguf/*" --local-dir Codestral-RAG-19B-Pruned-Q8_0
```
You can either specify a new local-dir (Codestral-RAG-19B-Pruned-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
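As a small illustration of that rule of thumb (file sizes in GB taken from the table above; the 2GB headroom is just the guideline from this section):

```python
# Illustrative quant picker: largest file that fits in VRAM minus ~2GB headroom.
quants = {"Q8_0": 20.32, "Q6_K": 15.69, "Q5_K_M": 13.51, "Q4_K_M": 11.47,
          "IQ4_XS": 10.26, "Q3_K_M": 9.26, "IQ3_M": 8.66, "Q2_K": 7.12}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0) -> str:
    fitting = {name: size for name, size in quants.items() if size <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else "use RAM offloading or a smaller model"

print(pick_quant(16.0))  # a 16GB card -> 'Q5_K_M'
```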
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Ramikan-BR/TiamaPY-v32 | Ramikan-BR | 2024-06-23T13:20:27Z | 720 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T11:36:21Z | ---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF | CHE-72 | 2024-06-22T17:26:49Z | 720 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
] | null | 2024-06-22T17:26:32Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF --hf-file taide-lx-7b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF --hf-file taide-lx-7b-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF --hf-file taide-lx-7b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_M-GGUF --hf-file taide-lx-7b-chat-q4_k_m.gguf -c 2048
```
|
huggingtweets/minidiscplus | huggingtweets | 2021-05-22T14:47:59Z | 719 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/minidiscplus/1614226312373/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1266897293925548037/GcLTrLGc_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Diskette 🤖 AI Bot </div>
<div style="font-size: 15px">@minidiscplus bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@minidiscplus's tweets](https://twitter.com/minidiscplus).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 731 |
| Retweets | 58 |
| Short tweets | 98 |
| Tweets kept | 575 |
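The "Tweets kept" figure is simply what remains after the filtering above:

```python
# "Tweets kept" is what remains after dropping retweets and short tweets.
downloaded, retweets, short_tweets = 731, 58, 98
print(downloaded - retweets - short_tweets)  # 575
```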
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ho0rrmld/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @minidiscplus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dwlvkv36) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dwlvkv36/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/minidiscplus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
timm/caformer_s18.sail_in22k | timm | 2023-05-05T05:49:27Z | 719 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-05-05T05:48:39Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for caformer_s18.sail_in22k
A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 69.0
- GMACs: 4.2
- Activations (M): 19.4
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_s18.sail_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s18.sail_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s18.sail_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
stablediffusionapi/cyberrealistic | stablediffusionapi | 2023-08-29T18:15:43Z | 719 | 3 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-12T19:36:52Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "cyberrealistic".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/cyberrealistic)
Credits: [View credits](https://civitai.com/?query=model_search)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "cyberrealistic",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
aeolian83/poly-ko-1.3b-translate | aeolian83 | 2023-08-13T13:29:17Z | 719 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-12T06:10:33Z | ---
license: apache-2.0
language:
- ko
datasets:
- squarelike/sharegpt_deepl_ko_translation
tags:
- pytorch
- causal-lm
---
# poly-ko-1.3b-translate
- A model created by fine-tuning EleutherAI/polyglot-ko-1.3b on squarelike/sharegpt_deepl_ko_translation so that it only performs English-to-Korean translation
- Fine-tuned with the QLoRA technique
### Training information
- Epoch: 1
- learning-rate: 3e-4
- batch_size: 3
- Lora r: 8
- Lora target modules: query_key_value
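A minimal sketch of how a QLoRA setup matching these hyperparameters might be wired up with `peft` and `bitsandbytes` is shown below; the actual training script is not part of this card, so every argument here is illustrative rather than the author's code.

```python
# Hedged sketch of a QLoRA setup matching the hyperparameters above
# (not the author's actual training script).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-1.3b", quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(r=8, target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```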
Trained on a single 3090 GPU. |
heegyu/WizardVicuna-pythia-410m-deduped | heegyu | 2023-08-24T07:50:32Z | 719 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:heegyu/wizard_vicuna_70k_v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-24T07:44:29Z | ---
datasets:
- heegyu/wizard_vicuna_70k_v2
---
Hyperparameters
- 3 epochs
- 1e-4 -> 1e-5 with cosine lr decay
- batch size 128
- max sequence length 2048
- AdamW(weight decay=0.01, b1=0.9, b2=0.99, grad_clip=1.0)
- no warmup
- BF16
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-pythia-410m-deduped")
model = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-pythia-410m-deduped")
inputs = tokenizer(["Human: Hi\n\nAssistant: "], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
output: `['Human: Hi\n\nAssistant: Hello! How can I assist you today?<|endoftext|>']` |
shaowenchen/llama-2-7b-langchain-chat-gguf | shaowenchen | 2023-09-16T14:09:07Z | 719 | 2 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"chinese",
"7b",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
] | text-generation | 2023-09-16T07:09:33Z | ---
inference: false
language:
- zh
- en
license: other
model_creator: Photolens
model_link: https://huggingface.co/Photolens/llama-2-7b-langchain-chat
model_name: llama-2-7b-langchain-chat
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- chinese
- 7b
---
## Provided files
| Name | Quant method | Size |
| ------------------------------------- | ------------ | ------ |
| llama-2-7b-langchain-chat.Q2_K.gguf | Q2_K | 2.6 GB |
| llama-2-7b-langchain-chat.Q3_K.gguf | Q3_K | 3.1 GB |
| llama-2-7b-langchain-chat.Q3_K_L.gguf | Q3_K_L | 3.3 GB |
| llama-2-7b-langchain-chat.Q3_K_S.gguf | Q3_K_S | 2.7 GB |
| llama-2-7b-langchain-chat.Q4_0.gguf | Q4_0 | 3.6 GB |
| llama-2-7b-langchain-chat.Q4_1.gguf | Q4_1 | 3.9 GB |
| llama-2-7b-langchain-chat.Q4_K.gguf | Q4_K | 3.8 GB |
| llama-2-7b-langchain-chat.Q4_K_S.gguf | Q4_K_S | 3.6 GB |
| llama-2-7b-langchain-chat.Q5_0.gguf | Q5_0 | 4.3 GB |
| llama-2-7b-langchain-chat.Q5_1.gguf | Q5_1 | 4.7 GB |
| llama-2-7b-langchain-chat.Q5_K.gguf | Q5_K | 4.5 GB |
| llama-2-7b-langchain-chat.Q5_K_S.gguf | Q5_K_S | 4.3 GB |
| llama-2-7b-langchain-chat.Q6_K.gguf | Q6_K | 5.1 GB |
| llama-2-7b-langchain-chat.Q8_0.gguf | Q8_0 | 6.7 GB |
| llama-2-7b-langchain-chat.gguf | full | 13 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
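Once the container is up, the server exposes an OpenAI-style HTTP API; a minimal client call might look like the sketch below (endpoint path and fields follow llama-cpp-python's server, so double-check against the Swagger UI if they differ).

```python
# Minimal sketch of calling the llama-cpp-python server started above.
# Check http://localhost:8000/docs if the endpoint or fields differ.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={"prompt": "Q: What is LangChain?\nA:", "max_tokens": 128, "temperature": 0.7},
    timeout=120,
)
print(resp.json()["choices"][0]["text"])
```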
## Provided images
| Name | Quant method | Size |
| ------------------------------------------------- | ------------ | ------- |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q2_K | Q2_K | 6.72 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K | Q3_K | 7.64 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K_L | Q3_K_L | 8.27 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K_S | Q3_K_S | 6.97 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_0 | Q4_0 | 8.55 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_1 | Q4_1 | 9.41 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_K | Q4_K | 9.17 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_K_S | Q4_K_S | 8.72 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_0 | Q5_0 | 10.4 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_K | Q5_K | 10.6 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_K_S | Q5_K_S | 10.4 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/llama-2-7b-langchain-chat-gguf:Q2_K
```
and you can view http://localhost:8000/docs to see the swagger UI.
|
unsloth/mistral-7b | unsloth | 2024-03-22T15:01:42Z | 719 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"mistral7b",
"bnb",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-25T17:10:04Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- mistral
- mistral7b
- bnb
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
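As a minimal illustration (argument values are illustrative; the notebooks above are the authoritative reference), loading this base model with Unsloth looks roughly like:

```python
# Illustrative Unsloth loading snippet; see the official notebooks above
# for complete finetuning recipes.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b",
    max_seq_length=2048,
    load_in_4bit=True,
)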
|
arlineka/Brunhilde-13b | arlineka | 2024-03-07T01:45:25Z | 719 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T10:11:20Z | ---
license: cc-by-nc-4.0
tags:
- roleplay
- merge
model-index:
- name: Brunhilde-13b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b
name: Open LLM Leaderboard
---
# Brunhilde-13b
## Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arlineka/Brunhilde-13b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Alpaca Template
```
### Instruction:
{prompt}
### Response:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arlineka__Brunhilde-13b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.20|
|AI2 Reasoning Challenge (25-Shot)|60.49|
|HellaSwag (10-Shot) |83.49|
|MMLU (5-Shot) |56.18|
|TruthfulQA (0-shot) |52.35|
|Winogrande (5-shot) |75.53|
|GSM8k (5-shot) | 9.17|
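For reference, the reported average is simply the mean of the six benchmark scores:

```python
# The leaderboard average is the plain mean of the six benchmark scores above.
scores = [60.49, 83.49, 56.18, 52.35, 75.53, 9.17]
print(round(sum(scores) / len(scores), 2))  # 56.2
```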
|
Replete-AI/Phi-Stoma | Replete-AI | 2024-03-17T16:44:24Z | 719 | 2 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T21:46:12Z | ---
license: mit
language:
- en
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/frA6uuzjfyLWgo2JE3IoA.png"
---
# You answer nature's call?

### Named after the method used to create it, interleaving the layers of its predecessor to become far larger, giving it much more potential.
[Stoma](https://wowpedia.fandom.com/wiki/Stoma_the_Ancient) was an ancient treant of lore, and I couldn't think of a better naming convention for a model that was created using the passthrough method.
By concatenating layers from different LLMs, it can produce models with an exotic number of parameters (e.g., 9B with two 7B parameter models). These models are often referred to as "frankenmerges" or "Frankenstein models" by the community.
Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here](https://huggingface.co/Kquant03/Phi-Stoma/blob/main/mergekit_config.yml)...thanks to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for inspiring me to test out this script.
## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan. I have him to thank if fine-tuning this model turns out to be a success...he also helped me to make this model even larger than the prior one.
# How to run inference:
```python
import transformers
import torch
if __name__ == "__main__":
model_name = "Replete-AI/Phi-Stoma"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = (
transformers.AutoModelForCausalLM.from_pretrained(
model_name,
)
.to("cuda:0")
.eval()
)
messages = [
{"role": "user", "content": "Hello, who are you?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
input_ids_cutoff = inputs.size(dim=1)
with torch.no_grad():
generated_ids = model.generate(
input_ids=inputs,
use_cache=True,
max_new_tokens=512,
temperature=0.2,
top_p=0.95,
do_sample=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
completion = tokenizer.decode(
generated_ids[0][input_ids_cutoff:],
skip_special_tokens=True,
)
print(completion)
```
# Chat template
The model uses the same chat template as found in Mistral instruct models.
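The exact template string is not reproduced in this card; for reference only (confirm against the model's tokenizer/chat template before relying on it), the standard Mistral instruct format looks like:

```
<s>[INST] {prompt} [/INST] {response}</s>
```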
# [Join the Replete AI Discord here!](https://discord.gg/tG5aY4EX4T) |
Nitral-AI/Visual-LaylelemonMaidRP-7B | Nitral-AI | 2024-05-05T17:18:25Z | 719 | 6 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Nitral-AI/Infinitely-Laydiculous-7B",
"base_model:Nitral-AI/Stanta-Lelemon-Maid-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T18:14:07Z | ---
base_model:
- Nitral-AI/Infinitely-Laydiculous-7B
- Nitral-AI/Stanta-Lelemon-Maid-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

Heard you like Imatrix Quants, if so find them from lewdiculus here: https://huggingface.co/Lewdiculous/Visual-LaylelemonMaidRP-7B-GGUF-IQ-Imatrix
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:
 |
mradermacher/SM_Smaug_52B-i1-GGUF | mradermacher | 2024-05-10T16:14:06Z | 719 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"qwen2",
"en",
"base_model:Eurdem/SM_Smaug_52B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-09T17:23:41Z | ---
base_model: Eurdem/SM_Smaug_52B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Eurdem/SM_Smaug_52B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SM_Smaug_52B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ1_S.gguf) | i1-IQ1_S | 11.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ2_S.gguf) | i1-IQ2_S | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ2_M.gguf) | i1-IQ2_M | 17.5 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q2_K.gguf) | i1-Q2_K | 19.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 19.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 22.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ3_S.gguf) | i1-IQ3_S | 22.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ3_M.gguf) | i1-IQ3_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 24.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 27.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 27.5 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q4_0.gguf) | i1-Q4_0 | 29.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 29.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 30.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 35.3 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/SM_Smaug_52B-i1-GGUF/resolve/main/SM_Smaug_52B.i1-Q6_K.gguf) | i1-Q6_K | 42.0 | practically like static Q6_K |
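If you want to fetch one of these files programmatically rather than via the links, a minimal sketch with `huggingface_hub` (pick any filename from the table) is:

```python
# Download a single quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/SM_Smaug_52B-i1-GGUF",
    filename="SM_Smaug_52B.i1-Q4_K_M.gguf",
)
print(path)
```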
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MCZK/Qwen2-0.5B-Instruct-GGUF | MCZK | 2024-06-07T23:16:00Z | 719 | 0 | null | [
"gguf",
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-07T22:43:53Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
This is Qwen's [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) converted to GGUF format.
iMatrix has also been applied to the K-quantized models.
The iMatrix text used is TFMC's [c4_en_ja_imatrix.txt](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
|
CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF | CHE-72 | 2024-06-22T08:43:39Z | 719 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"base_model:baichuan-inc/Baichuan2-7B-Chat",
"region:us"
] | null | 2024-06-22T08:43:06Z | ---
base_model: baichuan-inc/Baichuan2-7B-Chat
language:
- en
- zh
license_name: baichuan2-community-license
license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf
tags:
- llama-cpp
- gguf-my-repo
tasks:
- text-generation
---
# CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF
This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF --hf-file baichuan2-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF --hf-file baichuan2-7b-chat-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF --hf-file baichuan2-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q8_0-GGUF --hf-file baichuan2-7b-chat-q8_0.gguf -c 2048
```
|
EleutherAI/pythia-160m-seed2 | EleutherAI | 2024-01-04T13:21:13Z | 718 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-15T19:26:55Z | Entry not found |
l3cube-pune/indic-sentence-bert-nli | l3cube-pune | 2023-06-11T14:59:00Z | 718 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"en",
"hi",
"mr",
"kn",
"ta",
"te",
"ml",
"gu",
"or",
"pa",
"bn",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2023-03-04T07:47:54Z | ---
pipeline_tag: sentence-similarity
license: cc-by-4.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- multilingual
- en
- hi
- mr
- kn
- ta
- te
- ml
- gu
- or
- pa
- bn
widget:
- source_sentence: दिवाळी आपण मोठ्या उत्साहाने साजरी करतो
sentences:
- दिवाळी आपण आनंदाने साजरी करतो
- दिवाळी हा दिव्यांचा सण आहे
example_title: Monolingual- Marathi
- source_sentence: हम दीपावली उत्साह के साथ मनाते हैं
sentences:
- हम दीपावली खुशियों से मनाते हैं
- दिवाली रोशनी का त्योहार है
example_title: Monolingual- Hindi
- source_sentence: અમે ઉત્સાહથી દિવાળી ઉજવીએ છીએ
sentences:
- દિવાળી આપણે ખુશીઓથી ઉજવીએ છીએ
- દિવાળી એ રોશનીનો તહેવાર છે
example_title: Monolingual- Gujarati
- source_sentence: आम्हाला भारतीय असल्याचा अभिमान आहे
sentences:
- हमें भारतीय होने पर गर्व है
- భారతీయులమైనందుకు గర్విస్తున్నాం
- અમને ભારતીય હોવાનો ગર્વ છે
example_title: Cross-lingual 1
- source_sentence: ਬਾਰਿਸ਼ ਤੋਂ ਬਾਅਦ ਬਗੀਚਾ ਸੁੰਦਰ ਦਿਖਾਈ ਦਿੰਦਾ ਹੈ
sentences:
- മഴയ്ക്ക് ശേഷം പൂന്തോട്ടം മനോഹരമായി കാണപ്പെടുന്നു
- ବର୍ଷା ପରେ ବଗିଚା ସୁନ୍ଦର ଦେଖାଯାଏ |
- बारिश के बाद बगीचा सुंदर दिखता है
example_title: Cross-lingual 2
---
# IndicSBERT
This is a MuRIL model (google/muril-base-cased) trained on the NLI dataset of ten major Indian Languages. <br>
The single model works for English, Hindi, Marathi, Kannada, Tamil, Telugu, Gujarati, Oriya, Punjabi, Malayalam, and Bengali.
The model also has cross-lingual capabilities. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434).
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/indic-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/indic-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/indic-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
``` |
facebook/mms-lid-1024 | facebook | 2023-06-13T10:18:46Z | 718 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"mms",
"ab",
"af",
"ak",
"am",
"ar",
"as",
"av",
"ay",
"az",
"ba",
"bm",
"be",
"bn",
"bi",
"bo",
"sh",
"br",
"bg",
"ca",
"cs",
"ce",
"cv",
"ku",
"cy",
"da",
"de",
"dv",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"fj",
"fi",
"fr",
"fy",
"ff",
"ga",
"gl",
"gn",
"gu",
"zh",
"ht",
"ha",
"he",
"hi",
"hu",
"hy",
"ig",
"ia",
"ms",
"is",
"it",
"jv",
"ja",
"kn",
"ka",
"kk",
"kr",
"km",
"ki",
"rw",
"ky",
"ko",
"kv",
"lo",
"la",
"lv",
"ln",
"lt",
"lb",
"lg",
"mh",
"ml",
"mr",
"mk",
"mg",
"mt",
"mn",
"mi",
"my",
"nl",
"no",
"ne",
"ny",
"oc",
"om",
"or",
"os",
"pa",
"pl",
"pt",
"ps",
"qu",
"ro",
"rn",
"ru",
"sg",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"es",
"sq",
"su",
"sv",
"sw",
"ta",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"ts",
"tr",
"uk",
"vi",
"wo",
"xh",
"yo",
"zu",
"za",
"dataset:google/fleurs",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-13T08:59:15Z | ---
tags:
- mms
language:
- ab
- af
- ak
- am
- ar
- as
- av
- ay
- az
- ba
- bm
- be
- bn
- bi
- bo
- sh
- br
- bg
- ca
- cs
- ce
- cv
- ku
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fr
- fy
- ff
- ga
- gl
- gn
- gu
- zh
- ht
- ha
- he
- hi
- sh
- hu
- hy
- ig
- ia
- ms
- is
- it
- jv
- ja
- kn
- ka
- kk
- kr
- km
- ki
- rw
- ky
- ko
- kv
- lo
- la
- lv
- ln
- lt
- lb
- lg
- mh
- ml
- mr
- ms
- mk
- mg
- mt
- mn
- mi
- my
- zh
- nl
- 'no'
- 'no'
- ne
- ny
- oc
- om
- or
- os
- pa
- pl
- pt
- ms
- ps
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- ro
- rn
- ru
- sg
- sk
- sl
- sm
- sn
- sd
- so
- es
- sq
- su
- sv
- sw
- ta
- tt
- te
- tg
- tl
- th
- ti
- ts
- tr
- uk
- ms
- vi
- wo
- xh
- ms
- yo
- ms
- zu
- za
license: cc-by-nc-4.0
datasets:
- google/fleurs
metrics:
- acc
---
# Massively Multilingual Speech (MMS) - Finetuned LID
This checkpoint is a model fine-tuned for speech language identification (LID) and part of Facebook's [Massive Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/).
This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and classifies raw audio input to a probability distribution over 1024 output classes (each class representing a language).
The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 1024 languages.
## Table of Contents
- [Example](#example)
- [Supported Languages](#supported-languages)
- [Model details](#model-details)
- [Additional links](#additional-links)
## Example
This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to identify
the spoken language of an audio sample. It can recognize the [following 1024 languages](#supported-languages).
Let's look at a simple example.
First, we install transformers and some other libraries
```
pip install torch accelerate torchaudio datasets
pip install --upgrade transformers
```
**Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version
is not yet available [on PyPI](https://pypi.org/project/transformers/) make sure to install `transformers` from
source:
```
pip install git+https://github.com/huggingface/transformers.git
```
Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled at 16,000 Hz (16 kHz).
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
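If you are working with local audio files instead of a streaming dataset, you can resample them to 16 kHz yourself. The snippet below is a minimal sketch using `torchaudio`; the file path is only a placeholder.
```py
import torchaudio

# Load a local file (placeholder path) and resample it to 16 kHz
waveform, orig_sr = torchaudio.load("audio.wav")
waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)

# Collapse stereo channels and convert to a 1-D numpy array, matching the samples above
local_sample = waveform.mean(dim=0).numpy()
```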
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-1024"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data and pass it to the model to classify it into a language, just as we usually do for other Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition)
```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```py
model.config.id2label.values()
```
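Beyond the single best guess, you can also look at the full probability distribution over all 1024 languages. A small sketch, reusing `model`, `processor` and `en_sample` from the example above:
```py
import torch

inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn logits into probabilities and list the five most likely languages
probs = torch.softmax(logits, dim=-1)[0]
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values, top5.indices):
    print(model.config.id2label[idx.item()], round(score.item(), 3))
```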
For more details about the architecture, please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
## Supported Languages
This model supports 1024 languages. Click the following to toggle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3).
You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details>
<summary>Click to toggle</summary>
- ara
- cmn
- eng
- spa
- fra
- mlg
- swe
- por
- vie
- ful
- sun
- asm
- ben
- zlm
- kor
- ind
- hin
- tuk
- urd
- aze
- slv
- mon
- hau
- tel
- swh
- bod
- rus
- tur
- heb
- mar
- som
- tgl
- tat
- tha
- cat
- ron
- mal
- bel
- pol
- yor
- nld
- bul
- hat
- afr
- isl
- amh
- tam
- hun
- hrv
- lit
- cym
- fas
- mkd
- ell
- bos
- deu
- sqi
- jav
- kmr
- nob
- uzb
- snd
- lat
- nya
- grn
- mya
- orm
- lin
- hye
- yue
- pan
- jpn
- kaz
- npi
- kik
- kat
- guj
- kan
- tgk
- ukr
- ces
- lav
- bak
- khm
- cak
- fao
- glg
- ltz
- xog
- lao
- mlt
- sin
- aka
- sna
- che
- mam
- ita
- quc
- aiw
- srp
- mri
- tuv
- nno
- pus
- eus
- kbp
- gur
- ory
- lug
- crh
- bre
- luo
- nhx
- slk
- ewe
- xsm
- fin
- rif
- dan
- saq
- yid
- yao
- mos
- quh
- hne
- xon
- new
- dtp
- quy
- est
- ddn
- dyu
- ttq
- bam
- pse
- uig
- sck
- ngl
- tso
- mup
- dga
- seh
- lis
- wal
- ctg
- mip
- bfz
- bxk
- ceb
- kru
- war
- khg
- bbc
- thl
- nzi
- vmw
- mzi
- ycl
- zne
- sid
- asa
- tpi
- bmq
- box
- zpu
- gof
- nym
- cla
- bgq
- bfy
- hlb
- qxl
- teo
- fon
- sda
- kfx
- bfa
- mag
- tzh
- pil
- maj
- maa
- kdt
- ksb
- lns
- btd
- rej
- pap
- ayr
- any
- mnk
- adx
- gud
- krc
- onb
- xal
- ctd
- nxq
- ava
- blt
- lbw
- hyw
- udm
- zar
- tzo
- kpv
- san
- xnj
- kek
- chv
- kcg
- kri
- ati
- bgw
- mxt
- ybb
- btx
- dgi
- nhy
- dnj
- zpz
- yba
- lon
- smo
- men
- ium
- mgd
- taq
- nga
- nsu
- zaj
- tly
- prk
- zpt
- akb
- mhr
- mxb
- nuj
- obo
- kir
- bom
- run
- zpg
- hwc
- mnw
- ubl
- kin
- xtm
- hnj
- mpm
- rkt
- miy
- luc
- mih
- kne
- mib
- flr
- myv
- xmm
- knk
- iba
- gux
- pis
- zmz
- ses
- dav
- lif
- qxr
- dig
- kdj
- wsg
- tir
- gbm
- mai
- zpc
- kus
- nyy
- mim
- nan
- nyn
- gog
- ngu
- tbz
- hoc
- nyf
- sus
- guk
- gwr
- yaz
- bcc
- sbd
- spp
- hak
- grt
- kno
- oss
- suk
- spy
- nij
- lsm
- kaa
- bem
- rmy
- kqn
- nim
- ztq
- nus
- bib
- xtd
- ach
- mil
- keo
- mpg
- gjn
- zaq
- kdh
- dug
- sah
- awa
- kff
- dip
- rim
- nhe
- pcm
- kde
- tem
- quz
- mfq
- las
- bba
- kbr
- taj
- dyo
- zao
- lom
- shk
- dik
- dgo
- zpo
- fij
- bgc
- xnr
- bud
- kac
- laj
- mev
- maw
- quw
- kao
- dag
- ktb
- lhu
- zab
- mgh
- shn
- otq
- lob
- pbb
- oci
- zyb
- bsq
- mhi
- dzo
- zas
- guc
- alz
- ctu
- wol
- guw
- mnb
- nia
- zaw
- mxv
- bci
- sba
- kab
- dwr
- nnb
- ilo
- mfe
- srx
- ruf
- srn
- zad
- xpe
- pce
- ahk
- bcl
- myk
- haw
- mad
- ljp
- bky
- gmv
- nag
- nav
- nyo
- kxm
- nod
- sag
- zpl
- sas
- myx
- sgw
- old
- irk
- acf
- mak
- kfy
- zai
- mie
- zpm
- zpi
- ote
- jam
- kpz
- lgg
- lia
- nhi
- mzm
- bdq
- xtn
- mey
- mjl
- sgj
- kdi
- kxc
- miz
- adh
- tap
- hay
- kss
- pam
- gor
- heh
- nhw
- ziw
- gej
- yua
- itv
- shi
- qvw
- mrw
- hil
- mbt
- pag
- vmy
- lwo
- cce
- kum
- klu
- ann
- mbb
- npl
- zca
- pww
- toc
- ace
- mio
- izz
- kam
- zaa
- krj
- bts
- eza
- zty
- hns
- kki
- min
- led
- alw
- tll
- rng
- pko
- toi
- iqw
- ncj
- toh
- umb
- mog
- hno
- wob
- gxx
- hig
- nyu
- kby
- ban
- syl
- bxg
- nse
- xho
- zae
- mkw
- nch
- ibg
- mas
- qvz
- bum
- bgd
- mww
- epo
- tzm
- zul
- bcq
- lrc
- xdy
- tyv
- ibo
- loz
- mza
- abk
- azz
- guz
- arn
- ksw
- lus
- tos
- gvr
- top
- ckb
- mer
- pov
- lun
- rhg
- knc
- sfw
- bev
- tum
- lag
- nso
- bho
- ndc
- maf
- gkp
- bax
- awn
- ijc
- qug
- lub
- srr
- mni
- zza
- ige
- dje
- mkn
- bft
- tiv
- otn
- kck
- kqs
- gle
- lua
- pdt
- swk
- mgw
- ebu
- ada
- lic
- skr
- gaa
- mfa
- vmk
- mcn
- bto
- lol
- bwr
- unr
- dzg
- hdy
- kea
- bhi
- glk
- mua
- ast
- nup
- sat
- ktu
- bhb
- zpq
- coh
- bkm
- gya
- sgc
- dks
- ncl
- tui
- emk
- urh
- ego
- ogo
- tsc
- idu
- igb
- ijn
- njz
- ngb
- tod
- jra
- mrt
- zav
- tke
- its
- ady
- bzw
- kng
- kmb
- lue
- jmx
- tsn
- bin
- ble
- gom
- ven
- sef
- sco
- her
- iso
- trp
- glv
- haq
- toq
- okr
- kha
- wof
- rmn
- sot
- kaj
- bbj
- sou
- mjt
- trd
- gno
- mwn
- igl
- rag
- eyo
- div
- efi
- nde
- mfv
- mix
- rki
- kjg
- fan
- khw
- wci
- bjn
- pmy
- bqi
- ina
- hni
- mjx
- kuj
- aoz
- the
- tog
- tet
- nuz
- ajg
- ccp
- mau
- ymm
- fmu
- tcz
- xmc
- nyk
- ztg
- knx
- snk
- zac
- esg
- srb
- thq
- pht
- wes
- rah
- pnb
- ssy
- zpv
- kpo
- phr
- atd
- eto
- xta
- mxx
- mui
- uki
- tkt
- mgp
- xsq
- enq
- nnh
- qxp
- zam
- bug
- bxr
- maq
- tdt
- khb
- mrr
- kas
- zgb
- kmw
- lir
- vah
- dar
- ssw
- hmd
- jab
- iii
- peg
- shr
- brx
- rwr
- bmb
- kmc
- mji
- dib
- pcc
- nbe
- mrd
- ish
- kai
- yom
- zyn
- hea
- ewo
- bas
- hms
- twh
- kfq
- thr
- xtl
- wbr
- bfb
- wtm
- mjc
- blk
- lot
- dhd
- swv
- wbm
- zzj
- kge
- mgm
- niq
- zpj
- bwx
- bde
- mtr
- gju
- kjp
- mbz
- haz
- lpo
- yig
- qud
- shy
- gjk
- ztp
- nbl
- aii
- kun
- say
- mde
- sjp
- bns
- brh
- ywq
- msi
- anr
- mrg
- mjg
- tan
- tsg
- tcy
- kbl
- mdr
- mks
- noe
- tyz
- zpa
- ahr
- aar
- wuu
- khr
- kbd
- kex
- bca
- nku
- pwr
- hsn
- ort
- ott
- swi
- kua
- tdd
- msm
- bgp
- nbm
- mxy
- abs
- zlj
- ebo
- lea
- dub
- sce
- xkb
- vav
- bra
- ssb
- sss
- nhp
- kad
- kvx
- lch
- tts
- zyj
- kxp
- lmn
- qvi
- lez
- scl
- cqd
- ayb
- xbr
- nqg
- dcc
- cjk
- bfr
- zyg
- mse
- gru
- mdv
- bew
- wti
- arg
- dso
- zdj
- pll
- mig
- qxs
- bol
- drs
- anp
- chw
- bej
- vmc
- otx
- xty
- bjj
- vmz
- ibb
- gby
- twx
- tig
- thz
- tku
- hmz
- pbm
- mfn
- nut
- cyo
- mjw
- cjm
- tlp
- naq
- rnd
- stj
- sym
- jax
- btg
- tdg
- sng
- nlv
- kvr
- pch
- fvr
- mxs
- wni
- mlq
- kfr
- mdj
- osi
- nhn
- ukw
- tji
- qvj
- nih
- bcy
- hbb
- zpx
- hoj
- cpx
- ogc
- cdo
- bgn
- bfs
- vmx
- tvn
- ior
- mxa
- btm
- anc
- jit
- mfb
- mls
- ets
- goa
- bet
- ikw
- pem
- trf
- daq
- max
- rad
- njo
- bnx
- mxl
- mbi
- nba
- zpn
- zts
- mut
- hnd
- mta
- hav
- hac
- ryu
- abr
- yer
- cld
- zag
- ndo
- sop
- vmm
- gcf
- chr
- cbk
- sbk
- bhp
- odk
- mbd
- nap
- gbr
- mii
- czh
- xti
- vls
- gdx
- sxw
- zaf
- wem
- mqh
- ank
- yaf
- vmp
- otm
- sdh
- anw
- src
- mne
- wss
- meh
- kzc
- tma
- ttj
- ots
- ilp
- zpr
- saz
- ogb
- akl
- nhg
- pbv
- rcf
- cgg
- mku
- bez
- mwe
- mtb
- gul
- ifm
- mdh
- scn
- lki
- xmf
- sgd
- aba
- cos
- luz
- zpy
- stv
- kjt
- mbf
- kmz
- nds
- mtq
- tkq
- aee
- knn
- mbs
- mnp
- ema
- bar
- unx
- plk
- psi
- mzn
- cja
- sro
- mdw
- ndh
- vmj
- zpw
- kfu
- bgx
- gsw
- fry
- zpe
- zpd
- bta
- psh
- zat
</details>
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multilingual speech language identification (audio classification) model
- **Language(s):** 1024 languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16,000 Hz (16 kHz)
- **Cite as:**
```
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
|
TheBloke/SlimOrca-13B-GGUF | TheBloke | 2023-11-30T21:33:38Z | 718 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:ajibawa-2023/SlimOrca-ShareGPT",
"base_model:ajibawa-2023/SlimOrca-13B",
"license:cc-by-nc-nd-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-11-30T21:26:08Z | ---
base_model: ajibawa-2023/SlimOrca-13B
datasets:
- Open-Orca/SlimOrca
- ajibawa-2023/SlimOrca-ShareGPT
inference: false
language:
- en
license: cc-by-nc-nd-4.0
model_creator: Feynman Innovations
model_name: SlimOrca 13B
model_type: llama
prompt_template: 'This is a conversation with your Assistant. It is a computer program
designed to help you with various tasks such as answering questions, providing recommendations,
and helping with decision making. You can ask it anything you want and it will do
its best to give you accurate and relevant information.
Context
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SlimOrca 13B - GGUF
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [SlimOrca 13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovations's SlimOrca 13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SlimOrca-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SlimOrca-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/SlimOrca-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Ajibawa-Assistant
```
This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovations's SlimOrca 13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [slimorca-13b.Q2_K.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [slimorca-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [slimorca-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [slimorca-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [slimorca-13b.Q4_0.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [slimorca-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [slimorca-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [slimorca-13b.Q5_0.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [slimorca-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [slimorca-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [slimorca-13b.Q6_K.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [slimorca-13b.Q8_0.gguf](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF/blob/main/slimorca-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SlimOrca-13B-GGUF and below it, a specific filename to download, such as: slimorca-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SlimOrca-13B-GGUF slimorca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SlimOrca-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SlimOrca-13B-GGUF slimorca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m slimorca-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./slimorca-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./slimorca-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
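As a rough, untested sketch of the llama-cpp-python route (the import path differs between LangChain versions, so treat the module name below as an assumption and check the guides above for the current API):

```python
# Older LangChain versions: from langchain.llms import LlamaCpp
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./slimorca-13b.Q4_K_M.gguf",  # download the GGUF file first
    n_ctx=4096,       # context length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "This is a conversation with your Assistant. It is a computer program designed to help you "
    "with various tasks such as answering questions, providing recommendations, and helping with "
    "decision making. You can ask it anything you want and it will do its best to give you accurate "
    "and relevant information.\n\nContext\nYou are a helpful AI assistant.\n\n"
    "USER: Write a story about llamas.\nASSISTANT:"
)
print(llm.invoke(prompt))  # on very old LangChain versions, call llm(prompt) instead
```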
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations's SlimOrca 13B
**SlimOrca-13B: A General Purpose Intelligent Model**
This Model is trained on refined version of SlimOrca made available by [Open-Orca](https://huggingface.co/Open-Orca) team.
The idea was to check how this Model will perform in the absence of "system" prompt/instruction.
This Model is very good at various types of General Purpose content generation such as Q&A (including multiple choice), Articles from Summary, Sentiment Analysis,
Context & Hypothesis, Reviews, Erotic story generation, etc.
It can also generate Uncensored content. Kindly be careful while generating Uncensored content, as you will be responsible for what you
generate.
It is trained on 517,981 sets of conversations, each set having 2 conversations. I have shared this [data](https://huggingface.co/datasets/ajibawa-2023/SlimOrca-ShareGPT).
All the credit goes to the Open-Orca team for releasing SlimOrca dataset.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. For 3 epochs, training took almost 11 days. The DeepSpeed codebase was used for training.
The model was trained on top of Llama-2 by Meta.
This is a fully fine-tuned model. Links for quantized models are given below.
**GPTQ GGML & AWQ**
GPTQ: TBA
GGUF: TBA
AWQ: TBA
**Example Prompt:**
```
This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above Prompt as per your requirements. I have used the ShareGPT/Vicuna format v1.1.
I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development.
Thank you for your love & support.
**Example Output**
Example 1

Example 2

<!-- original-model-card end -->
|
alnrg2arg/blockchainlabs_test3_seminar | alnrg2arg | 2024-02-02T01:55:01Z | 718 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T01:51:09Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
---
# blockchainlabs_test3_seminar
blockchainlabs_test3_seminar is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16 # bfloat16 is faster than float16 for training.
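# Note added for illustration (not part of the original config): to actually build this merge,
# a mergekit CLI call along these lines could be used, with paths given here as assumptions:
#   mergekit-yaml config.yml ./blockchainlabs_test3_seminar --cuda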
``` |
Lvxy1117/amber_fine_tune_sgall | Lvxy1117 | 2024-02-14T03:47:44Z | 718 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T15:13:52Z | ---
license: apache-2.0
---
# Model Card for Model ID
Amber fine-tuned model trained on the ShareGPT 90k dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
btburner1030/5DyMTzyngDXHHudoRwAY64BGBrFYyFwcHjzHHFP21RoLf7Rt_vgg | btburner1030 | 2024-02-23T05:50:16Z | 718 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-18T05:14:05Z | Entry not found |
KoboldAI/Mixtral-8x7B-Holodeck-v1-GGUF | KoboldAI | 2024-02-23T12:22:44Z | 718 | 14 | null | [
"gguf",
"pytorch",
"mixtral",
"fine-tuned",
"moe",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-02-21T19:41:35Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---
# Mixtral 8x7B - Holodeck
This is the GGUF version of the model meant for use with [Koboldcpp](https://koboldai.org/cpp)
## Model Description
Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.
## Training data
The training data contains around 3000 ebooks in various genres.
Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
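For example, an illustrative prompt (not taken from the dataset) could therefore start like this to steer the genre of the generation:
```
[Genre: horror, romance]
It was a dark and stormy night...
```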
***
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). |
damerajee/Gaja-vv1 | damerajee | 2024-02-24T15:46:35Z | 718 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T15:10:05Z | ---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Replete-AI/Phi-Delthanar | Replete-AI | 2024-03-17T16:44:36Z | 718 | 2 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T20:40:35Z | ---
license: mit
language:
- en
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/iYImJKf2HZZZJ9IwDSN00.png"
---
# The forest is with you.

### Named after the method used to create it, interleaving the layers of its predecessor to become far larger, giving it much more potential.
[Del'thanar](https://wowpedia.fandom.com/wiki/Del%27thanar_Sagewood) is a supposed ancient treeant, and I couldn't think of a better naming convention for a model that was created using the passthrough method.
By concatenating layers from different LLMs, it can produce models with an exotic number of parameters (e.g., 9B with two 7B parameter models). These models are often referred to as "frankenmerges" or "Frankenstein models" by the community.
Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here](https://huggingface.co/Kquant03/Phi-Delthanar/blob/main/mergekit_config.yml)...thanks to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for inspiring me to test out this script.
## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan. I have him to thank if fine-tuning this model turns out to be a success...he also helped me to make this model even larger than the prior one.
# How to run inference:
```python
import transformers
import torch
if __name__ == "__main__":
model_name = "Replete-AI/Phi-Delthanar"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = (
transformers.AutoModelForCausalLM.from_pretrained(
model_name,
)
.to("cuda:0")
.eval()
)
messages = [
{"role": "user", "content": "Hello, who are you?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
input_ids_cutoff = inputs.size(dim=1)
with torch.no_grad():
generated_ids = model.generate(
input_ids=inputs,
use_cache=True,
max_new_tokens=512,
temperature=0.2,
top_p=0.95,
do_sample=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
completion = tokenizer.decode(
generated_ids[0][input_ids_cutoff:],
skip_special_tokens=True,
)
print(completion)
```
# Chat template
The model uses the same chat template as found in Mistral instruct models:
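For reference, the Mistral-instruct style template looks roughly like this; in practice `tokenizer.apply_chat_template`, as used in the inference example above, applies it for you:
```
<s>[INST] {prompt} [/INST] {response}</s>[INST] {next prompt} [/INST]
```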
# [Join the Replete AI Discord here!](https://discord.gg/tG5aY4EX4T) |
RefalMachine/ruadapt_mistral7b_full_vo_1e4 | RefalMachine | 2024-05-29T12:20:43Z | 718 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T17:38:58Z | ---
license: apache-2.0
library_name: transformers
model-index:
- name: ruadapt_mistral7b_full_vo_1e4
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RefalMachine/ruadapt_mistral7b_full_vo_1e4
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RefalMachine__ruadapt_mistral7b_full_vo_1e4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.88|
|AI2 Reasoning Challenge (25-Shot)|55.46|
|HellaSwag (10-Shot) |79.55|
|MMLU (5-Shot) |60.34|
|TruthfulQA (0-shot) |42.53|
|Winogrande (5-shot) |74.43|
|GSM8k (5-shot) |28.96|
|
Gnider/model_old_working | Gnider | 2024-05-26T12:44:16Z | 718 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"arxiv:2309.10931",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-26T12:01:24Z | ---
language:
- ru
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/ru-gpts"
---
# rugpt3small\_based\_on\_gpt2
The model architecture design, pretraining, and evaluation are documented in our preprint: [**A Family of Pretrained Transformer Language Models for Russian**](https://arxiv.org/abs/2309.10931).
The model was pretrained with a sequence length of 1024 using the transformers library by the [SberDevices](https://sberdevices.ru/) team on 80B tokens for around 3 epochs. After that, the model was finetuned with a context size of 2048.
Total training time was around one week on 32 GPUs.
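No usage snippet is included in this card; a minimal generation sketch with the standard `transformers` causal-LM API (the repository id below is this upload's; the prompt is only an example) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id of this upload; the underlying architecture is GPT-2
model_id = "Gnider/model_old_working"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Russian prompt ("Alexander Pushkin was born in")
inputs = tokenizer("Александр Пушкин родился в", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```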
# Authors
+ NLP core team RnD [Telegram channel](https://t.me/nlpcoreteam):
+ Dmitry Zmitrovich
# Cite us
```
@misc{zmitrovich2023family,
title={A Family of Pretrained Transformer Language Models for Russian},
author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
year={2023},
eprint={2309.10931},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
cs-giung/convnext-v1-base-imagenet21k | cs-giung | 2024-06-01T16:25:07Z | 718 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-06-01T13:04:02Z | ---
license: apache-2.0
---
# ConvNext
ConvNext model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545).
The weights were converted from the `convnext_base_22k_224.pth` file presented in the [official repository](https://github.com/facebookresearch/ConvNeXt).
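A minimal feature-extraction sketch, assuming the checkpoint follows the standard `transformers` ConvNeXt layout; since the card does not state whether a 21k-class head is included, only pooled features are pulled here:
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConvNextModel

model_id = "cs-giung/convnext-v1-base-imagenet21k"
processor = AutoImageProcessor.from_pretrained(model_id)
model = ConvNextModel.from_pretrained(model_id)

# Any RGB image works; this COCO sample URL is just an example
image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pooled image embedding of shape (batch_size, hidden_size)
print(outputs.pooler_output.shape)
```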
|
QuantFactory/SOVL-Mega-Mash-V2-L3-8B-GGUF | QuantFactory | 2024-06-08T11:37:29Z | 718 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"arxiv:2403.19522",
"base_model:saishf/SOVL-Mega-Mash-V2-L3-8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-06T10:27:18Z | ---
license: cc-by-nc-4.0
base_model: saishf/SOVL-Mega-Mash-V2-L3-8B
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
# QuantFactory/SOVL-Mega-Mash-V2-L3-8B-GGUF
This is quantized version of [saishf/SOVL-Mega-Mash-V2-L3-8B](https://huggingface.co/saishf/SOVL-Mega-Mash-V2-L3-8B) created using llama.cpp
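A minimal local-inference sketch with `llama-cpp-python`; the quant filename below is illustrative, so substitute whichever GGUF file you download from this repo:
```python
from llama_cpp import Llama

# Filename is illustrative; use the quant file downloaded from this repo
llm = Llama(model_path="SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a one-sentence introduction for a friendly assistant.", max_tokens=64)
print(out["choices"][0]["text"])
```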
# Model Description
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3) as a base.
### Models Merged
The following models were included in the merge:
* [saishf/SOVLish-Maid-L3-8B](https://huggingface.co/saishf/SOVLish-Maid-L3-8B)
* [saishf/Merge-Mayhem-L3-V2](https://huggingface.co/saishf/Merge-Mayhem-L3-V2)
* [saishf/Merge-Mayhem-L3-V2.1](https://huggingface.co/saishf/Merge-Mayhem-L3-V2.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: saishf/Neural-SOVLish-Devil-8B-L3
- model: saishf/Merge-Mayhem-L3-V2
- model: saishf/Merge-Mayhem-L3-V2.1
- model: saishf/SOVLish-Maid-L3-8B
merge_method: model_stock
base_model: saishf/Neural-SOVLish-Devil-8B-L3
dtype: bfloat16
``` |
Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF | Ayyystin | 2024-06-07T21:31:40Z | 718 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:hooking-dev/sexyGPT-Uncensored",
"license:apache-2.0",
"region:us"
] | null | 2024-06-07T21:31:38Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: hooking-dev/sexyGPT-Uncensored
---
# Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF
This model was converted to GGUF format from [`hooking-dev/sexyGPT-Uncensored`](https://huggingface.co/hooking-dev/sexyGPT-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hooking-dev/sexyGPT-Uncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF --hf-file sexygpt-uncensored-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF --hf-file sexygpt-uncensored-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF --hf-file sexygpt-uncensored-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo Ayyystin/sexyGPT-Uncensored-Q4_0-GGUF --hf-file sexygpt-uncensored-q4_0.gguf -c 2048
```
|
simonbutt/yi-1.5-6b-tofutune-gguf | simonbutt | 2024-06-12T23:14:34Z | 718 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/yi-1.5-6b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T23:12:37Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/yi-1.5-6b-bnb-4bit
---
# Uploaded model
- **Developed by:** simonbutt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/yi-1.5-6b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
huggingartists/madonna | huggingartists | 2022-06-16T21:35:29Z | 717 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/madonna",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/madonna
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/676c1c425eaa8e7600136c56af6dfada.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Madonna</div>
<a href="https://genius.com/artists/madonna">
<div style="text-align: center; font-size: 14px;">@madonna</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Madonna.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/madonna).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/madonna")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2abhif57/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Madonna's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2eok9fmu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2eok9fmu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/madonna')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/madonna")
model = AutoModelWithLMHead.from_pretrained("huggingartists/madonna")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
EllyPony/flutterbot | EllyPony | 2024-05-08T03:28:01Z | 717 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"mylittlepony",
"fluttershy",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-08-18T02:59:42Z | ---
tags:
- conversational
- mylittlepony
- fluttershy
---
# Conversational Fluttershy bot made with Microsoft's GPT template.
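No usage example is given; a DialoGPT-style single-turn sketch with `transformers` could look like the following (the EOS-token turn separator is an assumption about how the bot was trained):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EllyPony/flutterbot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT-style prompting: user turn terminated by the EOS token (assumed convention)
prompt = "Hello Fluttershy, how are you today?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(input_ids, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```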
|
timm/mvitv2_small.fb_in1k | timm | 2024-02-10T23:42:56Z | 717 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2112.01526",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-13T00:49:07Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for mvitv2_small.fb_in1k
A MViT-v2 (multi-scale ViT) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 34.9
- GMACs: 7.0
- Activations (M): 28.1
- Image size: 224 x 224
- **Papers:**
- MViTv2: Improved Multiscale Vision Transformers for Classification and Detection: https://arxiv.org/abs/2112.01526
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/mvit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mvitv2_small.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mvitv2_small.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 49, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
|
TheBloke/lzlv_70B-GGUF | TheBloke | 2023-10-26T00:10:47Z | 717 | 39 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:lizpreciatior/lzlv_70b_fp16_hf",
"license:cc-by-nc-2.0",
"text-generation-inference",
"region:us"
] | null | 2023-10-25T20:55:48Z | ---
base_model: lizpreciatior/lzlv_70b_fp16_hf
inference: false
license: cc-by-nc-2.0
model_creator: A Guy
model_name: Lzlv 70B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Lzlv 70B - GGUF
- Model creator: [A Guy](https://huggingface.co/lizpreciatior)
- Original model: [Lzlv 70B](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
<!-- description start -->
## Description
This repo contains GGUF format model files for [A Guy's Lzlv 70B](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lzlv_70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lzlv_70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lzlv_70B-GGUF)
* [A Guy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [A Guy's Lzlv 70B](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [lzlv_70b_fp16_hf.Q2_K.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [lzlv_70b_fp16_hf.Q3_K_S.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [lzlv_70b_fp16_hf.Q3_K_M.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [lzlv_70b_fp16_hf.Q3_K_L.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [lzlv_70b_fp16_hf.Q4_0.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [lzlv_70b_fp16_hf.Q4_K_S.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [lzlv_70b_fp16_hf.Q4_K_M.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [lzlv_70b_fp16_hf.Q5_0.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [lzlv_70b_fp16_hf.Q5_K_S.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [lzlv_70b_fp16_hf.Q5_K_M.gguf](https://huggingface.co/TheBloke/lzlv_70B-GGUF/blob/main/lzlv_70b_fp16_hf.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| lzlv_70b_fp16_hf.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| lzlv_70b_fp16_hf.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `lzlv_70b_fp16_hf.Q6_K.gguf-split-a`
* `lzlv_70b_fp16_hf.Q6_K.gguf-split-b`
### q8_0
Please download:
* `lzlv_70b_fp16_hf.Q8_0.gguf-split-a`
* `lzlv_70b_fp16_hf.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat lzlv_70b_fp16_hf.Q6_K.gguf-split-* > lzlv_70b_fp16_hf.Q6_K.gguf && rm lzlv_70b_fp16_hf.Q6_K.gguf-split-*
cat lzlv_70b_fp16_hf.Q8_0.gguf-split-* > lzlv_70b_fp16_hf.Q8_0.gguf && rm lzlv_70b_fp16_hf.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B lzlv_70b_fp16_hf.Q6_K.gguf-split-a + lzlv_70b_fp16_hf.Q6_K.gguf-split-b lzlv_70b_fp16_hf.Q6_K.gguf
del lzlv_70b_fp16_hf.Q6_K.gguf-split-a lzlv_70b_fp16_hf.Q6_K.gguf-split-b
COPY /B lzlv_70b_fp16_hf.Q8_0.gguf-split-a + lzlv_70b_fp16_hf.Q8_0.gguf-split-b lzlv_70b_fp16_hf.Q8_0.gguf
del lzlv_70b_fp16_hf.Q8_0.gguf-split-a lzlv_70b_fp16_hf.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/lzlv_70B-GGUF and below it, a specific filename to download, such as: lzlv_70b_fp16_hf.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/lzlv_70B-GGUF lzlv_70b_fp16_hf.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/lzlv_70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lzlv_70B-GGUF lzlv_70b_fp16_hf.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m lzlv_70b_fp16_hf.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/lzlv_70B-GGUF", model_file="lzlv_70b_fp16_hf.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
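As a rough illustration (paths and parameters are placeholders, and the linked guides remain the authoritative reference), a llama-cpp-python + LangChain setup could look like:
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="lzlv_70b_fp16_hf.Q4_K_M.gguf",  # a GGUF file downloaded from this repo
    n_gpu_layers=32,  # set to 0 for CPU-only inference
    n_ctx=4096,
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three creative writing tips.\n\n### Response:\n"
)
print(llm(prompt))
```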
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: A Guy's Lzlv 70B
# lzlv_70B
## A Mythomax/MLewd_13B-style merge of selected 70B models
A multi-model merge of several LLaMA2 70B finetunes for roleplaying and creative work. The goal was to create a model that combines creativity with intelligence for an enhanced experience.
Did it work? Probably, maybe. It seemed subjectively better than each of the individual models in my tests.
GGUF 4_K_M + 5_K_M can be found here: https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf/settings
## Procedure:
Models used:
- **NousResearch/Nous-Hermes-Llama2-70b** - A great model for roleplaying, but not the best at following complex instructions.
- **Xwin-LM/Xwin-LM-70B-V0.1** - Excellent at following instructions and quite creative out of the box, so it seemed like the best available model to act as the base for the merge.
- **Doctor-Shotgun/Mythospice-70b** - The wildcard of the three. I was looking for a creative, NSFW-oriented model and came across this while digging through hf. I hadn't heard of it before and apparently no one had bothered to release a quantized version of this model. So I downloaded it and did it myself to test it. It turned out to be more or less what I was looking for as my third component, so I used it here.
A big thank you to the creators of the models above. If you look up Mythospice, you will notice that it also includes Nous-Hermes so it's technically present twice in this mix. This is apparently common practice amongst the cool kids who do 13B models so I don't think this hurts the model.
The merging process was heavily inspired by Undi95's approach in Undi95/MXLewdMini-L2-13B. To be specific, the ratios are:
Component 1: Merge of Mythospice x Xwin with SLERP gradient [0.25, 0.3, 0.5].
Component 2: Merge Xwin x Hermes with SLERP gradient [0.4, 0.3, 0.25].
Finally, both Component 1 and Component 2 were merged with SLERP using weight 0.5.
## Performance
I tested this model for a few days before publishing it. It seems to more or less retain the instruction-following capabilities of Xwin-70B, while seeming to have adopted a lot of the creativity of the other two models.
It handled my more complex scenarios that creative models otherwise tend to struggle with quite well. At the same time, its outputs felt more creative and possibly a bit more nsfw-inclined than Xwin-70b.
So, is it better? Feels like it to me, subjectively. Is it really better? No clue, test it.
## Prompt format:
Vicuna
USER: [Prompt]
ASSISTANT:
<!-- original-model-card end -->
|
lodrick-the-lafted/Kaiju-A-57B | lodrick-the-lafted | 2024-01-21T22:26:24Z | 717 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-27T05:37:51Z | ---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
<img src=https://huggingface.co/lodrick-the-lafted/Kaiju-A-57B/resolve/main/kaiju.png>
## Kaiju-A-57B
I made this model as an experiment for /r/LocalLlama, who've all wanted a Yi graft like Goliath.
I took the goliath-120B template and used the same proportions to blend Tess-M-v1.3 and Tess-M-v1.2. The mergekit yaml is in the repo.
I chose these two as there are still precious few Yi-200K tunes and merging models with different ideas of positional encoding did not work well.
Thanks to Meta for Llama which kickstarted open weight models, thanks to Yi for the base model, thanks migtissera and the others who have fine-tuned Yi. Special shoutout to chargoddard for mergekit and the original frankenllama.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
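A minimal sketch of using this format with the `transformers` causal-LM API; the loading details and the example prompt are assumptions, not instructions from the author, and a 57B model at bf16 needs multiple GPUs or quantization:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lodrick-the-lafted/Kaiju-A-57B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "SYSTEM: You are a creative, helpful assistant.\n"
    "USER: Describe a kaiju rising from the sea in two sentences.\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```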
|
jeiku/best_test | jeiku | 2024-02-29T09:38:45Z | 717 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-01-22T02:18:01Z | Entry not found |
CorticalStack/mistral-7b-openhermes-sft | CorticalStack | 2024-02-18T21:06:07Z | 717 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"sft",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T14:47:17Z | ---
license: apache-2.0
tags:
- sft
dataset:
- teknium/openhermes
base_model:
- unsloth/mistral-7b-bnb-4bit
---
# mistral-7b-openhermes-sft
mistral-7b-openhermes-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes) dataset.
## Fine-tuning configuration
### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True
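A rough sketch of how these settings could map onto `peft` + TRL's `SFTTrainer`. This is an assumed reconstruction, not the author's actual training script; exact argument names vary across TRL versions, and the dataset formatting step is omitted:
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("teknium/openhermes", split="train")

peft_config = LoraConfig(r=256, lora_alpha=128, lora_dropout=0.0, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="mistral-7b-openhermes-sft",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=6,
    optim="adamw_torch_fused",
    max_steps=100,
    learning_rate=2e-4,
    weight_decay=0.1,
    lr_scheduler_type="linear",
)

trainer = SFTTrainer(
    model="unsloth/mistral-7b-bnb-4bit",  # 4-bit bnb base model
    train_dataset=dataset,
    peft_config=peft_config,
    max_seq_length=2048,
    args=args,
    # a formatting_func / dataset_text_field mapping the dataset columns to text is omitted
)
trainer.train()
```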
Trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Radiantloom/radintloom-mistral-7b-fusion-dpo | Radiantloom | 2024-02-20T15:10:14Z | 717 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T13:18:18Z | ---
library_name: transformers
license: apache-2.0
---
<img src="https://huggingface.co/Radiantloom/radintloom-mistral-7b-fusion/resolve/main/Radiantloom Mistral 7B Fusion.png" alt="Radiantloom Mistral 7B Fusion" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Radiantloom Mistral 7B Fusion DPO
This model is a finetuned version of [Radiantloom Mistral 7B Fusion](https://huggingface.co/Radiantloom/radintloom-mistral-7b-fusion). It was finetuned using Direct Preference Optimization (DPO). |
ChrisWilson011016/5HCBVeCNNxFz946u4yDVAPxNgme44KDRKZ9QvfW7S7F3fiGU_vgg | ChrisWilson011016 | 2024-03-04T18:48:59Z | 717 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-24T15:03:51Z | Entry not found |
bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF | bartowski | 2024-03-16T17:43:49Z | 717 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:Sao10K/Fimbulvetr-10.7B-v1",
"base_model:saishf/Kuro-Lotus-10.7B",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-16T17:27:36Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Sao10K/Fimbulvetr-10.7B-v1
- saishf/Kuro-Lotus-10.7B
model-index:
- name: Fimbulvetr-Kuro-Lotus-10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.95
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Fimbulvetr-Kuro-Lotus-10.7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
Original model: https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q8_0.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q8_0.gguf) | Q8_0 | 11.40GB | Extremely high quality, generally unneeded but max available quant. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q6_K.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q6_K.gguf) | Q6_K | 8.80GB | Very high quality, near perfect, *recommended*. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_M.gguf) | Q5_K_M | 7.59GB | High quality, very usable. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_S.gguf) | Q5_K_S | 7.39GB | High quality, very usable. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q5_0.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q5_0.gguf) | Q5_0 | 7.39GB | High quality, older format, generally not recommended. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf) | Q4_K_M | 6.46GB | Good quality, similar to 4.25 bpw. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_S.gguf) | Q4_K_S | 6.11GB | Slightly lower quality with small space savings. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q4_0.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q4_0.gguf) | Q4_0 | 6.07GB | Decent quality, older format, generally not recommended. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_L.gguf) | Q3_K_L | 5.65GB | Lower quality but usable, good for low RAM availability. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_M.gguf) | Q3_K_M | 5.19GB | Even lower quality. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_S.gguf) | Q3_K_S | 4.66GB | Low quality, not recommended. |
| [Fimbulvetr-Kuro-Lotus-10.7B-Q2_K.gguf](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF/blob/main/Fimbulvetr-Kuro-Lotus-10.7B-Q2_K.gguf) | Q2_K | 4.00GB | Extremely low quality, *not* recommended.
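To grab a single quant from Python instead of cloning the whole repo, something like this should work (the filename is taken from the table above):
```python
from huggingface_hub import hf_hub_download

# Download just the Q4_K_M quant into the current directory
path = hf_hub_download(
    repo_id="bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF",
    filename="Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```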
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
microsoft/xclip-large-patch14-kinetics-600 | microsoft | 2022-09-08T11:41:57Z | 716 | 4 | transformers | [
"transformers",
"pytorch",
"xclip",
"feature-extraction",
"vision",
"video-classification",
"en",
"arxiv:2208.02816",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | video-classification | 2022-09-08T11:33:28Z | ---
language: en
license: mit
tags:
- vision
- video-classification
model-index:
- name: nielsr/xclip-large-patch14-kinetics-600
results:
- task:
type: video-classification
dataset:
name: Kinetics 400
type: kinetics-400
metrics:
- type: top-1 accuracy
value: 88.3
- type: top-5 accuracy
value: 97.7
---
# X-CLIP (large-sized model)
X-CLIP model (large-sized, patch resolution of 14) trained fully-supervised on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP).
This model was trained using 8 frames per video, at a resolution of 224x224.
Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.

This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.
## Intended uses & limitations
You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#).
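The linked documentation is the authoritative reference; for orientation, zero-shot video classification with the `transformers` X-CLIP classes typically looks roughly like this (8 frames per clip to match how this checkpoint was trained; the random frames and label texts are placeholders):
```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

model_id = "microsoft/xclip-large-patch14-kinetics-600"
processor = XCLIPProcessor.from_pretrained(model_id)
model = XCLIPModel.from_pretrained(model_id)

# 8 frames of shape (height, width, 3); random data stands in for a real, sampled clip
video = list(np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8))
texts = ["playing the piano", "riding a bike", "cooking"]

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_video.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
```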
## Training data
This model was trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics).
### Preprocessing
The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247).
The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285).
During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
This model achieves a top-1 accuracy of 88.3% and a top-5 accuracy of 97.7%.
|
padmalcom/wav2vec2-large-emotion-detection-german | padmalcom | 2023-01-31T12:16:51Z | 716 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"voice",
"classification",
"emotion",
"speech",
"audio",
"audio-classification",
"de",
"dataset:emo-DB",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-01-31T11:28:54Z | ---
language:
- de
license: apache-2.0
tags:
- voice
- classification
- emotion
- speech
- audio
datasets:
- emo-DB
widget:
- src: >-
https://huggingface.co/padmalcom/wav2vec2-large-emotion-detection-german/resolve/main/test.wav
example_title: Sample 1
pipeline_tag: audio-classification
metrics:
- accuracy
---
This wav2vec2 based emotion detection model is trained on the [emo-DB dataset](http://emodb.bilderbar.info/start.html).
Code for training can be found [here](https://github.com/padmalcom/wav2vec2-emotion-detection-ger).
Emotion classes are:
- 0: 'anger'
- 1: 'boredom'
- 2: 'disgust'
- 3: 'fear'
- 4: 'happiness'
- 5: 'sadness'
- 6: 'neutral'
*inference.py* shows how the model can be used. |
xzuyn/GPT2-RPGPT-8.48M | xzuyn | 2023-06-28T15:06:20Z | 716 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:practicaldreamer/RPGPT_PublicDomain-alpaca",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-27T05:46:00Z | ---
language:
- en
pipeline_tag: text-generation
datasets:
- practicaldreamer/RPGPT_PublicDomain-alpaca
---
# Latest Version: *111,577* / *111,577* Steps (Epoch 1).
- 28,563,712 / 28,563,712 tokens seen (Epoch 1).
- 0 / 28,563,712 tokens seen (Epoch 2).
- 0 / 28,563,712 tokens seen (Epoch 3).
# Model Info:
- Trained from scratch.
- 8.48M parameters.
- 256 context length.
- Test model. Likely needs at least 512 context to function "properly".
- Trained with a dataset that overlaps by a quarter of the context length (Shifts by 64 tokens for each subset).
# Format:
```
<|characters|>
Nancy (Oliver Twist): Female, early 20s, ESFP, Cockney accent. Loyal...
Mr. Edward Hyde (Dr. Jekyll and Mr. Hyde): Male, late 30s, ESTP...
<|scenario|>
In an alternate Victorian London where the city's poor and downtrodden...
<|response|>
Nancy: *gently brushes her fingers across the worn book spine, before suddenly stopping as she feels another hand...
Mr. Edward Hyde: *glances at Nancy with a sinister grin, slowly pulling his hand back* No need to apologize, miss...
```
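A small prompting sketch via `transformers`, assuming the repository ships a compatible tokenizer; keep the 256-token context limit noted above in mind:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xzuyn/GPT2-RPGPT-8.48M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Start generation from the first section marker of the format above
inputs = tokenizer("<|characters|>\n", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```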
# Example Output:
Step 111,577. Input `<|characters|>` as a prompt, set max tokens to 256, amount to generate to 253. This generated up to `just our circumstances before us`. Then I set amount to generate to 128 to keep half of the text in context. This generated up to `A wise suggestion,`. I then lowered the amount to generate to 64. That generated up to the ending `know of our current situation?`.
```
<|characters|>
Mrs. Samsa (The Metamorphosis): Female, middle-aged, ISFJ, German accent, compassionate mother struggling to cope with her son's transformation, and eventually succumbs to the family's financial and emotional burdens.
<|scenario|>
In a twist of fate, Mrs. Samsa finds herself transported back in time to time and space. Evangelist, who is on an isolated haven where he encounters Mrs. Samsa, by a different tale. Mrs. Samsa, still burdened by the weight of his past actions, must confront the difficult path ahead.
Through their conversations, they find common ground in their own worlds, allowing them to continue seeking wisdom from each other and finding solace in one another's words. The dialogue between these two characters will offer insight into each other's worlds as well as how their experiences have shaped them in this whimsical world.
<|response|>
Mrs. Samsa: *approaches the peculiar sights around her, eyes widening in surprise* Oh dear, I couldn't help but notice you not! I've never seen my fair life, but I'm starting to see my son. Are you here in this peculiar place?
Evangelist: *smiles warmly at Mrs. Samsa* Yes, we are indeed more than just our circumstances before us. And it is your place of wisdom and understanding. *opens the book, his eyes sparkling with excitement*
Mrs. Samsa: *slowly opens a small book of the book* I must confess, Evangelist, I've never had a different view of this place. But it feels like this before our worlds find such things that we've discovered.
Evangelist: *nods thoughtfully* You possess great wisdom, Mrs. Samsa. It seems we are both searching for a way to escape this peculiar library. Perhaps that is a sign of my spiritual journey towards you.
Mrs. Samsa: *eyes widen in curiosity* A wise suggestion, Candide. I can't help but feel a sense of serenity amidst my own life.
Evangelist: *smiles warmly* Of course, Mrs. Samsa. The path to enlightenment is filled with joy and understanding. Now, tell me more about this ancient book. What do you need to know of our current situation?
```
# Config:
The learning rate may have been too high; not sure. The average loss at step 111,557 was 2.1.
```
batch_size: 1
dropout: 0
learning_rate: 0.0001
max_length: 256
n_embed: 256
n_head: 8
n_layer: 8
vocab_size: 8192
```
|
lorahub/flan_t5_large-quartz_read_passage_below_choose | lorahub | 2023-07-24T10:13:38Z | 716 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-24T10:13:28Z | ---
library_name: peft
---
|
SlyEcho/open_llama_7b_v2_gguf | SlyEcho | 2023-09-01T12:05:55Z | 716 | 3 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2023-08-31T16:41:07Z | ---
license: apache-2.0
---
# gguf versions of OpenLLaMa 7B v2
- Version: [version 2 final 1T tokens](https://github.com/openlm-research/open_llama#07072023)
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 1012 (6381d4e) or later
- [ggml version](https://huggingface.co/SlyEcho/open_llama_7b_v2_ggml)
## Perplexity on wiki.test.406
Coming soon... |
stablediffusionapi/stable-diffusion-xl-base-1.0 | stablediffusionapi | 2023-09-18T16:29:08Z | 716 | 0 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2023-09-18T16:22:37Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# stable-diffusion-xl-base-1.0 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "stable-diffusion-xl-base-1.0"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/stable-diffusion-xl-base-1.0)
Model link: [View model](https://stablediffusionapi.com/models/stable-diffusion-xl-base-1.0)
Credits: [View credits](https://civitai.com/?query=stable-diffusion-xl-base-1.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "stable-diffusion-xl-base-1.0",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
ColleenMacklin/gpt-neo-125M-couples_therapist_full | ColleenMacklin | 2024-03-09T18:41:20Z | 716 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-08T23:11:11Z | ---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-couples_therapist_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-couples_therapist_full
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9661
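As a hedged usage sketch (not part of the original card), the checkpoint can be loaded with the `transformers` text-generation pipeline; the prompt and sampling settings below are assumptions:
```python
# Minimal usage sketch; prompt and sampling settings are assumptions.
from transformers import pipeline
generator = pipeline("text-generation", model="ColleenMacklin/gpt-neo-125M-couples_therapist_full")
print(generator("Therapist:", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```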
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
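As a hedged illustration (not generated by the Trainer), these values map onto `transformers.TrainingArguments` roughly as follows; the output directory is an assumption:
```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="gpt-neo-125M-couples_therapist_full",  # assumption, not from the card
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```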
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 48 | 3.1405 |
| No log | 2.0 | 96 | 2.9985 |
| No log | 3.0 | 144 | 2.9661 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
|
wang7776/Llama-2-7b-chat-hf-10-attention-sparsity | wang7776 | 2024-02-05T18:26:50Z | 716 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"conversational",
"en",
"arxiv:2306.11695",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-26T19:23:58Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta
website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this
form. Requests will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: other
---
# Overview
This model has been pruned to 10% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
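As a hedged check of that claim (not part of the original card), the sparsity of the attention projections can be measured after loading the checkpoint; the dtype choice is an assumption:
```python
# Hedged sketch: measure the zero-weight fraction in the attention projections.
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
    "wang7776/Llama-2-7b-chat-hf-10-attention-sparsity",  # gated repo: access must be granted on the Hub first
    torch_dtype=torch.float16,
)
zeros, total = 0, 0
for name, param in model.named_parameters():
    # Wanda here prunes only the attention projections (q/k/v/o).
    if any(p in name for p in ("q_proj", "k_proj", "v_proj", "o_proj")):
        zeros += (param == 0).sum().item()
        total += param.numel()
print(f"attention weight sparsity: {zeros / total:.2%}")
```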
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
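As a hedged illustration of that single-turn template (not Meta's reference code), a prompt might be assembled like this; the system and user messages are placeholders, and the tokenizer normally adds the `BOS` token itself:
```python
# Hedged single-turn Llama-2-chat prompt builder following the tags described above.
system_prompt = "You are a helpful assistant."          # placeholder
user_message = "How do I brew a good cup of coffee?"    # placeholder
prompt = (
    f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
print(prompt)  # BOS/EOS tokens are added by the tokenizer around each turn
```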
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| |
nlpguy/AlloyIngotNeoX | nlpguy | 2024-03-04T13:47:34Z | 716 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:bardsai/jaskier-7b-dpo-v4.3",
"base_model:Gille/StrangeMerges_20-7B-slerp",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T09:33:54Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bardsai/jaskier-7b-dpo-v4.3
- Gille/StrangeMerges_20-7B-slerp
model-index:
- name: AlloyIngotNeoX
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 74.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngotNeoX
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
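As a toy, hedged illustration of the idea (not mergekit's actual implementation), SLERP interpolates two weight tensors along the arc between them rather than linearly:
```python
# Toy SLERP between two weight tensors; t is the interpolation factor from the config below.
import torch
def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w_a.flatten(), w_b.flatten()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel tensors: fall back to linear interpolation
        return ((1.0 - t) * a + t * b).view_as(w_a)
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.view_as(w_a)
```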
### Models Merged
The following models were included in the merge:
* [bardsai/jaskier-7b-dpo-v4.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v4.3)
* [Gille/StrangeMerges_20-7B-slerp](https://huggingface.co/Gille/StrangeMerges_20-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: Gille/StrangeMerges_20-7B-slerp
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.1, 0.3, 0.5, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.7, 0.5, 0.3, 0.1]
- value: 0.45
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: Gille/StrangeMerges_20-7B-slerp
- layer_range: [0, 32]
model:
model:
path: bardsai/jaskier-7b-dpo-v4.3
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.21|
|AI2 Reasoning Challenge (25-Shot)|74.32|
|HellaSwag (10-Shot) |89.07|
|MMLU (5-Shot) |64.97|
|TruthfulQA (0-shot) |74.57|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |69.83|
|
mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF | mradermacher | 2024-05-06T05:49:05Z | 716 | 1 | transformers | [
"transformers",
"gguf",
"Safetensors",
"mistral",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"athirdpath/NSFW_DPO_Noromaid-7b",
"safetensors",
"text-generation",
"en",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us",
"base_model:MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1",
"license:apache-2.0"
] | text-generation | 2024-03-28T10:26:24Z | ---
base_model: MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- athirdpath/NSFW_DPO_Noromaid-7b
- transformers
- safetensors
- mistral
- text-generation
- en
- dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2
- dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- license:cc-by-nc-4.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
---
## About
static quants of https://huggingface.co/MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
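As a hedged sketch (not part of this card), a single quant can be fetched and run with the `llama-cpp-python` bindings, for example the Q4_K_M file from the table below:
```python
# Hedged sketch: download one quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
path = hf_hub_download(
    repo_id="mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF",
    filename="NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```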
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Azazelle/L3-RP_io | Azazelle | 2024-06-05T23:38:06Z | 716 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:ResplendentAI/Aura_Uncensored_l3_8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:ResplendentAI/Kei_Llama3_8B",
"base_model:Undi95/Llama-3-Unholy-8B",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-24T02:37:53Z | ---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
base_model:
- ResplendentAI/Aura_Uncensored_l3_8B
- meta-llama/Meta-Llama-3-8B-Instruct
- ResplendentAI/Kei_Llama3_8B
- Undi95/Llama-3-Unholy-8B
- vicgalle/Roleplay-Llama-3-8B
model-index:
- name: L3-RP_io
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.9
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/L3-RP_io
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [ResplendentAI/Aura_Uncensored_l3_8B](https://huggingface.co/ResplendentAI/Aura_Uncensored_l3_8B)
* [ResplendentAI/Kei_Llama3_8B](https://huggingface.co/ResplendentAI/Kei_Llama3_8B)
* [Undi95/Llama-3-Unholy-8B](https://huggingface.co/Undi95/Llama-3-Unholy-8B)
* [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ResplendentAI/Aura_Uncensored_l3_8B
parameters:
density: 0.4
weight: 0.4
- model: ResplendentAI/Kei_Llama3_8B
parameters:
density: 0.4
weight: 0.4
- model: Undi95/Llama-3-Unholy-8B
parameters:
density: 0.3
weight: 0.2
- model: vicgalle/Roleplay-Llama-3-8B
parameters:
density: 0.3
weight: 0.3
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
rescale: true
normalize: false
int8_mask: true
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Azazelle__L3-RP_io)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.88|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |79.86|
|MMLU (5-Shot) |67.92|
|TruthfulQA (0-shot) |52.90|
|Winogrande (5-shot) |75.69|
|GSM8k (5-shot) |67.85|
|
starsy/Llama-3-70B-Instruct-Gradient-1048k-AWQ | starsy | 2024-05-17T07:19:45Z | 716 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | 2024-05-14T10:38:26Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- meta
- llama-3
license: llama3
---
# Introduction
**This is a 4-bit [AWQ(GEMM)](https://github.com/casper-hansen/AutoAWQ) quantized model of [gradientai/Llama-3-70B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k).**
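As a hedged usage sketch (not part of the original card), the quantized checkpoint can be loaded through `transformers` with the `autoawq` package installed; the prompt and generation settings are assumptions:
```python
# Hedged sketch: load the AWQ checkpoint via transformers (requires the autoawq package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
repo_id = "starsy/Llama-3-70B-Instruct-Gradient-1048k-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"  # 4-bit weights are still roughly 35-40 GB
)
inputs = tokenizer("The key idea behind RoPE theta scaling is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```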
---
README of [original model](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k):
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 70B Instruct Gradient 1048K
Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
[Join our Discord](https://discord.com/invite/2QVy2qt2mf)
This model extends LLama-3 70B's context length from 8k to > 1048K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 34M tokens for this stage, and ~430M tokens total for all stages, which is < 0.003% of Llama-3's original pre-training data.
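As a quick, hedged check (not from the card), the context-extension settings can be read from the released config; the values noted in the comments are expectations, not measurements:
```python
# Hedged sketch: inspect the extended-context settings of the base checkpoint.
from transformers import AutoConfig
cfg = AutoConfig.from_pretrained("gradientai/Llama-3-70B-Instruct-Gradient-1048k")
print(cfg.max_position_embeddings)  # expected to be on the order of 1048k
print(cfg.rope_theta)               # the adjusted RoPE theta described above
```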

**Approach:**
- [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) as the base
- NTK-aware interpolation [4] following scaling laws [2] to set optimal schedule for RoPE theta
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [1] (See details below)
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [5] to scalably and efficiently train on very long contexts on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
We layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices.
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [3], following a similar recipe for data augmentation to [1].
**Progressive Training Details:**
| | 65K | 262K | 524K | 1048k |
|------------------------|-------------|-------------|-------------|-------------|
| Initialize From | Llama-3-70B-Instruct | 65K | 262K | 524K |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15296098 | 207112184 | 1062356830 | 3580165449 |
| Batch Size | 64 | 16 | 8 | 1 |
| Gradient Accumulation Steps | 1 | 1 | 2 | 4 |
| Steps | 20 | 25 | 25 | 8 |
| Total Tokens | 83886080 | 104857600 | 209715200 | 33554432 |
| Learning rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 512 | 512 | 512 | 128 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 100 | 170 | 284 | 516 |
**Evaluation Details:**
```
EVAL_MAX_CONTEXT_LENGTH=1140200
EVAL_MIN_CONTEXT_LENGTH=100
EVAL_CONTEXT_INTERVAL=86675
EVAL_DEPTH_INTERVAL=0.2
EVAL_NUM_SAMPLES=1
EVAL_RND_NUMBER_DIGITS=8
HAYSTACK:
EVAL_GENERATOR_TOKENS=925000
```
Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals).
**Quants:**
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[2] Liu, Xiaoran, et al. "Scaling laws of rope-based extrapolation." arXiv preprint arXiv:2310.05209 (2023).
[3] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan
Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling
high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
[4] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[5] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
AbeHou/SemStamp-c4-sbert | AbeHou | 2024-05-19T04:20:38Z | 716 | 1 | transformers | [
"transformers",
"pytorch",
"mpnet",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-17T23:05:23Z | Entry not found |
r1char9/T5_chat | r1char9 | 2024-06-14T21:12:02Z | 716 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"T5",
"russian",
"text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-06-12T20:46:46Z | ---
license: mit
language:
- ru
pipeline_tag: text2text-generation
tags:
- T5
- russian
- text2text-generation
- text-generation
---
The [ruT5-base](https://huggingface.co/ai-forever/ruT5-base) model has been fine-tuned for the __text-to-text__ task and is intended for __Russian__ text.
## Uses
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
chat_checkpoint = 'r1char9/T5_chat'
chat_model = T5ForConditionalGeneration.from_pretrained(chat_checkpoint)
chat_tokenizer = AutoTokenizer.from_pretrained(chat_checkpoint)
text = 'Что самое главное в человеке ?'  # "What is the most important thing in a person?"
def chat_fun(text: str) -> str:
    tokenized_sentence = chat_tokenizer(text, return_tensors='pt', truncation=True)
    res = chat_model.generate(**tokenized_sentence, num_beams=2, max_length=100)
    return chat_tokenizer.decode(res[0], skip_special_tokens=True)
text = chat_fun(text=text)
# Самое главное в человеке - это его любовь и уважение к другим людям.
# Это означает, что он должен быть искренним и искренним в своих мыслях и чувствах,
# а также готов жертвовать своим личным и профессиональным идеалами и ценностям, чтобы достичь своих целей.
```
|
Deathsquad10/TL3-Testing | Deathsquad10 | 2024-06-20T11:38:46Z | 716 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-14T12:15:37Z | ---
license: apache-2.0
---
|
RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf | RichardErkhov | 2024-06-22T23:38:39Z | 716 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-22T23:34:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
smol_llama-220M-openhermes - GGUF
- Model creator: https://huggingface.co/BEE-spoke-data/
- Original model: https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-220M-openhermes.Q2_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q2_K.gguf) | Q2_K | 0.09GB |
| [smol_llama-220M-openhermes.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.IQ3_XS.gguf) | IQ3_XS | 0.1GB |
| [smol_llama-220M-openhermes.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.IQ3_S.gguf) | IQ3_S | 0.1GB |
| [smol_llama-220M-openhermes.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q3_K_S.gguf) | Q3_K_S | 0.1GB |
| [smol_llama-220M-openhermes.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.IQ3_M.gguf) | IQ3_M | 0.1GB |
| [smol_llama-220M-openhermes.Q3_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q3_K.gguf) | Q3_K | 0.11GB |
| [smol_llama-220M-openhermes.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q3_K_M.gguf) | Q3_K_M | 0.11GB |
| [smol_llama-220M-openhermes.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q3_K_L.gguf) | Q3_K_L | 0.11GB |
| [smol_llama-220M-openhermes.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.IQ4_XS.gguf) | IQ4_XS | 0.12GB |
| [smol_llama-220M-openhermes.Q4_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q4_0.gguf) | Q4_0 | 0.12GB |
| [smol_llama-220M-openhermes.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.IQ4_NL.gguf) | IQ4_NL | 0.12GB |
| [smol_llama-220M-openhermes.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q4_K_S.gguf) | Q4_K_S | 0.12GB |
| [smol_llama-220M-openhermes.Q4_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q4_K.gguf) | Q4_K | 0.13GB |
| [smol_llama-220M-openhermes.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q4_K_M.gguf) | Q4_K_M | 0.13GB |
| [smol_llama-220M-openhermes.Q4_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q4_1.gguf) | Q4_1 | 0.13GB |
| [smol_llama-220M-openhermes.Q5_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q5_0.gguf) | Q5_0 | 0.14GB |
| [smol_llama-220M-openhermes.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q5_K_S.gguf) | Q5_K_S | 0.14GB |
| [smol_llama-220M-openhermes.Q5_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q5_K.gguf) | Q5_K | 0.15GB |
| [smol_llama-220M-openhermes.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q5_K_M.gguf) | Q5_K_M | 0.15GB |
| [smol_llama-220M-openhermes.Q5_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q5_1.gguf) | Q5_1 | 0.16GB |
| [smol_llama-220M-openhermes.Q6_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q6_K.gguf) | Q6_K | 0.17GB |
| [smol_llama-220M-openhermes.Q8_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-openhermes-gguf/blob/main/smol_llama-220M-openhermes.Q8_0.gguf) | Q8_0 | 0.22GB |
Original model description:
---
license: apache-2.0
datasets:
- teknium/openhermes
base_model: BEE-spoke-data/smol_llama-220M-GQA
inference:
parameters:
do_sample: true
renormalize_logits: true
temperature: 0.25
top_p: 0.95
top_k: 50
min_new_tokens: 2
max_new_tokens: 96
repetition_penalty: 1.03
no_repeat_ngram_size: 5
epsilon_cutoff: 0.0008
widget:
- text: "Below is an instruction that describes a task, paired with an input that\
\ provides further context. Write a response that appropriately completes the\
\ request. \n \n### Instruction: \n \nWrite an ode to Chipotle burritos.\
\ \n \n### Response: \n"
example_title: burritos
model-index:
- name: smol_llama-220M-openhermes
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 25.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 28.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes
name: Open LLM Leaderboard
---
# BEE-spoke-data/smol_llama-220M-openhermes
> Please note that this is an experiment, and the model has limitations because it is smol.
prompt format is alpaca
```
Below is an instruction that describes a task, paired with an input that
provides further context. Write a response that appropriately completes
the request.
### Instruction:
How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming.
### Inputs:
### Response:
```
It was trained with inputs, so if you have an input (such as some text to ask a question about), include it under `### Inputs:`.
## Example
Output on the text above ^. The inference API is set to sample with low temp so you should see (_at least slightly_) different generations each time.

Note that the inference API parameters used here are an initial educated guess, and may be updated over time:
```yml
inference:
parameters:
do_sample: true
renormalize_logits: true
temperature: 0.25
top_p: 0.95
top_k: 50
min_new_tokens: 2
max_new_tokens: 96
repetition_penalty: 1.03
no_repeat_ngram_size: 5
epsilon_cutoff: 0.0008
```
Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params!
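For instance, here is a minimal sketch (not from the original card) that loads the original HF checkpoint with `transformers` and passes the same generation parameters shown above; the instruction/input text is only an illustrative placeholder:

```python
# Minimal illustrative sketch (assumes the original HF checkpoint; the
# instruction/input text below is a placeholder, not from the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-220M-openhermes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\nSummarize the input in one sentence.\n\n"
    "### Inputs:\nChipotle burritos are large flour tortillas filled with rice, "
    "beans, salsa, and a choice of meat.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.03,
    no_repeat_ngram_size=5,
    epsilon_cutoff=0.0008,
)
# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```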
## Data
Note that **this checkpoint** was fine-tuned on `teknium/openhermes`, which is generated/synthetic data by an OpenAI model. This means usage of this checkpoint should follow their terms of use: https://openai.com/policies/terms-of-use
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-openhermes)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.34|
|AI2 Reasoning Challenge (25-Shot)|25.17|
|HellaSwag (10-Shot) |28.98|
|MMLU (5-Shot) |26.17|
|TruthfulQA (0-shot) |43.08|
|Winogrande (5-shot) |52.01|
|GSM8k (5-shot) | 0.61|
|
Dreyzin/DialoGPT-medium-avatar | Dreyzin | 2022-01-19T04:49:52Z | 715 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# Uncle Iroh DialoGPT Model |
abhiramtirumala/DialoGPT-sarcastic | abhiramtirumala | 2021-06-30T19:52:43Z | 715 | 5 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z |
---
pipeline_tag: conversational
---
This model is a fine-tuned version of Microsoft/DialoGPT-medium trained to create sarcastic responses from the dataset "Sarcasm on Reddit" located [here](https://www.kaggle.com/danofer/sarcasm). |
jhonparra18/petro-twitter-assistant | jhonparra18 | 2023-02-18T22:55:41Z | 715 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"es",
"dataset:jhonparra18/petro-tweets",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-18T22:15:51Z | ---
tags:
- generated_from_trainer
model-index:
- name: petro-twitter-assistant
results: []
widget:
- text: Mi gobierno de la Colombia humana es
datasets:
- jhonparra18/petro-tweets
language:
- es
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# petro-twitter-assistant
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on the [jhonparra18/petro-tweets](https://huggingface.co/datasets/jhonparra18/petro-tweets) dataset listed in the metadata.
It achieves the following results on the evaluation set:
- Loss: 3.0562
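
A minimal generation sketch (not part of the auto-generated card), using the widget prompt from the metadata above:

```python
# Minimal sketch (not part of the auto-generated card): sample a continuation
# with the fine-tuned checkpoint, using the widget prompt from the metadata.
from transformers import pipeline

generator = pipeline("text-generation", model="jhonparra18/petro-twitter-assistant")
result = generator("Mi gobierno de la Colombia humana es",
                   max_new_tokens=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```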
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1263 | 2.3 | 1000 | 3.0679 |
| 2.8236 | 4.6 | 2000 | 3.0305 |
| 2.6661 | 6.9 | 3000 | 3.0411 |
| 2.5905 | 9.2 | 4000 | 3.0562 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1 |
Salesforce/codet5p-220m-py | Salesforce | 2023-05-16T00:35:12Z | 715 | 10 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2023-05-15T09:57:52Z | ---
license: bsd-3-clause
---
# CodeT5+ 220M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (i.e. InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-220m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 12.0% pass@1 on HumanEval in the zero-shot setting, which outperforms much larger LLMs such as Incoder 1.3B’s 8.9%, GPT-Neo 2.7B's 6.4%, and GPT-J 6B's 11.6%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
922-CA/l2-7b-sayori-ddlc-v0.1-gguf | 922-CA | 2023-09-09T06:28:12Z | 715 | 0 | null | [
"gguf",
"license:llama2",
"region:us"
] | null | 2023-09-08T11:01:16Z | ---
license: llama2
---
GGUFs of [l2-7b-sayori-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-sayori-ddlc-v0.1). (Primarily tested and run with Koboldcpp v1.41+).
QLora (hf and GGML) [here](https://huggingface.co/922-CA/sayori-lm-lora-tests/tree/main/l2-7b-sayori-v0.1). |
paulml/NeuralOmniBeagleMBX-v3-7B | paulml | 2024-02-05T13:32:52Z | 715 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralOmniBeagle-7B",
"flemmingmiguel/MBX-7B-v3",
"base_model:mlabonne/NeuralOmniBeagle-7B",
"base_model:flemmingmiguel/MBX-7B-v3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T13:27:34Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralOmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
base_model:
- mlabonne/NeuralOmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
license: cc-by-nc-4.0
---
# NeuralOmniBeagleMBX-v3-7B
NeuralOmniBeagleMBX-v3-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralOmniBeagle-7B
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
merge_method: slerp
base_model: flemmingmiguel/MBX-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/NeuralOmniBeagleMBX-v3-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
raidhon/coven_7b_128k_orpo_alpha | raidhon | 2024-05-05T19:25:23Z | 715 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"large-language-model",
"orpo",
"conversational",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T14:48:15Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation
- large-language-model
- orpo
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Coven 7B 128K ORPO
description: "Coven 7B 128K ORPO is a derivative of Mistral-7B-Instruct-v0.2, fine-tuned to perform specialized tasks involving deeper understanding and reasoning over context. This model exhibits strong capabilities in both general language understanding and task-specific challenges."
results:
- task:
type: text-generation
name: Winogrande Challenge
dataset:
name: Winogrande
type: winogrande_xl
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 77.82
name: accuracy
- task:
type: text-generation
name: TruthfulQA Generation
dataset:
name: TruthfulQA
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: accuracy
value: 49.55
name: accuracy
- task:
type: text-generation
name: PIQA Problem Solving
dataset:
name: PIQA
type: piqa
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 82.05
name: accuracy
- task:
type: text-generation
name: OpenBookQA Facts
dataset:
name: OpenBookQA
type: openbookqa
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 34.60
name: accuracy
- task:
type: text-generation
name: MMLU Knowledge Test
dataset:
name: MMLU
type: mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 63.00
name: accuracy
- task:
type: text-generation
name: Hellaswag Contextual Completions
dataset:
name: Hellaswag
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: accuracy
value: 65.37
name: accuracy
- task:
type: text-generation
name: GSM8k Mathematical Reasoning
dataset:
name: GSM8k
type: gsm8k
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 72.18
name: exact match (strict)
- type: accuracy
value: 72.63
name: exact match (flexible)
- task:
type: text-generation
name: BoolQ Question Answering
dataset:
name: BoolQ
type: boolq
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 87.43
name: accuracy
- task:
type: text-generation
name: ARC Challenge
dataset:
name: ARC Challenge
type: ai2_arc
split: test
args:
num_few_shot: 25
metrics:
- type: accuracy
value: 59.64
name: accuracy
---
# 🧙 Coven 7B 128K ORPO
Coven 7B 128K is an improved iteration of Mistral-7B-Instruct-v0.2, refined to expand its processing capabilities and better align its generation preferences. The model's context window is extended to 128K tokens using the [Yarn](https://github.com/jquesnelle/yarn) technique, which allows it to process much longer inputs and handle more complex language scenarios. In addition, Coven 7B 128K ORPO was fine-tuned with ORPO (Monolithic Preference Optimization without Reference Model). ORPO simplifies the fine-tuning process by directly optimizing the odds ratio between favored and disfavored generation styles, improving model performance without the need for a separate preference-alignment step.
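
For intuition only, here is a rough sketch of the published ORPO objective (not the actual training code used for this checkpoint; the `lam` weighting and the use of length-averaged sequence log-probabilities are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor,
              nll_chosen: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Hypothetical helper: ORPO objective = SFT loss on the chosen response
    plus a weighted log-odds-ratio penalty, with no reference model needed."""
    # odds(y|x) = p / (1 - p); work in log space, where logp_* are the
    # length-averaged sequence log-probabilities (always < 0).
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))
    log_odds_ratio = F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return nll_chosen - lam * log_odds_ratio.mean()
```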
### Eval
| Task | Model | Metric | Value | Change (%) |
|---------------------|-------------------------|-------------------|----------|------------------------------|
| Winogrande | Mistral-7B-Instruct-v0.2| Accuracy | 73.64% | - |
| | Coven 7B 128K ORPO | Accuracy | 77.82% | +5.67% |
| TruthfulQA | Mistral-7B-Instruct-v0.2| Accuracy | 59.54% | - |
| | Coven 7B 128K ORPO | Accuracy | 49.55% | -16.78% |
| PIQA | Mistral-7B-Instruct-v0.2| Accuracy | 80.03% | - |
| | Coven 7B 128K ORPO | Accuracy | 82.05% | +2.52% |
| OpenBookQA | Mistral-7B-Instruct-v0.2| Accuracy | 36.00% | - |
| | Coven 7B 128K ORPO | Accuracy | 34.60% | -3.89% |
| | Mistral-7B-Instruct-v0.2| Accuracy Normalized| 45.20% | - |
| | Coven 7B 128K ORPO | Accuracy Normalized| 48.00% | +6.19% |
| MMLU | Mistral-7B-Instruct-v0.2| Accuracy | 58.79% | - |
| | Coven 7B 128K ORPO | Accuracy | 63.00% | +7.16% |
| Hellaswag | Mistral-7B-Instruct-v0.2| Accuracy | 66.08% | - |
| | Coven 7B 128K ORPO | Accuracy | 65.37% | -1.08% |
| | Mistral-7B-Instruct-v0.2| Accuracy Normalized| 83.68% | - |
| | Coven 7B 128K ORPO | Accuracy Normalized| 84.29% | +0.73% |
| GSM8K (Strict) | Mistral-7B-Instruct-v0.2| Exact Match | 41.55% | - |
| | Coven 7B 128K ORPO | Exact Match | 72.18% | +73.65% |
| GSM8K (Flexible) | Mistral-7B-Instruct-v0.2| Exact Match | 41.93% | - |
| | Coven 7B 128K ORPO | Exact Match | 72.63% | +73.29% |
| BoolQ | Mistral-7B-Instruct-v0.2| Accuracy | 85.29% | - |
| | Coven 7B 128K ORPO | Accuracy | 87.43% | +2.51% |
| ARC Easy | Mistral-7B-Instruct-v0.2| Accuracy | 81.36% | - |
| | Coven 7B 128K ORPO | Accuracy | 85.02% | +4.50% |
| | Mistral-7B-Instruct-v0.2| Accuracy Normalized| 76.60% | - |
| | Coven 7B 128K ORPO | Accuracy Normalized| 82.95% | +8.29% |
| ARC Challenge | Mistral-7B-Instruct-v0.2| Accuracy | 54.35% | - |
| | Coven 7B 128K ORPO | Accuracy | 59.64% | +9.74% |
| | Mistral-7B-Instruct-v0.2| Accuracy Normalized| 55.80% | - |
| | Coven 7B 128K ORPO | Accuracy Normalized| 61.69% | +10.52% |
## Model Details
* **Model name**: Coven 7B 128K ORPO alpha
* **Fine-tuned by**: raidhon
* **Base model**: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* **Parameters**: 7B
* **Context**: 128K
* **Language(s)**: Multilingual
* **License**: Apache2.0
## 💻 Usage
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="raidhon/coven_7b_128k_orpo_alpha", torch_dtype=torch.float16, device_map="auto")
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=4096, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
svjack/Genshin_Impact_Qwen_1_5_Plot_Engine_Step_Json_Short_lora_merged | svjack | 2024-06-02T12:47:26Z | 715 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-29T05:03:12Z | # 🤭 Please refer to https://github.com/svjack/Genshin-Impact-Character-Chat to get more info
# Install
```bash
pip install peft transformers bitsandbytes ipykernel rapidfuzz
```
# Run by transformers
```python
from transformers import TextStreamer, AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
from rapidfuzz import fuzz
from IPython.display import clear_output
tokenizer = AutoTokenizer.from_pretrained("svjack/Genshin_Impact_Qwen_1_5_Plot_Engine_Step_Json_Short_lora_merged",)
qw_model = AutoModelForCausalLM.from_pretrained("svjack/Genshin_Impact_Qwen_1_5_Plot_Engine_Step_Json_Short_lora_merged", load_in_4bit = True)
qw_model = qw_model.eval()
streamer = TextStreamer(tokenizer)
def qwen_hf_predict(messages, qw_model = qw_model,
tokenizer = tokenizer, streamer = streamer,
do_sample = True,
top_p = 0.95,
top_k = 40,
max_new_tokens = 2070,
max_input_length = 3500,
temperature = 0.9,
repetition_penalty = 1.0,
device = "cuda"):
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt",
add_generation_prompt=True
)
model_inputs = encodeds.to(device)
generated_ids = qw_model.generate(model_inputs, max_new_tokens=max_new_tokens,
do_sample=do_sample,
streamer = streamer,
top_p = top_p,
top_k = top_k,
temperature = temperature,
repetition_penalty = repetition_penalty,
)
out = tokenizer.batch_decode(generated_ids)[0].split("<|im_start|>assistant")[-1].replace("<|im_end|>", "").strip()
return out
def run_step_infer_times(x, times = 5, temperature = 0.01,
repetition_penalty = 1.0,
sim_val = 70
):
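    # Generate up to `times` candidate next steps; keep a step (and append it to
    # the prompt) only if its RapidFuzz similarity to every previously kept step
    # is below `sim_val`, so near-duplicate continuations are skipped.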
req = []
for _ in range(times):
clear_output(wait = True)
out = qwen_hf_predict([
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": x
},
],
repetition_penalty = repetition_penalty,
temperature = temperature,
max_new_tokens = 2070,
max_input_length = 6000,
)
if req:
val = max(map(lambda x: fuzz.ratio(x, out), req))
#print(val)
#print(req)
if val < sim_val:
req.append(out.strip())
x = x.strip() + "\n" + out.strip()
else:
req.append(out.strip())
x = x.strip() + "\n" + out.strip()
return req
out_l = run_step_infer_times(
'''
故事标题:为了没有眼泪的明天
故事背景:旅行者与琴、派蒙在蒙德城中经历了一系列事件,从元素流动回归、处理外交问题到对抗魔龙和寻找解决之道。他们偶遇吟游诗人温迪,后者提供了关于风神与巨龙的关键信息,并提出了借琴解救蒙德的计划。
参与角色:派蒙、旅行者、琴、丽莎、温迪、歌特琳德
''',
temperature=0.1,
repetition_penalty = 1.0,
times = 10
)
clear_output(wait = True)
print("\n".join(out_l))
```
# Output
```
{'参与者1': '派蒙', '参与者2': '旅行者', '当前故事背景': '两人在蒙德城中,派蒙询问旅行者关于元素流动回归的问题,旅行者解释了这个现象。琴和丽莎出现,讨论了风神与巨龙的战斗以及蒙德的现状。琴提出要帮助蒙德,但丽莎对此表示怀疑。'}
{'参与者1': '丽莎', '参与者2': '琴', '当前故事背景': '丽莎质疑琴的计划,认为蒙德不需要帮助。琴反驳了她的观点,并表示会解决风神的问题。'}
{'参与者1': '派蒙', '参与者2': '温迪', '当前故事背景': '派蒙和旅行者在蒙德城中偶遇吟游诗人温迪,温迪提供了关于风神与巨龙战斗的线索,以及关于蒙德的未来规划。'}
{'参与者1': '温迪', '参与者2': '派蒙', '当前故事背景': '温迪解释了风神与巨龙战斗的背景,以及蒙德的现状,暗示了蒙德的未来可能有改变。'}
{'参与者1': '派蒙', '参与者2': '歌特琳德', '当前故事背景': '派蒙询问歌特琳德关于风神和巨龙战斗的细节,歌特琳德提供了这些信息。'}
```
```python
out_l = run_step_infer_times(
'''
故事标题:归乡
故事背景:在须弥城门口,派蒙与纳西妲偶遇并帮助一只昏迷的元素生命找寻家园。过程中揭示了这只生物并非普通的蕈兽,而是元素生物,并且它们曾受到过‘末日’的影响,家园被侵蚀。纳西妲回忆起晶体里的力量可能与一个预言有关,为了拯救它们的家园,她必须解决‘禁忌知识’问题,但这个过程对她自身也会产生干扰。
参与角色:派蒙、纳西妲、浮游水蕈兽、旅行者
''',
temperature=0.1,
repetition_penalty = 1.0,
times = 10
)
clear_output(wait = True)
print("\n".join(out_l))
```
# Output
```
{'参与者1': '派蒙', '参与者2': '纳西妲', '当前故事背景': '在须弥城门口,派蒙和纳西妲偶遇了昏迷的元素生命浮游水蕈兽。纳西妲询问了它的名字,并决定带它去寻找它的家园。'}
{'参与者1': '派蒙', '参与者2': '纳西妲', '当前故事背景': '派蒙对纳西妲的决定表示疑惑,但同意跟随她去寻找家园。'}
{'参与者1': '纳西妲', '参与者2': '旅行者', '当前故事背景': '纳西妲解释了晶体里的力量可能与一个预言有关,她需要解决‘禁忌知识’问题来拯救元素生物的家园。'}
{'参与者1': '浮游水蕈兽', '参与者2': '旅行者', '当前故事背景': '浮游水蕈兽对旅行者的出现感到惊讶,但愿意跟随纳西妲去寻找家园。'}
```
|
timm/volo_d5_224.sail_in1k | timm | 2024-02-10T23:44:36Z | 714 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.13112",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-13T06:03:35Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for volo_d5_224.sail_in1k
A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 295.5
- GMACs: 72.4
- Activations (M): 118.1
- Image size: 224 x 224
- **Papers:**
- VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('volo_d5_224.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'volo_d5_224.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{yuan2022volo,
title={Volo: Vision outlooker for visual recognition},
author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2022},
publisher={IEEE}
}
```
|
cenkersisman/gpt2-turkish-900m | cenkersisman | 2024-03-29T15:11:12Z | 714 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-15T11:28:18Z | ---
widget:
- text: 'fransa''nın başkenti'
example_title: fransa'nın başkenti
- text: 'ingiltere''nın başkenti'
example_title: ingiltere'nin başkenti
- text: 'italya''nın başkenti'
example_title: italya'nın başkenti
- text: 'moğolistan''ın başkenti'
example_title: moğolistan'ın başkenti
- text: 'amazon ormanlarının bulunduğu ülke olan'
example_title: amazon ormanlarının bulunduğu ülke olan
- text: 'avrupa''yı asya''ya bağlayan şehir'
example_title: avrupa'yı asya'ya bağlayan şehir
- text: 'zebraların yaşadığı kıta olan'
example_title: zebraların yaşadığı kıta olan
- text: 'fenerbahçe''nin ezeli rakibi olan'
example_title: fenerbahçe'nin ezeli rakibi olan
- text: 'tek bacaklı kurbağa'
example_title: tek bacaklı kurbağa
- text: 'rize''de yağmur'
example_title: rize'de yağmur
- text: 'hayatın anlamı'
example_title: hayatın anlamı
- text: 'saint-joseph'
example_title: saint-joseph
- text: 'renk isimleri şunlardır'
example_title: renk isimleri şunlardır
- text: 'iklim değişikliği'
example_title: iklim değişikliği
- text: 'tuzlu yiyecekler arasında'
example_title: tuzlu yiyecekler arasında
language:
- tr
---
# Model
GPT-2 Turkish Model
### Model Description
The GPT-2 Turkish Model is a large language model (LLM) specialized for the Turkish language.
Built on the GPT-2 architecture with a purpose-built tokenizer, it represents a Turkish language model that can generate human-like text from a given starting prompt and was trained on a large Turkish text corpus.
The model was trained on a 900-million-character Wikipedia corpus. Sentences in the training set were capped at 128 tokens (token = word stem plus affixes), so the length of the sentences it generates is limited.
A tokenizer suited to Turkish syllable structure was used, and the model was trained for 7.5 million steps, roughly 154 epochs.
Training runs on an Nvidia GeForce RTX 3050 GPU with 4 GB of memory; 16 GB of shared GPU memory is also used, bringing total memory usage during training to 20 GB.
## How to Use the Model
IMPORTANT: the model is case-sensitive, so the prompt must be written entirely in lowercase.
```python
# Example code for running inference with the model
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model_name = "cenkersisman/gpt2-turkish-900m"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
prompt = "okyanusun derinliklerinde bulunan"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
## Training Curves



## Limitations and Biases
This model was trained as an autoregressive language model, meaning its core function is to take a text sequence and predict the next token. Although language models are widely used for many tasks beyond this, there are still many unknowns about this work.
The model was trained on a dataset known to contain profanity, explicit material, and otherwise offensive text. Depending on your use case, it may produce socially unacceptable output.
As with all language models, it is difficult to predict in advance how the model will respond to a particular input, and offensive content may be produced without warning. Before publishing results, it is recommended that humans review or filter the outputs, both to remove undesirable content and to improve their quality.
|
liaaron1/llama-2-7b-bible-ggml-f16-q4_0 | liaaron1 | 2023-08-28T12:59:57Z | 714 | 2 | null | [
"gguf",
"region:us"
] | null | 2023-08-28T12:49:15Z | Entry not found |
BlouseJury/Mistral-7B-Discord-0.2 | BlouseJury | 2024-03-06T01:58:19Z | 714 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T15:20:50Z | ---
language:
- en
license: apache-2.0
tags:
- finetune
pipeline_tag: text-generation
model-index:
- name: Mistral-7B-Discord-0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.73
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlouseJury/Mistral-7B-Discord-0.2
name: Open LLM Leaderboard
---
# Mistral-7B-Discord-0.2
This model is a finetune of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on roughly 40 million tokens of mostly unformatted, anonymized Discord messages, trained for 4 epochs.
This is a base model.
## Model Details
- **Finetuned from model:** mistralai/Mistral-7B-v0.1
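
Since the card ships no usage snippet, the following is a minimal, illustrative sketch using standard `transformers` text generation (the prompt is a placeholder; as a base model it expects plain text to continue, not a chat template):

```python
# Minimal illustrative sketch (standard transformers usage; the prompt is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BlouseJury/Mistral-7B-Discord-0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Base model: give it plain text to continue rather than a chat template.
inputs = tokenizer("the best thing about this server is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```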
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.55|
|AI2 Reasoning Challenge (25-Shot)|60.58|
|HellaSwag (10-Shot) |82.49|
|MMLU (5-Shot) |62.82|
|TruthfulQA (0-shot) |42.73|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |30.93|
|
nbeerbower/bophades-mistral-truthy-DPO-7B | nbeerbower | 2024-04-06T03:12:26Z | 714 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:jondurbin/truthy-dpo-v0.1",
"base_model:nbeerbower/bophades-v2-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T03:03:32Z | ---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/bophades-v2-mistral-7B
datasets:
- jondurbin/truthy-dpo-v0.1
---

# bophades-mistral-truthy-DPO-7B
[bophades-v2-mistral-7B](https://huggingface.co/nbeerbower/bophades-v2-mistral-7B) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
### Method
Finetuned using an A100 on Google Colab. 🙏
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=2,
gradient_checkpointing=True,
learning_rate=2e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=1024,
max_length=1536,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
alibidaran/Gemma2_Farsi | alibidaran | 2024-06-10T19:27:05Z | 714 | 2 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"fa",
"doi:10.57967/hf/2466",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-13T16:50:04Z | ---
library_name: transformers
license: apache-2.0
language:
- fa
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is Persian Q&A fine-tuned on Google's Gemma open-source model. Users can ask general question from it. It can be used for chatbot applications and fine-tuning for
other datasets.
- **Developed by:** Ali Bidaran
- **Language(s) (NLP):** Farsi
- **Finetuned from model [optional]:** Gemma2b
## Uses
This model can be used for developing chatbot applications, Q&A, instruction engineering, and fine-tuning with other Persian datasets.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
``` python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, GemmaTokenizer
model_id = "alibidaran/Gemma2_Farsi"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id, token=os.environ['HF_TOKEN'])
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0}, token=os.environ['HF_TOKEN'])
prompt = "چند روش برای کاهش چربی بدن ارائه نمایید؟"
text = f"<s> ###Human: {prompt} ###Asistant: "
inputs=tokenizer(text,return_tensors='pt').to('cuda')
with torch.no_grad():
outputs=model.generate(**inputs,max_new_tokens=400,do_sample=True,top_p=0.99,top_k=10,temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
|
flammenai/flammen18X-mistral-7B | flammenai | 2024-04-17T05:06:02Z | 714 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"not-for-all-audiences",
"dataset:ResplendentAI/NSFW_RP_Format_DPO",
"base_model:flammenai/flammen18-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T04:49:07Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen18-mistral-7B
datasets:
- ResplendentAI/NSFW_RP_Format_DPO
tags:
- nsfw
- not-for-all-audiences
---

# flammen18X-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [ResplendentAI/NSFW_RP_Format_DPO](https://huggingface.co/datasets/ResplendentAI/NSFW_RP_Format_DPO).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=1024,
max_length=1536,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
PrunaAI/llava-llama-3-8b-v1_1-GGUF-smashed | PrunaAI | 2024-04-26T11:11:46Z | 714 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-26T10:07:12Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the xtuner/llava-llama-3-8b-v1_1 model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/llava-llama-3-8b-v1_1-GGUF-smashed and below it, a specific filename to download, such as: llava-llama-3-8b-v1_1.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/llava-llama-3-8b-v1_1-GGUF-smashed llava-llama-3-8b-v1_1.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/llava-llama-3-8b-v1_1-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/llava-llama-3-8b-v1_1-GGUF-smashed llava-llama-3-8b-v1_1.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m llava-llama-3-8b-v1_1.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./llava-llama-3-8b-v1_1.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./llava-llama-3-8b-v1_1.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
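Beyond those guides, here is a minimal sketch of loading the same GGUF file through LangChain's llama-cpp-python wrapper (class and parameter names assume a recent `langchain-community` release; adjust `n_gpu_layers` and the prompt template for your setup):
```python
from langchain_community.llms import LlamaCpp

# Load the local GGUF file through llama-cpp-python's LangChain integration
llm = LlamaCpp(
    model_path="./llava-llama-3-8b-v1_1.IQ3_M.gguf",
    n_ctx=32768,      # max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

print(llm.invoke("<s>[INST] Write a story about llamas. [/INST]"))
```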
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF | mradermacher | 2024-06-12T11:18:28Z | 714 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:aqua_rat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T00:34:46Z | ---
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
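For the multi-part files listed below, the parts can be concatenated into a single GGUF before use, for example:
```shell
# Join the two-part Q6_K download into one usable GGUF file
cat Smaug-Llama-3-70B-Instruct-32K.Q6_K.gguf.part1of2 \
    Smaug-Llama-3-70B-Instruct-32K.Q6_K.gguf.part2of2 \
    > Smaug-Llama-3-70B-Instruct-32K.Q6_K.gguf
```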
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kaballas/lora_model | Kaballas | 2024-06-23T06:29:02Z | 714 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-23T05:31:54Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** Kaballas
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
m3hrdadfi/gpt2-persian-qa | m3hrdadfi | 2021-07-30T09:00:42Z | 713 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"fa",
"dataset:persian_qa",
"dataset:parsinlu_reading_comprehension",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: fa
datasets:
- persian_qa
- parsinlu_reading_comprehension
tags:
- text-generation
widget:
- text: "قرارداد کرسنت قراردادی برای فروش روزانه معادل 500 میلیون فوت مکعب، گاز ترش میدان سلمان است، که در سال 1381 و در زمان وزارت بیژن نامدار زنگنه در دولت هفتم مابین شرکت کرسنت پترولیوم و شرکت ملی نفت ایران منعقد گردید. مذاکرات اولیه این قرارداد از سال 1997 آغاز شد و در نهایت، سال 2001 ( 1381 ) به امضای این تفاهم نامه مشترک انجامید. بر اساس مفاد این قرارداد، مقرر شده بود که از سال 2005 با احداث خط لوله در خلیج فارس، گاز فرآورده نشده میدان سلمان (مخزن مشترک با ابوظبی)، به میزان روزانه 500 میلیون فوت مکعب (به قول برخی منابع 600 میلیون فوت مکعب) به امارات صادر شود. این قرارداد مطابق قوانین داخلی ایران بسته شدهو تنها قرارداد نفتی ایران است که از طرف مقابل خود، تضمین گرفتهاست. اجرای این پروژه در سال 1384 با دلایل ارایه شده از سوی دیوان محاسبات ایران از جمله تغییر نیافتن بهای گاز صادراتی و ثابت ماندن آن در هفت سال اول اجرای قرارداد متوقف شد. این در حالی است که طبق تعریف حقوقی، دیوان محاسبات ایران، حق دخالت در قراردادها، پیش از آنکه قراردادها اجرایی و مالی شوند را ندارد. پرسش: طرفین قرار داد کرسنت کیا بودن؟ پاسخ:"
- text: "ناف جایی قرار گرفته که در واقع بندناف در داخل رحم در آنجا به شکم جنین وصل بودهاست. بندناف که جفت را به جنین متصل کرده بعد از تولد از نوزاد جدا میشود. برای جدا کردن بند ناف از دو پنس استفاده میکنند و بین آن دو را میبرند. پنس دیگری نزدیک شکم نوزاد قرار داده میشود که بعد از دو روز برداشته خواهد شد. بندناف باقیمانده طی 15 روز خشک شده و میافتد و به جای آن اسکاری طبیعی به جای میماند. البته بر خلاف تصور عامه مردم شکل ناف در اثر بریدن بند ناف به وجود نمیآید و پیش از این در شکم مادر حالت ناف شکل گرفتهاست. شکل ناف در میان مردم مختلف متفاوت است و اندازه آن بین 1 ٫ 5 تا 2 سانتیمتر است. تمام پستانداران جفتزیست ناف دارند. ناف در انسانها به سادگی قابل مشاهدهاست. پرسش: بند ناف انسان به کجا وصل است؟ پاسخ:"
- text: "بیش از ده هزار سال است که انسانها در قاره آمریکا زندگی میکنند. قاره آمریکا توسط کریستف کلمب و در سال 1492 کشف شد اما او به اشتباه فکر کرد که آنجا هندوستان است اما مدتها بعد آمریگو وسپوچی اعلام کرد که این قاره جدیدی است. اما تاریخ آمریکا به عنوان یک کشور مستقل به سال 1783 میلادی بازمیگردد که در آن آمریکا بر طبق معاهده پاریس به رسمیت شناخته گردید. پرسش: قاره آمریکا در چه سالی کشف شد؟ پاسخ:"
- text: "الکترونیک آرتز یا بهطور مختصر ایای شرکتی آمریکایی است که از بزرگترین شرکتهای تولید و توزیع بازیهای رایانهای بهشمار میآید. تریپ هاوکینگز این شرکت را در سال 1982 ت سیس کرد و هدف اولیه او تولید انواعی از بازیهای رایانهای بود که در خانه میتوان با آنها بازی کرد. ایای در اواخر دهه 80 به بهبود و توسعه حوزه کاری خود در زمینه بازیهای رایانهای پرداخت و با جذب چندین چهره مبتکر، موفق به رشد و توسعه بسیار در این زمینه شد. شرکت ایای در سال 2007 رتبه هشتم در فهرست بزرگترین شرکتهای طراحی نرمافزار را به خود اختصاص داد. درآمد سالانه شرکت ایای در مه 2008 به بیش از 4 ٫ 02 میلیارد دلار رسید و این مقدار، رو به افزایش است. موفقترین بازیهای ایای، بازیهای ورزشی (که توسط بخش ایای اسپورتز، وابسته به این شرکت تولید میشود)، بازیهای برگرفته از فیلمهای محبوب و البته بازیهای معروفی است که این شرکت همواره به ساختن آنها مشغول بودهاست از جمله این بازیها میتوان به بازیهایی مانند نید فور اسپید، مدال افتخار، سیمز، بتل فیلد و برن اوت اشاره کرد. یک نکته حایز اهمیت در مورد این شرکت این است که در جمع 5 شرکت منفور دنیا قرار دارد. پرسش: بازیهای سبک ورزشی شرکت الکترونیک آرتز توسط کدوم قسمت ساخته میشه؟ پاسخ:"
- text: "کویر یا نمک زار منطقهای است که به دلیل موقعیت جغرافیایی (معمولا ختم رودخانهها در آن) و حرارت شدید آفتاب به نمکزار بدل شده باشد. برخی کویرها قبلا دریاچه یا دریاهایی بودهاند که در اثر تبخیر آب از آنها به نمکزار بدل شدهاند. کویر مرکزی ایران که دشت کویر نامیده میشود، درون خود تعداد زیادی کویر کوچکتر، مانند کویر درانجیر، کویر ساغند، کویر بند ریگ را جا دادهاست. با وجود اینکه در بین عامه مردم رایج است که اصطلاح 'کویر' و 'بیابان' را بهجای یکدیگر بهکار میبرند ولی بین این دو اصطلاح تفاوت اساسی وجود دارد. بیابان به بخشی از مناطق خشک گفته میشود که بارندگی سالانه آن کمتر از 50 میلیمتر است و ممکن است چند سال در آن باران نبارد و با کمآبی و تبخیر شدید مواجه است و پوشش گیاهی آن بسیار ضعیف است. اما کویر به زمینهای رسی پفکرده، با شوری و نمک بسیار شدید گفته میشود که گیاهان نمیتوانند در آن رشد نمایند. در بعضی از کویرها که شوری خاک کمتر است، ممکن است گیاهانی مانند گز که دربرابر املاح نمکی مقاوم است، در آن رشد نماید. پرسش: بافت گیاهی در کویر چگونه است؟ پاسخ:"
- text: "قطبنما وسیلهای برای تعیین جهت (جهتیابی) است. این وسیله با استفاده از میدان مغناطیسی زمین جهت قطب شمال را نشان میدهد که در حقیقت شمال مغناطیسی زمین است که با شمال حقیقی مقداری فاصله دارد. زاویه بین شمال حقیقی و شمال مغناطیسی، میل مغناطیسی نامیده میشود. امروزه برای تعیین شمال حقیقی از قطبنماهای پیشرفتهتری مانند قطبنمای ژیروسکوپی استفاده میشود. قطبنمایی که از یک آهنربا ساخته شده یعنی قطبنمای مغناطیسی جهت را نشان میدهد زیرا زمین چون آهنربای بزرگی عمل میکند. نیروی آهنربایی زمین قطبنما یا سوزن مغناطیسی را به سوی شمال و جنوب میکشد. کسی نمیداند که چه کسی اول بار قطبنما را ساخت. برخی گمان میکنند که چینیان نخستین بار قطبنما را ساختند برخی دیگر میگویند که قطبنما در ایتالیا اختراع شدهاست. بعضی از نخستین قطبنماها تکههای اکسید مغناطیسی آهن بودهاند که بر قطعات چوبی یا چوبپنبه قرار داشتند و در یک ظرف آب شناور بودند. اکسید مغناطیسی آهن نوعی کانی آهن است یک نام دیگر آن ماگنتیت است. تکههای ماگنتیت آهنرباهای طبیعی هستند. پس از آن مردم ساختن آهنربا از فولاد را یادگرفتند و توانستند قطبنماهای بهتری بسازند. پرسش: اکسید مغناطیسی آهن چیه؟ پاسخ:"
- text: "لاستیک طبیعی که لاستیک هندی یا کایوچو نیز نامیده میشود، قدیمیترین الاستومر تجاری است که از لاتکس ساخته میشود. لاتکس ترشحات داخلی یک درخت گرمسیری به نام درخت لاستیک است. لاتکس در شکل خام خود، نوعی چسب بسیار خوب است و میتوان با انحلال آن در حلالهای مناسب، چسبهای مختلفی تولید کرد. لاتکس در ابتدای تولید، از پلیمرهایی از ترکیب آلی ایزوپرین با ناخالصیهای جزیی از سایر ترکیبات آلی، به علاوه آب تشکل شدهاست. تایلند، مالزی و اندونزی کشورهای پیشرو در تولید لاستیک هستند. انواع پلی ایزوپرین که به عنوان لاستیکهای طبیعی استفاده میشوند، در دسته الاستومرها طبقهبندی میشوند. اولین استفاده از لاستیک توسط فرهنگهای بومی آمریکای میانه انجام شد. آنها از این لاستیک برای ساخت توپ بازی استفاده میکردند. بعدها لاستیک توسط فرهنگهای مایا و آزتک مورد استفاده قرار گرفت. آزتکها علاوه بر ساخت توپ، از لاستیک برای اهداف دیگری مانند ساخت ظروف و ضدآب ساختن منسوجات از طریق اشباع آنها با شیره لاتکس استفاده میکردند. پرسش: آمریکای میانه در ابتدا از لاستیک برای تولید چی استفاده میکرد؟ پاسخ:"
- text: "آتیلا ( 405 453 میلادی) یکی از رهبران قوم هون بود که بزرگترین امپراتوری را در اروپا، از رود اورال تا دانوب تشکیل داد. در زمان فرمانروایی، وی یکی از مخوفترین دشمنان امپراتوریهای روم غربی و شرقی بود. رومیان به او لقب تازیانه خداوند داده بودندو به او باج میدادند تا کاری به کار رم نداشته باشد. آتیلا در آغاز به ایران حمله کرد و با شکست مواجه شد. حملهای که او در سال 441 میلادی به امپراتوری بیزانس کرد باعث شد تا تصمیم به حملات بیشتری به سوی غرب بگیرد. وی در اروپا شهرهای بسیاری را نابود و غارت کرد.سرانجام، در نبرد دشت کاتالانیها، در مقابل فلاویوس آییتیوس شکست خورد. در این جنگ، رومیها و آلانیها به مصاف با هونها رفتند.هونها در ناحیه بین رود ولگا و دشتهای مجارستان میزیستند، از آغاز سده پنجم به تاخت و تازهای فراوان و پرسودی در حوالی رود دانوب دست زدند، بنابراین، در حدود 445 تا 440 میلادی، دربار آتیلا به تجمل و زیبایی آراسته بود، شماره اسیرانی که میگرفتند بسیار بود، هر دو زبان یونانی و لاتین در دربار تکلم میشد، و دبیران رومیتبار رویدادهای خارجی را همواره به آگاهی خان میرساندند، آتیلا، زرد رنگتر از بیشتر افراد قومش بود، پرسش: رومیها چه لقبی به اتیلا داده بودند؟ پاسخ:"
- text: "ماده سوختنی مادهای است که در اثر تغییرات (معمولا شیمیایی) تولید انرژی مفید میکند که بعدا میتواند تبدیل به انرژی مکانیکی شود. این تغییرات معمولا با سوختن (یعنی ترکیب با اکسیژن) همراه است. فرایندهای مورد استفاده برای تبدیل سوخت به انرژی عبارتند از: واکنشهای شیمیایی مختلف و گرمازا، واکنشهای هستهای مانند شکافت هستهای یا گداخت هستهای. هیدروکربنها تا حد زیادی شایعترین منبع سوخت مورد استفاده توسط انسان است، اما در بسیاری از موارد فلزات رادیو اکتیو نیز استفاده میشوند. اولین استفاده از سوخت توسط بشر ، احتراق و سوزاندن تکههای چوب در حدود 2 میلیون سال پیش توسط انسان راست قامت بود . به صورت کلی در طول تاریخ زندگی بشر که تا به حال با آن آشنا شدهایم ، تنها سوخت هایی که بیشترین استفاده را داشته است از گیاهان و یا چربی حیوانات بدست میآمده است و مورد استفاده انسان قرار گرفته است . انسانها از 6000 سال قبل از میلاد مسیح برای ذوب آهن از زغال چوب و مشتقات چوب استفاده میکردند. بعدها این سوختها جای خودشان را با کک عوض کردند . به دلیل اینکه در حوالی قرن 18 جنگلهای اروپا در حال نابودی بودند. پرسش: سوخت چجوری انرژی قابل استفاده تولید میکنه؟ پاسخ:"
- text: "ژرمن شپرد یا سگ چوپان آلمانی یکی از نژادهای سگ است. سگ چوپان آلمانی یکی از نژادهای اصیل آلمانی است که برای نخستین بار در سال 1899 ثبت گردید. سگی باهوش، شجاع و مناسب برای کارهای مختلف از جمله گله داری، نگهبانی، راهنمای نابینایان، همراه خانواده، و جستجو و نجات است. قد استاندارد تا جدوگاه در نرها 60 تا 65 سانتیمتر و در مادهها 55 تا 60 سانتیمتر است. طول عمر از 9 تا 13 سال است. این نژاد را اکثر افراد به دلیل استفاده در فیلمهایی نظیر رکس میشناسند و همچنین این سگ حضور موثری در صحنههای امدادی دارد. در خاورمیانه دستههایی از شپردهای پلاس فراوان هستند اما نژاد ژرمن شپرد بیشتر در اروپا زندگی دیده شدهاست. مهمترین ویژگی در این نژاد رفتارهای اشرافی، شهامت و توانایی آموختن رفتارها و فعالیتهای اختصاصی است. نخستین ویژگی یک جرمن شپرد خوب، قدرت، چالاکی، عضلات مناسب و هوشیاری است. رنگ در سگهای ژرمن شپرد متفاوت است و تقریبا اکثر رنگها قابل قبول هستند. با این وجود رنگهای خیلی کم رنگ یا سفید یک دست قابل قبول نمیباشد. پرسش: عمر سگ ژرمن شپرد چند ساله؟ پاسخ:"
---
# GPT2 QA - Persian
It is a new approach to using GPT2 for downstream NLP tasks such as QA. The model was trained on PersianQA and evaluated on PersianQA and ParsiNLU (Reading Comprehension).
## Dataset
- [PersianQA](https://github.com/sajjjadayobi/PersianQA)
- [ParsiNLU](https://github.com/persiannlp/parsinlu)
## Evaluation
The following table summarizes the scores obtained by the model.
| Dataset | F1 Score (%) | Exact Match (%) | Total (#) |
|:---------:|:------------:|:---------------:|:---------:|
| ParsiNLU | 46.95 | 20.39 | 564 |
| PersianQA | 45.93 | 23.19 | 651 |
## Demo
[Streamlit GPT2 QA - Persian](https://huggingface.co/spaces/m3hrdadfi/gpt2-persian-qa)
## How to use
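A hedged sketch, assuming the standard 🤗 Transformers text-generation pipeline and the prompt format used by the widget examples above (`<context> پرسش: <question> پاسخ:`); the generation parameters are illustrative only:
```python
from transformers import pipeline

# Illustrative only: load the checkpoint with the generic text-generation pipeline
generator = pipeline("text-generation", model="m3hrdadfi/gpt2-persian-qa")

context = "..."   # a Persian passage, as in the widget examples above
question = "..."  # a Persian question about that passage

# The widget examples format the prompt as: <context> پرسش: <question> پاسخ:
prompt = f"{context} پرسش: {question} پاسخ:"
output = generator(prompt, max_new_tokens=50, do_sample=False)
print(output[0]["generated_text"][len(prompt):])
```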
A more detailed walkthrough from the author is still TODO (will be filled shortly). |
fcakyon/yolov5n-v7.0 | fcakyon | 2022-12-20T09:51:49Z | 713 | 1 | transformers | [
"transformers",
"object-detection",
"computer-vision",
"vision",
"yolo",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] | object-detection | 2022-12-13T21:06:31Z | ---
license: gpl-3.0
inference: false
tags:
- object-detection
- computer-vision
- vision
- yolo
- yolov5
datasets:
- detection-datasets/coco
---
### How to use
- Install yolov5:
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('fcakyon/yolov5n-v7.0')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img)
# inference with larger input size
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
# point --data at a dataset yaml describing your custom train/val images and class names
yolov5 train --data data.yaml --img 640 --batch 16 --weights fcakyon/yolov5n-v7.0 --epochs 10 --device cuda:0
``` |
timm/dpn92.mx_in1k | timm | 2023-04-21T21:57:43Z | 713 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.01629",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-21T21:57:12Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for dpn92.mx_in1k
A DPN (Dual-Path Net) image classification model. Trained on ImageNet-1k in MXNet by paper authors and ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 37.7
- GMACs: 6.5
- Activations (M): 18.2
- Image size: 224 x 224
- **Papers:**
- Dual Path Networks: https://arxiv.org/abs/1707.01629
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/cypw/DPNs
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dpn92.mx_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dpn92.mx_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 336, 56, 56])
# torch.Size([1, 704, 28, 28])
# torch.Size([1, 1552, 14, 14])
# torch.Size([1, 2688, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dpn92.mx_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2688, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{Chen2017,
title={Dual Path Networks},
  author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
journal={arXiv preprint arXiv:1707.01629},
year={2017}
}
```
|