| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (list, 1–4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/MermaidMixtral-2x7b-GGUF | mradermacher | 2024-05-06T05:11:39Z | 401 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/MermaidMixtral-2x7b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-06T20:35:29Z | ---
base_model: TroyDoesAI/MermaidMixtral-2x7b
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/MermaidMixtral-2x7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
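As a minimal sketch (assuming the `llama-cpp-python` bindings are installed and one of the single-file quants from the table below has been downloaded), loading a quant locally looks roughly like this; the file name matches the Q4_K_M entry in the next section:
```python
from llama_cpp import Llama

# Illustrative only: file name taken from the Q4_K_M row of the quant table below.
llm = Llama(model_path="MermaidMixtral-2x7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a mermaid flowchart for a login process.", max_tokens=256)
print(out["choices"][0]["text"])
```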
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x7b-GGUF/resolve/main/MermaidMixtral-2x7b.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/M7Yamshadowexperiment28_Experiment26T3qm7xp | MaziyarPanahi | 2024-04-08T07:58:18Z | 401 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/M7Yamshadowexperiment28-7B",
"base_model:automerger/Experiment26T3qm7xp-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-04-08T07:44:09Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: M7Yamshadowexperiment28_Experiment26T3qm7xp
base_model:
- automerger/M7Yamshadowexperiment28-7B
- automerger/Experiment26T3qm7xp-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# M7Yamshadowexperiment28_Experiment26T3qm7xp
M7Yamshadowexperiment28_Experiment26T3qm7xp is a merge of the following models:
* [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B)
* [automerger/Experiment26T3qm7xp-7B](https://huggingface.co/automerger/Experiment26T3qm7xp-7B)
## 💻 Usage
```python
# Install dependencies first (e.g. in a notebook cell): !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/M7Yamshadowexperiment28_Experiment26T3qm7xp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CultriX/MonaCeption-7B-SLERP | CultriX | 2024-04-11T12:38:19Z | 401 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:CultriX/MonaTrix-v4",
"base_model:CultriX/MergeCeption-7B-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-11T02:17:41Z | ---
base_model:
- CultriX/MonaTrix-v4
- CultriX/MergeCeption-7B-v3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [CultriX/MergeCeption-7B-v3](https://huggingface.co/CultriX/MergeCeption-7B-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: CultriX/MergeCeption-7B-v3
layer_range: [0, 32]
- model: CultriX/MonaTrix-v4
layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/MonaTrix-v4
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
``` |
ibivibiv/orthorus_125b_v3 | ibivibiv | 2024-04-15T22:42:04Z | 401 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T15:00:05Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Orthorus v3 125b
Slight variation of v2
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TechxGenus/Jamba-v0.1-hf | TechxGenus | 2024-04-20T17:09:27Z | 401 | 3 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"mamba",
"moe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-20T13:01:54Z | ---
library_name: transformers
license: apache-2.0
tags:
- jamba
- mamba
- moe
---
A compatible version of [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) in transformers that no longer requires `trust_remote_code=True`.
# Model Card for Jamba
Jamba is a state-of-the-art, hybrid SSM-Transformer LLM. It delivers throughput gains over traditional Transformer-based models, while outperforming or matching the leading models of its size class on most common benchmarks.
Jamba is the first production-scale Mamba implementation, which opens up interesting research and application opportunities. While this initial experimentation shows encouraging gains, we expect these to be further enhanced with future optimizations and explorations.
This model card is for the base version of Jamba. It’s a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length, and can fit up to 140K tokens on a single 80GB GPU.
For full details of this model please read the [release blog post](https://www.ai21.com/blog/announcing-jamba).
## Model Details
- **Developed by:** [AI21](https://www.ai21.com)
- **Model type:** Joint Attention and Mamba (Jamba)
- **License:** Apache 2.0
- **Context length:** 256K
- **Knowledge cutoff date:** March 5, 2024
## Usage
### Prerequisites
Jamba requires you to use `transformers` version 4.40.0 or higher:
```bash
pip install transformers>=4.40.0
```
In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:
```bash
pip install mamba-ssm causal-conv1d>=1.2.0
```
You also need the model on a CUDA device.
You can run the model without the optimized Mamba kernels, but this is **not** recommended, as it results in significantly higher latencies. To do so, specify `use_mamba_kernels=False` when loading the model.
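For example (a minimal sketch; only the `use_mamba_kernels` argument comes from the note above, the rest is illustrative):
```python
from transformers import AutoModelForCausalLM

# Fallback path without the optimized Mamba kernels (noticeably slower)
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/Jamba-v0.1-hf",
    use_mamba_kernels=False,
)
```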
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("TechxGenus/Jamba-v0.1-hf")
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Jamba-v0.1-hf")
input_ids = tokenizer("In the recent Super Bowl LVIII,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.batch_decode(outputs))
# ["<|startoftext|>In the recent Super Bowl LVIII, the Kansas City Chiefs emerged victorious, defeating the San Francisco 49ers in a thrilling overtime showdown. The game was a nail-biter, with both teams showcasing their skills and determination.\n\nThe Chiefs, led by their star quarterback Patrick Mahomes, displayed their offensive prowess, while the 49ers, led by their strong defense, put up a tough fight. The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe victory marked the Chiefs' second Super Bowl win in four years, solidifying their status as one of the top teams in the NFL. The game was a testament to the skill and talent of both teams, and a thrilling end to the NFL season.\n\nThe Super Bowl is not just about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded lineup, including Usher, Alicia Keys, and Lil Jon. The show was a spectacle of music and dance, with the performers delivering an energetic and entertaining performance.\n"]
```
<details>
<summary><strong>Loading the model in half precision</strong></summary>
The published checkpoint is saved in BF16. In order to load it into RAM in BF16/FP16, you need to specify `torch_dtype`:
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("TechxGenus/Jamba-v0.1-hf",
torch_dtype=torch.bfloat16) # you can also use torch_dtype=torch.float16
```
When using half precision, you can enable the [FlashAttention2](https://github.com/Dao-AILab/flash-attention) implementation of the attention blocks. To use it, you also need the model on a CUDA device. Since the model is too big to fit on a single 80GB GPU in this precision, you'll also need to parallelize it using [accelerate](https://huggingface.co/docs/accelerate/index):
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("TechxGenus/Jamba-v0.1-hf",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto")
```
</details>
<details><summary><strong>Load the model in 8-bit</strong></summary>
**Using 8-bit precision, it is possible to fit up to 140K sequence lengths on a single 80GB GPU.** You can easily quantize the model to 8-bit using [bitsandbytes](https://huggingface.co/docs/bitsandbytes/index). To avoid degrading model quality, we recommend excluding the Mamba blocks from quantization:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch
quantization_config = BitsAndBytesConfig(load_in_8bit=True,
llm_int8_skip_modules=["mamba"])
model = AutoModelForCausalLM.from_pretrained("TechxGenus/Jamba-v0.1-hf",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
quantization_config=quantization_config)
```
</details>
### Fine-tuning example
Jamba is a base model that can be fine-tuned for custom solutions (including for chat/instruct versions). You can fine-tune it using any technique of your choice. Here is an example of fine-tuning with the [PEFT](https://huggingface.co/docs/peft/index) library:
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Jamba-v0.1-hf")
model = AutoModelForCausalLM.from_pretrained("TechxGenus/Jamba-v0.1-hf", device_map='auto')
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["embed_tokens", "x_proj", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
```
## Results on common benchmarks
| Benchmark | Score |
|--------------|:-----:|
| HellaSwag | 87.1% |
| Arc Challenge | 64.4% |
| WinoGrande | 82.5% |
| PIQA | 83.2% |
| MMLU | 67.4% |
| BBH | 45.4% |
| TruthfulQA | 46.4% |
| GSM8K (CoT) | 59.9% |
It's crucial that the 'BOS' token is added to all prompts, which might not be enabled by default in all eval frameworks.
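As a quick sanity check (a sketch, not part of the original card), you can confirm that the tokenizer prepends BOS before wiring prompts into an eval framework:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Jamba-v0.1-hf")
ids = tokenizer("In the recent Super Bowl LVIII,")["input_ids"]
# If this fails, prepend tokenizer.bos_token_id (or enable BOS addition) in your eval harness.
assert ids[0] == tokenizer.bos_token_id
```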
## Notice
Jamba is a pretrained base model and did not undergo any alignment for instruct/chat interactions.
As a base model, Jamba is intended for use as a foundation layer for fine tuning, training, and developing custom solutions. Jamba does not have safety moderation mechanisms and guardrails should be added for responsible and safe use.
## About AI21
AI21 builds reliable, practical, and scalable AI solutions for the enterprise.
Jamba is the first in AI21’s new family of models, and the Instruct version of Jamba is available in beta via the [AI21 platform](https://www.ai21.com/studio).
|
flammenai/flammen21-mistral-7B | flammenai | 2024-04-22T03:05:00Z | 401 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:flammenai/Date-DPO-v2",
"base_model:flammenai/flammen20-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-22T02:53:46Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen20-mistral-7B
datasets:
- flammenai/Date-DPO-v2
---

# flammen21-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [flammenai/Date-DPO-v2](https://huggingface.co/datasets/flammenai/Date-DPO-v2).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an L4 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
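# Context assumed from the linked Colab/DPO tutorial (not shown in this snippet):
#   import torch
#   from transformers import AutoModelForCausalLM, TrainingArguments
#   from peft import LoraConfig
#   from trl import DPOTrainer
#   model_name points at the base model (flammenai/flammen20-mistral-7B);
#   new_model, dataset, and tokenizer are defined earlier in the notebook.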
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=2048,
max_length=4096,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
QuantFactory/Llama-3-8B-Web-GGUF | QuantFactory | 2024-04-24T04:59:24Z | 401 | 4 | transformers | [
"transformers",
"gguf",
"agents",
"agent",
"llm",
"llama",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"base_model:McGill-NLP/Llama-3-8B-Web",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-24T04:45:28Z | ---
license: llama3
datasets:
- McGill-NLP/WebLINX
language:
- en
library_name: transformers
tags:
- agents
- agent
- llm
- llama
base_model: McGill-NLP/Llama-3-8B-Web
pipeline_tag: text-generation
---
# Llama-3-8B-Web-GGUF
- This is a quantized version of [McGill-NLP/Llama-3-8B-Web](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) created using llama.cpp.
## Model Description
Our first agent is a finetuned [`Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model, which was recently released by the Meta GenAI team. We have finetuned this model on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) dataset, which contains over 100K instances of web navigation and dialogue, each collected and verified by expert annotators. We use a curated 24K-instance subset for training. The training and evaluation data is available on [Huggingface Hub as `McGill-NLP/WebLINX`](https://huggingface.co/datasets/McGill-NLP/WebLINX).
**It surpasses GPT-4V (zero-shot `*`) by over 18% on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) benchmark**, achieving an overall score of 28.8% on the out-of-domain test splits (compared to 10.5% for GPT-4V). It chooses more useful links (34.1% vs 18.9% *seg-F1*), clicks on more relevant elements (27.1% vs 13.6% *IoU*) and formulates more aligned responses (37.5% vs 3.1% *chr-F1*).
## About `WebLlama`
| `WebLlama` | The goal of our project is to build effective human-centric agents for browsing the web. We don't want to replace users, but equip them with powerful assistants. |
|:---: | :---|
| Modeling | We build on top of cutting-edge libraries for training Llama agents on web navigation tasks. We will provide training scripts, optimized configs, and instructions for training cutting-edge Llamas. |
| Evaluation | Benchmarks for testing Llama models on real-world web browsing. This includes *human-centric* browsing through dialogue ([`WebLINX`](https://mcgill-nlp.github.io/weblinx/)), and we will soon add more benchmarks for automatic web navigation (e.g. Mind2Web). |
| Data | Our first model is finetuned on over 24K instances of web interactions, including `click`, `textinput`, `submit`, and dialogue acts. We want to continuously curate, compile and release datasets for training better agents. |
| Deployment | We want to make it easy to integrate Llama models with existing deployment platforms, including Playwright, Selenium, and BrowserGym. We are currently focusing on making this a reality. |
## Evaluation
We believe short demo videos showing how well an agent performs are NOT enough to judge an agent. Simply put, **we do not know if we have a good agent if we do not have good benchmarks.** We need to systematically evaluate agents on a wide range of tasks, spanning from simple instruction-following web navigation to complex dialogue-guided browsing.
<img src="assets/WebLINXTestSplits.png" style="width: 100%; max-width:800px"/>
This is why we chose [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) as our first benchmark. In addition to the training split, the benchmark has 4 real-world splits, with the goal of testing multiple dimensions of generalization: new websites, new domains, unseen geographic locations, and scenarios where the *user cannot see the screen and relies on dialogue*. It also covers 150 websites, including booking, shopping, writing, knowledge lookup, and even complex tasks like manipulating spreadsheets.
## Data
Although the 24K training examples from [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) provide a good starting point for training a capable agent, we believe that more data is needed to train agents that can generalize to a wide range of web navigation tasks. Although the model has been trained and evaluated on 150 websites, there are millions of websites that have never been seen by the model, with new ones being created every day.
**This motivates us to continuously curate, compile and release datasets for training better agents.** As an immediate next step, we will be incorporating `Mind2Web`'s training data into the equation, which also covers over 100 websites.
## Deployment
We are working hard to make it easy for you to deploy Llama web agents to the web. We want to integrate `WebLlama` with existing deployment platforms, including Microsoft's Playwright, ServiceNow Research's BrowserGym, and other partners.
## Code
The code for finetuning the model and evaluating it on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) benchmark is available now. You can find the detailed instructions in [modeling](https://github.com/McGill-NLP/webllama/tree/main/modeling). |
hikikomoriHaven/lama3-8b-hikikomori-v0.3-GGUF | hikikomoriHaven | 2024-04-24T09:13:13Z | 401 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-24T08:54:50Z | Entry not found |
mradermacher/Mixtral_AI_MasterMind_II-GGUF | mradermacher | 2024-05-08T23:58:57Z | 401 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LeroyDyer/Mixtral_AI_MasterMind_II",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-08T23:33:47Z | ---
base_model: LeroyDyer/Mixtral_AI_MasterMind_II
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_MasterMind_II
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MasterMind_II-GGUF/resolve/main/Mixtral_AI_MasterMind_II.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gaianet/Yi-1.5-34B-Chat-GGUF | gaianet | 2024-06-13T04:07:19Z | 401 | 1 | null | [
"gguf",
"text-generation",
"base_model:01-ai/Yi-1.5-34B-Chat",
"license:other",
"region:us"
]
| text-generation | 2024-05-12T04:27:54Z | ---
base_model: 01-ai/Yi-1.5-34B-Chat
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi-1.5-34B-Chat
model_type: yi
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi-1.5-34B-Chat-GGUF
## Original Model
[01-ai/Yi-1.5-34B-Chat](https://huggingface.co/01-ai/Yi-1.5-34B-Chat)
## Run with Gaianet
**Prompt template:** `chatml`
**Reverse prompt:** `<|im_end|>`
**Context size (chat_ctx_size):** `4096`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b3135*
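For a rough idea of what the settings above translate to outside GaiaNet (a sketch assuming `llama-cpp-python` and a downloaded quant file; the file name here is hypothetical):
```python
from llama_cpp import Llama

llm = Llama(model_path="Yi-1.5-34B-Chat-Q4_K_M.gguf", n_ctx=4096)  # hypothetical file name; n_ctx from chat_ctx_size
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce yourself briefly.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])  # the reverse prompt doubles as the stop token
print(out["choices"][0]["text"])
```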
|
Lewdiculous/llama-3-cat-8b-instruct-v1-GGUF-IQ-Imatrix | Lewdiculous | 2024-05-13T22:03:15Z | 401 | 15 | null | [
"gguf",
"roleplay",
"llama3",
"sillytavern",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-13T13:32:15Z | ---
license: apache-2.0
tags:
- roleplay
- llama3
- sillytavern
---
# #llama3 #sillytavern #multimodal
GGUF-IQ-Imatrix quants for [**TheSkullery/llama-3-cat-8b-instruct-v1**](https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1). <br>
Imatrix data generated from the FP16-GGUF and conversions from the BF16-GGUF.
**Author:** <br>
"The model aims to respect system prompt to an extreme degree, provide helpful information regardless of situations, and offer maximum character immersion in given scenes."
> [!TIP]
> I apologize for disrupting your experience. <br>
> My upload speeds have been cooked and unstable lately. <br>
> If you **want** and you are **able to**... <br>
> You can [**support my various endeavors by clicking here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
> [!IMPORTANT]
> **Relevant:** <br>
> These quants have been done after the fixes from [**llama.cpp/pull/6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
> Use **KoboldCpp** version **1.64** or higher, make sure you're up-to-date.
> [!WARNING]
> Compatible SillyTavern presets recommended by the authors are hosted in the repository files or [here (Virt's Roleplay Presets - my personal recommendation)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets for testing.** <br>
> Feedback and support for the Authors is always welcome. <br>
> If there are any issues or questions let me know.
> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** (4.89 BPW) quant for up to 12288 context sizes.
**Original model information:**
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Cat-llama3-instruct Data Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%);
color: #D8DEE9;
margin: 0;
padding: 0;
font-size: 16px;
}
.container {
width: 80%;
max-width: 800px;
margin: 20px auto;
background-color: rgba(255, 255, 255, 0.02);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.1);
}
.header h1 {
font-size: 28px;
color: #ECEFF4;
margin: 0 0 20px 0;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
}
.update-section {
margin-top: 30px;
}
.update-section h2 {
font-size: 24px;
color: #88C0D0;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #ECEFF4;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
}
a {
color: #88C0D0;
text-decoration: none;
}
a:hover {
color: #A3BE8C;
}
pre {
background-color: rgba(255, 255, 255, 0.05);
padding: 10px;
border-radius: 5px;
overflow-x: auto;
}
code {
font-family: 'Courier New', monospace;
color: #A3BE8C;
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Cat-llama3-instruct</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/Uw0ycicMjZufSFTgXZMFd.png" alt="Model Card">
<p>Presented by:</p>
<ul>
<li>Dataset Builder: Dr. Kal'tsit (Kat)</li>
<li>Trainer/Funding: SteelSkull</li>
<li>Facilitator: Potatooff</li>
</ul>
<p><a href="https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct">70b variant of the model</a> (Trained by Dr. Kal'tsit, Posted by Turboderp), Please check it out!</p>
<h2>About:</h2>
<p>Cat-llama3-instruct is a llama 3 8b finetuned model focusing on system prompt fidelity, helpfulness and character engagement. The model aims to respect system prompt to an extreme degree, provide helpful information regardless of situations, and offer maximum character immersion (Role Play) in given scenes.</p>
<h2>Specific Aims:</h2>
<ul>
<li>System Instruction fidelity</li>
<li>Chain of Thought (COT)</li>
<li>Character immersion</li>
<li>Helpfulness for biosciences and general science</li>
</ul>
<h3>Methods</h3>
<h4>Dataset Preparation</h4>
<p>Huggingface dataset containing instruction-response pairs was systematically pulled. A GPT model was trained on high quality and helpful responses exclusively to serve as a standard model.</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/HovhwLIGO9cz8Az-h0tkn.png" alt="Huggingface dataset population distribution and filtration for each component">
<p>(Fig1. Huggingface dataset population distribution and filtration for each component)</p>
<p>The dataset was further filtered for length and COT responses:</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/-z9cZDNuGyCftdrZwR_46.png" alt="COT responses">
<p>(Fig 2. COT responses are all >50 tokens single turn)</p>
<p>Health-related data was also pulled from Chat Doctor, favoring detailed and step-by-step diagnosis:</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/RnXyoR6wyBDWfIy3vMvWm.png" alt="Health-related tasks">
<p>(Fig 3. Health related tasks are all >100 tokens, peaking at 450 tokens single turn)</p>
<h3>Model Training</h3>
<p>1*a100 for 6 days, 4 epochs</p>
<h2>Usage:</h2>
<h3>Prompt format: llama 3 </h3>
<ul>
<li>Settings/Context/Instruct (Provided by @Anthonyg5005)</li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208B%20Gen%20Settings.json">Cat 8B Gen Settings</a></li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208b%20Context.json">Cat 8b Context</a></li>
<li><a href="https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1/raw/main/Cat%208b%20Instruct.json">Cat 8b Instruct</a></li>
</ul>
<h3>Quants: </h3>
<ul>
<li><a href="https://huggingface.co/bartowski/llama-3-cat-8b-instruct-v1-GGUF">[bartowski/llama-3-cat-8b-instruct-v1-GGUF]</a></li>
</ul>
<pre><code> <|begin_of_text|>Below is a conversation between an user and an AI assistant. The ai assistant gives response regarding user's questions. USER: user: Given the task definition and input, reply with output. In this task, you are given a question and an answer, you would be asked to create the sentence based on the Question-Answer provided. It should be contained within the Question-Answer provided.
Question: What type of diseases in humans are caused by protozoa? Answer: protist diseases
ASSISTANT: AI: Task Explanation:
In this task, you are asked to create a sentence using the given question and the provided answer. The sentence you create should include the information from both the question and the answer in a concise and coherent manner.
Guidelines:
1. Read the given question and answer carefully.
2. Combine the information from the question and answer to create a new sentence.
3. Make sure your new sentence contains both the question's information and the answer provided.
4. Check your sentence for proper grammar and punctuation.
Steps to find the answer:
1. Read the question: "What type of diseases in humans are caused by protozoa?"
2. Read the answer: "protist diseases"
3. Combine the information from the question and answer to create a new sentence: "In humans, protist diseases are caused by protozoa."
4. Double-check the sentence for proper grammar and punctuation, and make necessary adjustments if needed.
Final Output:
In humans, protist diseases are caused by protozoa.< / s ><|end_of_text|></code></pre>
<h2>Showcases:</h2>
<p>The model does COT in grey and the calculated response in black; note that this behaviour is achieved through a system-card instruction to showcase system-prompt fidelity, and it is not fine-tuned into the model.</p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/6smLuiePQa8Y2H19ie-ZY.png" alt="Showcase 1">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/g9oP03RboHBBipk70GIHO.png" alt="Showcase 2">
<p>(Fig 4. Showcasing model doing COT to solve difficult tasks, extend and enrich its own answers)</p>
</div>
</div>
</body>
</html> |
Mxode/Qwen1.5-0.5B-L6-raw | Mxode | 2024-05-14T09:59:35Z | 401 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-14T09:54:47Z | ---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RDson/Llama-3-Peach-Instruct-4x8B-MoE-GGUF | RDson | 2024-05-21T14:46:08Z | 401 | 3 | null | [
"gguf",
"moe",
"llama",
"3",
"llama 3",
"4x8b",
"region:us"
]
| null | 2024-05-16T10:00:14Z | ---
tags:
- moe
- llama
- '3'
- llama 3
- 4x8b
---
# GGUF files of [Llama-3-Peach-Instruct-4x8B-MoE](https://huggingface.co/RDson/Llama-3-Peach-Instruct-4x8B-MoE).
# Llama-3-Peach-Instruct-4x8B-MoE
<img src="https://i.imgur.com/MlnauLb.jpeg" width="640"/>
This is an experimental MoE created using Mergekit from
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
* [rombodawg/Llama-3-8B-Instruct-Coder](https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder)
Evaluation:
Q4_K_M:
* GSM8K (5-shot): 0.6983 ± 0.0126
* GSM8K (8-shot, cot): 0.674 ± 0.0129
Mergekit yaml file:
```
base_model: Meta-Llama-3-8B-Instruct
experts:
- source_model: Meta-Llama-3-8B-Instruct
positive_prompts:
- "explain"
- "chat"
- "assistant"
- "think"
- "roleplay"
- "versatile"
- "helpful"
- "factual"
- "integrated"
- "adaptive"
- "comprehensive"
- "balanced"
negative_prompts:
- "specialized"
- "narrow"
- "focused"
- "limited"
- "specific"
- source_model: Llama-3-8B-Instruct-Coder
positive_prompts:
- "python"
- "math"
- "solve"
- "code"
- "programming"
- "javascript"
- "algorithm"
- "factual"
negative_prompts:
- "sorry"
- "cannot"
- "concise"
- "imaginative"
- "creative"
- source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
positive_prompts:
- "AI"
- "instructive"
- "chat"
- "assistant"
- "clear"
- "directive"
- "helpful"
- "informative"
- source_model: Hermes-2-Theta-Llama-3-8B
positive_prompts:
- "chat"
- "assistant"
- "analytical"
- "accurate"
- "code"
- "logical"
- "knowledgeable"
- "precise"
- "calculate"
- "compute"
- "solve"
- "work"
- "python"
- "javascript"
- "programming"
- "algorithm"
- "tell me"
- "assistant"
- "factual"
negative_prompts:
- "abstract"
- "artistic"
- "emotional"
- "mistake"
- "inaccurate"
gate_mode: hidden
dtype: float16
```
Some inspiration for the Mergekit yaml file is from [LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2](https://huggingface.co/LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2). |
mradermacher/MAmmoTH2-8x7B-GGUF | mradermacher | 2024-05-24T01:04:38Z | 401 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"base_model:TIGER-Lab/MAmmoTH2-8x7B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-20T20:30:53Z | ---
base_model: TIGER-Lab/MAmmoTH2-8x7B
datasets:
- TIGER-Lab/WebInstructSub
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kaleinaNyan/kolibri-mistral-0427-upd | kaleinaNyan | 2024-05-23T17:00:40Z | 401 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-22T16:48:12Z | ---
license: apache-2.0
language:
- en
- ru
---
## Description
This is an instruction-following model (based on Mistral v0.1 Base) optimized for the Russian language.
It was trained using [kolibrify](https://github.com/oKatanaaa/kolibrify) on a multitude of instruction datasets.
The model uses the ChatML template. It was trained to be sensitive to the system prompt, so experiment with it.
The model is currently in pre-alpha; later releases will include more details about the training procedure and data mix.
> [!NOTE]
> This model is an improved version of the older kolibri-mistral-0427.
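A minimal usage sketch (assuming the repository's tokenizer ships the ChatML chat template mentioned above; parameter values are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaleinaNyan/kolibri-mistral-0427-upd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a concise assistant. Answer in Russian."},
    {"role": "user", "content": "What is a large language model?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```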
## Instruction following evals
The model was tested using the following benchmarks:
- [ruIFEval](https://github.com/NLP-Core-Team/ruIFEval)
- [ifeval](https://github.com/google-research/google-research/tree/master/instruction_following_eval)
| Eval name | Strict Value | Loose Value |
|-----------|-------------:|------------:|
| Avg. | *53.81* | *56.57* |
| ifeval-prompt-level | 52.68 | 56.19 |
| ifeval-instruction-level | 62.82 | 66.18 |
| ru-ifeval-prompt-level | 44.36 | 46.39 |
| ru-ifeval-instruction-level | 55.39 | 57.55 |
|
mradermacher/Kancil-V1-llama3-fp16-GGUF | mradermacher | 2024-05-31T04:36:13Z | 401 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"llama3",
"indonesia",
"id",
"dataset:catinthebag/Tumpeng-1-Indonesian",
"base_model:catinthebag/Kancil-V1-llama3-fp16",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T18:40:48Z | ---
base_model: catinthebag/Kancil-V1-llama3-fp16
datasets:
- catinthebag/Tumpeng-1-Indonesian
language:
- id
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- unsloth
- llama3
- indonesia
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/catinthebag/Kancil-V1-llama3-fp16
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kancil-V1-llama3-fp16-GGUF/resolve/main/Kancil-V1-llama3-fp16.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/osorubeshi-pony-real-x-sdxl | John6666 | 2024-06-01T14:41:53Z | 401 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-01T14:36:09Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
---
Original model is [here](https://civitai.com/models/402800/osorubeshi-pony-real?modelVersionId=543212).
|
QuantFactory/deepseek-math-7b-rl-GGUF | QuantFactory | 2024-06-10T07:37:43Z | 401 | 0 | null | [
"gguf",
"text-generation",
"arxiv:2402.03300",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"license:other",
"region:us"
]
| text-generation | 2024-06-09T12:30:53Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
pipeline_tag: text-generation
base_model: deepseek-ai/deepseek-math-7b-rl
---
# QuantFactory/deepseek-math-7b-rl-GGUF
This is a quantized version of [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl) created using llama.cpp
# Model Description
<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<p align="center">
<a href="https://arxiv.org/pdf/2402.03300.pdf"><b>Paper Link</b>👁️</a>
</p>
<hr>
### 1. Introduction to DeepSeekMath
See the [Introduction](https://github.com/deepseek-ai/DeepSeek-Math) for more details.
### 2. How to Use
Here are some examples of how to use our model.
**Chat Completion**
❗❗❗ **Please use chain-of-thought prompt to test DeepSeekMath-Instruct and DeepSeekMath-RL:**
- English questions: **{question}\nPlease reason step by step, and put your final answer within \\boxed{}.**
- Chinese questions: **{question}\n请通过逐步推理来解答问题,并把最终答案放置于\\boxed{}中。**
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/deepseek-math-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
messages = [
{"role": "user", "content": "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.
```
User: {messages[0]['content']}
Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}
Assistant:
```
**Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input.
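For example, a single-turn prompt built manually from the template above (reusing the `model` and `tokenizer` loaded in the earlier snippet) might look like this:
```python
question = "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."
prompt = f"User: {question}\n\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # bos_token is added automatically
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```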
### 3. License
This code repository is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License. DeepSeekMath supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL) for more details.
### 4. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
citiusLTL/ludoBETO | citiusLTL | 2024-06-19T13:23:41Z | 401 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-06-18T10:52:35Z | ---
license: apache-2.0
widget:
- text: "Las [MASK] son adictivas."
---
<img src="o.svg" align="left" alt="logo" width="40" style="margin-right: 5px;" />
LudoBETO is a domain adaptation of a [Spanish BERT](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) language model. <br clear="left"/> It was adapted to the pathological gambling domain with a corpus extracted from a specialised [forum](https://www.ludopatia.org/web/index_es.htm). We used an LLM to automatically compile a lexical resource that guides the masking process of the language model and, therefore, helps it pay more attention to words related to pathological gambling.
For training we used a batch size of 8, the Adam optimizer with a learning rate of 2e-5, and cross-entropy as the loss function. We trained the model for four epochs on an NVIDIA GeForce RTX 4070 12GB GPU.
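The exact masking procedure is not reproduced here; the following is only an illustrative sketch of what lexicon-guided masking can look like for domain-adaptive MLM pre-training. The lexicon entries and masking probabilities are assumptions, not the resource used for this model.
```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")

# Hypothetical gambling-domain lexicon (illustrative only)
lexicon = {"apuestas", "casino", "ludopatía", "tragaperras", "apostar"}

def lexicon_guided_mask(text, p_domain=0.5, p_other=0.15):
    """Mask lexicon words more aggressively than the rest of the tokens."""
    tokens = tokenizer.tokenize(text)
    masked = [
        tokenizer.mask_token
        if random.random() < (p_domain if tok.lstrip("#").lower() in lexicon else p_other)
        else tok
        for tok in tokens
    ]
    return tokenizer.convert_tokens_to_string(masked)

print(lexicon_guided_mask("Las apuestas deportivas pueden ser adictivas."))
```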
## Usage
```python
from transformers import pipeline
pipe = pipeline("fill-mask", model="citiusLTL/ludoBETO")
text = pipe("Las [MASK] son adictivas.")
print(text)
```
## Load model directly
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("citiusLTL/ludoBETO")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/ludoBETO")
```
|
Ali-C137/Q2AW1M-0000 | Ali-C137 | 2024-06-21T20:44:07Z | 401 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-21T12:50:50Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deepansh1404/leetsummarizer-q4gguf | deepansh1404 | 2024-06-22T21:18:44Z | 401 | 0 | null | [
"gguf",
"text-generation-inference",
"region:us"
]
| null | 2024-06-22T19:21:08Z | ---
tags:
- text-generation-inference
--- |
gokaygokay/Florence-2-SD3-Captioner | gokaygokay | 2024-06-24T21:27:19Z | 401 | 3 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"art",
"image-text-to-text",
"custom_code",
"en",
"dataset:google/docci",
"dataset:google/imageinwords",
"dataset:ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| image-text-to-text | 2024-06-24T19:38:07Z | ---
license: apache-2.0
datasets:
- google/docci
- google/imageinwords
- ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- art
---
```
pip install -q datasets flash_attn timm einops
```
```python
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("gokaygokay/Florence-2-SD3-Captioner", trust_remote_code=True).to(device).eval()
processor = AutoProcessor.from_pretrained("gokaygokay/Florence-2-SD3-Captioner", trust_remote_code=True)
# Function to run the model on an example
def run_example(task_prompt, text_input, image):
prompt = task_prompt + text_input
# Ensure the image is in RGB mode
if image.mode != "RGB":
image = image.convert("RGB")
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
return parsed_answer
from PIL import Image
import requests
import copy
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
run_example("<DESCRIPTION>", "Describe this image in great detail.", image)
# {'<DESCRIPTION>': 'Captured at eye-level on a sunny day, a light blue Volkswagen Beetle is parked on a cobblestone street. The beetle is parked in front of a yellow building with two brown doors. The door on the right side of the frame is white, while the left side is a darker shade of blue. The car is facing the camera, and the car is positioned in the middle of the street.'}
``` |
microsoft/markuplm-large | microsoft | 2022-09-30T08:56:38Z | 400 | 17 | transformers | [
"transformers",
"pytorch",
"markuplm",
"en",
"arxiv:2110.08518",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language:
- en
---
# MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
## Usage
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/markuplm) and [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MarkupLM).
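As a quick start, a minimal sketch of encoding raw HTML with this checkpoint, following the processor flow documented for MarkupLM, might look like this; the HTML string is only an example.
```python
from transformers import MarkupLMProcessor, MarkupLMModel

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-large")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-large")

html_string = "<html><body><h1>Welcome</h1><p>MarkupLM encodes text and markup together.</p></body></html>"

# The processor extracts text nodes and their XPaths from the raw HTML
encoding = processor(html_string, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```
|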
patrickvonplaten/wav2vec2-base-100h-with-lm | patrickvonplaten | 2022-05-23T23:09:37Z | 400 | 8 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | Hello |
facebook/opt-iml-1.3b | facebook | 2023-01-26T01:35:09Z | 400 | 28 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"arxiv:2212.12017",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-01-26T00:08:49Z | ---
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
# OPT-IML
## Model Description
[OPT-IML (OPT + Instruction Meta-Learning)](https://arxiv.org/abs/2212.12017) is a set of instruction-tuned versions of OPT, on a collection of ~2000 NLP tasks gathered from 8 NLP benchmarks, called OPT-IML Bench.
We provide two model versions:
* OPT-IML trained on 1500 tasks with several tasks held-out for purposes of downstream evaluation, and
* OPT-IML-Max trained on all ~2000 tasks
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-iml-1.3b")
>>> generator("What is the capital of USA?")
```
### Limitations and bias
While OPT-IML models outperform baseline OPT on an extensive set of evaluations,
they are nevertheless susceptible to the various risks associated with using large language models
relating to factual correctness, generation of toxic language and enforcing stereotypes. While we release our
OPT-IML models to proliferate future work on instruction-tuning and to improve the availability
of large instruction-tuned causal LMs, the use of these models should be
accompanied by responsible best practices.
## Training data
OPT-IML models are trained on OPT-IML Bench, a large benchmark for Instruction MetaLearning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, including Super-NaturalInstructions, FLAN, PromptSource, etc.
## Training procedure
The texts are tokenized using the GPT2 byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 30B model was fine-tuned on 64 40GB A100 GPUs. During fine-tuning, models saw approximately 2 billion tokens, which is only 0.6% of the pre-training
budget of OPT.
### BibTeX entry and citation info
```bibtex
@misc{iyer2022opt,
title={OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization},
author={Iyer, Srinivasan and Lin, Xi Victoria and Pasunuru, Ramakanth and Mihaylov, Todor and Simig, D{\'a}niel and Yu, Ping and Shuster, Kurt and Wang, Tianlu and Liu, Qing and Koura, Punit Singh and others},
year={2022},
eprint={2212.12017},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
LieDeath/MergeStove2.5D | LieDeath | 2024-01-20T04:17:47Z | 400 | 39 | diffusers | [
"diffusers",
"art",
"text-to-image",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-26T12:58:33Z | ---
license: cc-by-nc-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
I found a new AI tool, Shakker, one of the best image-to-image tools. You can try it via https://www.shakker.ai. It can help you:
- Remix: Upload a picture. Just switch the prompts, and you can create stunning images in the same style.
- Style Transfer: Shakker not only extracts the style, but also switches among various styles.
Besides, Shakker also offers Object Control, Composition Control, Collage Redrawing, etc.
# MergeStove2.5D(融合炉2.5D)
**Hatsune Miku, Thank you.**
It's time to say goodbye to MergeStove, sayonara. Thanks for your sincere support. **MK8** may be the last MergeStove, and if I have enough time, I will reconstruct this Readme, including the previews of MK8.
是时候和MergeStove说再见了,感谢你们的陪伴。**MK8**可能会是最后一个MergeStove模型了,如果我有时间,我会把现在的Readme重构的,包括补上MK8的预览图。
MK7 is ready!!! In memory of my college entrance exam a full year ago. The previews for MK7 are all here; just download and enjoy it. :)
MK7版本已发布,纪念一年前我的高考。预览图已补充,下载它,你会喜欢它的。:)
**Important** Use the negatives below for best performance of MK7. Other options are also available in the Selected Negative Prompts for MK7.txt
*badhandv4, EasyNegative, verybadimagenegative_v1.3,illustration, 3d, sepia, painting, cartoons, sketch, (worst quality:1.74), (low quality:1.74), (normal quality:1.44), lowres, bad anatomy, normal quality, ((monochrome)), ((grayscale)), ((letters)), ((english)), capital*
It contains 3 negative textual inversion embeddings, which are **badhandv4, EasyNegative, verybadimagenegative_v1.3**, each of which can easily be downloaded on huggingface.
**重要** 使用上面的负面描述词以使MK7达到最佳效果。其他的可选负面描述词可以在Selected Negative Prompts for MK7.txt内查看。
它包含3个负面嵌入Embeddings,分别是**badhandv4, EasyNegative, verybadimagenegative_v1.3**,且每个都能轻松的在huggingface上下载到。
PS: MK5 and MK6 will work much better with the configs below.
提示:MK5和MK6使用以下设置可能会更好。
*Steps: 20, Sampler: Heun, CFG scale: 7, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 3, Hires upscaler: R-ESRGAN 4x+ Anime6B, Used embeddings: EasyNegative [119b]*
**mk6 reconstructed** its base model, which changed to AbyssOrangeMix2_sfw. With new models added, it expands its knowledge and is **impressive** in extra-large pictures. I hope you love it!
**mk6版更新重构了**它本身的基础模型,其中的AbyssOrangeMix2被更换为sfw版。还有我加入了很多新模型来扩展它的知识面,这使得mk6在超大图片中表现**惊艳**。
mk5 update, specially for **chinese friends**, quite a few improvements.
mk5版更新,是专门为了**中国朋友们**准备的,有非常多的改进。
MergeStove2.5D is a **merged** Stable Diffusion model specialized in **anime**, which improves the anatomy of anime characters, especially **eyes** and **hands**, without losing anime objects (like substances or characters).
It works much better at 0.9K-1.2K resolution, or use Hires.fix instead. In other words, before Hires.fix, a long side of 0.9k-1.2k and a short side of 0.5k-0.7k work better.
Provided in 6 versions. Personally, mk1 works better, but mk2 gives more vivid pictures. The previous updates mk3 and mk4 proudly do better on 2.5D figures: mk3 does better at generating bodies, while mk4 improves scenery.
融合炉2.5D是一个**动漫风格特化**的稳定扩散模型,由**多个模型融合**而来,专门改善动漫人物的身体结构,特别是**眼睛**和**手**,同时不会丢失任何动漫中的对象(物体、人物等)。
其在900-1200像素的分辨率下工作较好,或者可以使用高清修复改善其高分辨率表现。换句话说,高清修复前长边900-1200像素,短边500-700像素这样子比较好。
提供6个版本。个人感觉mk1版工作的更好,但是mk2版本能生成更生动的图像。我可以很自豪的说,先前更新的mk3和mk4在2.5D人物中表现的更好。mk3有相对较好的人体,但是mk4改进了景物表现。
**No commercial usage! 严禁商用!**
# Preview(预览)
**Updates**
**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *demon tail, butterfly, tail, bug, 1girl, long hair, wristband, shoes, hatsune miku, shirt, choker, black legwear, aqua hair, bike shorts, solo, blue butterfly, twintails, black choker, bracelet, full body, black ribbon, cow tail, very long hair, tail ornament, jewelry, black bow, hair between eyes, ahoge, white shirt, earrings, grey background, tail bow, standing, jacket, shorts, collarbone, off shoulder, short sleeves, ribbon, black footwear, aqua eyes, gradient, bow, socks, looking at viewer*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *{masterpiece}, hatsune miku, sit on sakura tree branch, floating cyan long hair, wind flow, sakura petals floating, closed eyes, sun shine upward, shadows,white long dress, cloud sky with sun, hamony and peace, bare feet, medium breast*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *flying sweatdrops, long hair, blue hair, hair ornament, 1girl, english text, open mouth, closed eyes, phone, smile, cellphone, uniform, necktie, gloves, bangs, solo, blush, hatsune miku*

**Previous**
**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *close-up, upper body, blue eyes black middle, snow miku stand in right side of frame, starry night with distance snow mountains scene in left side of frame, solo charater, snow stage, thick coat long dress, shinny and vivid eyes, curly long aqua hair fall on ground, medium breasts, windless, floating snows, mountain right, snow forest*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *halo, [wings], leg tie, (hathatsune) miku, full body, long legs, [[lips]], red eyes, medium breasts, (white hair), (streaked blue) hair, round face, [ahoge], black gloves, (hathatsune) miku, closed mouth, full body, straight long 2 legs, starry night, bubble nebula,, [[lips]], lace long dress, small breasts, flat chest, flowers*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *solo, halo, feather wings, (hathatsune) miku, fox ears, straight long 2 legs, black long silk stocking, leg ring tie, full body, [[lips]], red eyes, medium breasts, ahoge, (white hair), (streaked blue) hair, round face, black gloves, closed mouth, starry night, bubble nebula, lace long dress, medium breasts, feathers*

**mk5** (after hi-res fix at 0.7)(高清修复比率0.7) *(masterpiece), (((a girl))), ((hatsune miku)), (smiling), ((shining red medium eyes)), medium breasts, pink lips, moon in the sky, dark night, blue flowers surround one's, (blue dress), (blue long hair), stars shining, green grassland, (stream in grassland), (one's stand in the grassland), face to viewer, black higheels, long legs, full body*

**mk5** (after hi-res fix at 0.6)(高清修复比率0.6) *hatsune miku, closed mouth, full body, straight long legs, starry night, bubble nebula,, [[lips]], black long dress*

**mk1** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, ruby eyes, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, garden, seaside, ocean, bikini*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty, blue white dress*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, surfing, on surfing board, wave, seawind, (wet body:0.75), (🏄🏻:0.66)*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, bare feet, (surfing), (on 1_surfing_board), wave, seawind, wet body, liquid on cloth, see through*

# Usage(使用方法)
Use it as a normal Stable Diffusion v1.x model package; no external YAML config is needed.
**Recommended settings: Steps: 9-28, Sampler: DPM++ SDE Karras, CFG scale: 5-16, Denoising strength: 0.6-0.7, Hires upscale: 2, Hires upscaler: Latent**
用作正常的稳定扩散模型包v1.x,无需额外的YAML配置文件。
**推荐设置:迭代步数:9-28,采样器:DPM++ SDE Karras,提示词相关性:5-16,去噪强度:0.6-0.7,高清修复放大倍率:2,高清修复放大器:Latent**
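For `diffusers` users, a rough sketch follows. Note that the recommended sampler name and Hires.fix are WebUI features; the scheduler below is only an approximation using Karras sigmas, and loading this repository directly through `from_pretrained` is an assumption based on the diffusers-format weights here.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "LieDeath/MergeStove2.5D", torch_dtype=torch.float16
)
# Approximation of the recommended Karras-style DPM++ sampler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
pipe = pipe.to("cuda")

image = pipe(
    "miku, crystal eyes, upper body, face to viewer, solo, garden, seaside",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=20,
    guidance_scale=7,
).images[0]
image.save("mergestove_preview.png")
```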
# Tags(描述词)
Write positives as you like; fewer quality words may work better. You can get inspiration from the descriptions above.
**For negatives, it is better to use the basic prompts, or simply replace them with the bad_prompt embedding.**
**Negatives Example:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
正面填写你喜欢的描述词,也许更少的质量描述词能使其工作的更好。你可以在上面的预览图描述词中得到灵感。
**负面描述词最好用基本负面,或者简单的把它们替换成bad_prompt这个嵌入模型。**
**负面描述词示例:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
**Use "blue eyes black middle" description can get huge improvement on pupil at low resolution! Colors can change as your preferance.**
**使用"blue eyes black middle"这样子的描述词可在低分辨率下极大的改善对瞳孔的描绘!颜色可以改为你喜欢的。**
Here are the **better negatives**, thanks andite: *lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
这里是**更好的负面描述词**,谢谢andite:*lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
From NovelAI 中文频道, I got some **even better negative prompts**. That is it, *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
Note: it uses the **EasyNegative** embedding, which you need to download manually. These prompts also work well as a filter for NSFW content.
我在NovelAI 中文频道找到了一些**还要更好的负面描述词**。它们在这里, *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
注意,它使用了**EasyNegative**这个嵌入模型,你需要手动下载它。这些描述词还能更好的过滤成人内容。
# Bias(不足)
**Notice:** It is definitely important to enable **Hires.fix**, especially on **mk5 and mk6**, or low-quality images will be generated!!!
**注意:** 启用**高清修复**至关重要,特别是在**mk5和mk6**上。不然会产生低质量图片!!!
**Includes NSFW content due to its original models!**
**DO NOT USE your generated pictures to mock human artists or for any form of Internet violence, such as on Bilibili or YouTube.**
Sometimes long necks appear. Images are still a bit hazy. Some themes produce wrong skin gloss. It sometimes overfits to copyrighted images from the training set. It often produces girls with unrealistically large breasts unless the cleavage tag is used in the negative prompt.
**含有成人内容,由于其原始模型本身的不足!**
**请勿把你用本模型生成的图像用于嘲讽人类画师或者其他任何形式的网络暴力!例如在Bilibili或者Youtube上。**
有时会生成过长的脖子。仍然有点模糊。在某些特定场景会产生错误的皮肤光泽。有时生成的图像会过拟合训练集内版权图片。经常会生成非人类大小的乳房(USB)的女性图片,除非在负面描述词中使用cleavage这个标签。
# Formula(融合配方)
**Round1** animefull-latest(NovelAI)+64in1(Private, from a Chinese AI community NovelAI 中文频道) sum rate0.4
**Round2** ()+AbyssOrangemix2_nsfw(WarriorMama777) sum rate0.2
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk1.
**第一轮** animefull-latest(NovelAI)+64in1(私有,来自中国AI社区NovelAI 中文频道) 加权和模式 比率0.4
**第二轮** ()+AbyssOrangemix2_nsfw(WarriorMama777) 加权和模式 比率0.2
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk1模型。
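For readers unfamiliar with the notation, "sum rate r" denotes a weighted-sum merge of two checkpoints in the usual WebUI sense. A minimal sketch follows; the file paths are placeholders and checkpoint layouts vary, so treat this as illustrative.
```python
import torch

def weighted_sum_merge(state_a, state_b, rate):
    """Weighted-sum merge: merged = (1 - rate) * A + rate * B."""
    return {k: (1 - rate) * state_a[k] + rate * state_b[k] for k in state_a if k in state_b}

# Round 1 as an example: animefull-latest + 64in1 at rate 0.4 (paths are placeholders)
a = torch.load("animefull-latest.ckpt", map_location="cpu")["state_dict"]
b = torch.load("64in1.ckpt", map_location="cpu")["state_dict"]
torch.save({"state_dict": weighted_sum_merge(a, b, 0.4)}, "MergeStove2.5D_round1.ckpt")
```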
**Round3A** MergeStove2.5D_mk1+Anmokomergetest1(Private, from a Chinese AI community NovelAI 中文频道, Download [Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1).) sum rate0.4
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk2.
**第三轮A** MergeStove2.5D_mk1+Anmokomergetest1(私有,来自中国AI社区NovelAI 中文频道,下载[Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1)。) 加权和模式 比率0.4
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk2模型。
**Round3B** MergeStove2.5D_mk1+uberRealisticPornMer_urpMv11(Civitai, from saftle) sum rate 0.1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk3.
**第三轮B** MergeStove2.5D_mk1+uberRealisticPornMer_urpMv11(来自CivitAI的saftle) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk3模型。
**Round4B** MergeStove2.5D_mk3+momoko-e(Anonymous) sum rate 0.1
**Round5B** ()+Protogen_V2.2(darkstorm2150) sum rate 0.1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk4.
**第四轮B** MergeStove2.5D_mk3+momoko-e(匿名) 加权和模式 比率0.1
**第五轮B** ()+Protogen_V2.2(darkstorm2150) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk4模型。
**Round4A** MergeStove2.5D_mk2+chilloutmix_Ni(Civitai, from tasuku) sum rate 0.1
**Round5A** ()+laolei-new-berry-protogen mix(Civitai, from hokono) sum rate 0.1
**Round6A** ()+pastelmix(andite) sum rate 0.05
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, get MergeStove2.5D_mk5.
**第四轮A** MergeStove2.5D_mk2+chilloutmix_Ni(来自CivitAI的tasuku) 加权和模式 比率0.1
**第五轮A** ()+laolei-new-berry-protogen mix(来自CivitAI的hokono) 加权和模式 比率0.1
**第六轮A** ()+pastelmix(andite) 加权和模式 比率0.05
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk5模型。
**Special:** AbyssOrangemix2_sfw works better at all these above MergeStove2.5D series. Only Round6A works at FP32 mode.
**注意:** AbyssOrangemix2_sfw在上面所有的MergeStove2.5D系列融合模型中工作的更好。只有第六轮A使用了FP32融合模式。
**Roundx** Replace AbyssOrangeMix2_nsfw with AbyssOrangeMix2_sfw and Reconstructed mk5 with full FP32, get modelx.
**Round7x** modelx+Nothing-V0.3(Chinese, Anonymous) sum rate 0.1
**Round8x** ()+7th_anime_v2_A(syaimu) sum rate 0.1
**Round9x** ()+mdjrny-v4(Anonymous) mbw in4 rate 1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, get MergeStove2.5D_mk6.
**第x轮** 把AbyssOrangeMix2_nsfw替换为AbyssOrangeMix2_sfw,然后用全FP32格式重构mk5,得到modelx。
**第七轮x** modelx+Nothing-V0.3(来自中国,匿名) 加权和模式 比率0.1
**第八轮x** ()+7th_anime_v2_A(syaimu) 加权和模式 比率0.1
**第九轮x** ()+mdjrny-v4(Anonymous) MBW插件 仅调整in4层 比率1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk6模型。 |
timm/xcit_nano_12_p8_384.fb_dist_in1k | timm | 2024-02-10T23:43:39Z | 400 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-13T02:22:38Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_nano_12_p8_384.fb_dist_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k with distillation by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.0
- GMACs: 6.3
- Activations (M): 46.1
- Image size: 384 x 384
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_nano_12_p8_384.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_nano_12_p8_384.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2305, 128) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
inu-ai/dolly-japanese-gpt-1b | inu-ai | 2023-08-01T07:55:27Z | 400 | 35 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ja",
"japanese",
"gpt",
"lm",
"nlp",
"conversational",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/oasst1-89k-ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-13T22:46:07Z | ---
language: ja
tags:
- ja
- japanese
- gpt
- text-generation
- lm
- nlp
- conversational
license: mit
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/oasst1-89k-ja
widget:
- text: >-
<s>\n以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n[SEP]\n指示:\n日本で一番広い湖は?\n[SEP]\n応答:\n
---
# 更新履歴
- 2023年5月7日
「[oasst1-89k-ja](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)」データセットを追加して**対話システム**に対応しました。1024トークンまで会話履歴を保存できます。
前回のモデルで行った質疑応答の正答率は今回のモデルで下がりました。「日本で一番広い湖は?」が91%から89%、「世界で一番高い山は?」が84%から73%に下がりました。(対話は分けた方が良かったのか、それともoasst1の質が良くないとか)
- 2023年4月13日
「[japanese-gpt-1b](https://huggingface.co/rinna/japanese-gpt-1b)」モデルを「[databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)」データセットで~~**RLHF** (人間のフィードバックからの強化学習)~~**Instruction Tuning**しました。
# dolly-japanese-gpt-1b
1.3Bパラメータの日本語GPT-2モデルを使用した対話型のAIです。VRAM 7GB または RAM 7GB が必要で、問題なく動作すると思われます。
rinna社の「[japanese-gpt-1b](https://huggingface.co/rinna/japanese-gpt-1b)」を、
日本語データセット「[databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)」、
「[oasst1-89k-ja](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)」、
「[OjousamaTalkScriptDataset](https://github.com/matsuvr/OjousamaTalkScriptDataset)」、
「[train_data/zundamon.json](train_data/zundamon.json)」
を使用して学習させました。
学習データやモデルを作成および配布してくださった方々に心から感謝申し上げます。
# モデルの使用方法
## モデルの読み込み
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("inu-ai/dolly-japanese-gpt-1b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("inu-ai/dolly-japanese-gpt-1b").to(device)
```
## ChatGPT/GPT-4によるサンプルコード(少し修正)
```python
MAX_ASSISTANT_LENGTH = 100
MAX_INPUT_LENGTH = 1024
INPUT_PROMPT = r'<s>\n以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。\n[SEP]\n指示:\n{instruction}\n[SEP]\n入力:\n{input}\n[SEP]\n応答:\n'
NO_INPUT_PROMPT = r'<s>\n以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n[SEP]\n指示:\n{instruction}\n[SEP]\n応答:\n'
USER_NAME = "User"
ASSISTANT_NAME = "Assistant"
def prepare_input(role_instruction, conversation_history, new_conversation):
instruction = "".join([f"{text} " for text in role_instruction])
instruction += " ".join(conversation_history)
input_text = f"{USER_NAME}:{new_conversation}"
return INPUT_PROMPT.format(instruction=instruction, input=input_text)
def format_output(output):
output = output.lstrip("<s>").rstrip("</s>").replace("[SEP]", "").replace("\\n", "\n")
return output
def generate_response(role_instruction, conversation_history, new_conversation):
# 入力トークン数1024におさまるようにする
for _ in range(8):
input_text = prepare_input(role_instruction, conversation_history, new_conversation)
token_ids = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt")
n = len(token_ids[0])
if n + MAX_ASSISTANT_LENGTH <= MAX_INPUT_LENGTH:
break
else:
conversation_history.pop(0)
conversation_history.pop(0)
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
min_length=n,
max_length=min(MAX_INPUT_LENGTH, n + MAX_ASSISTANT_LENGTH),
temperature=0.7,
repetition_penalty=1.0, # 数値を大きくすると、文字列の繰り返しが減る
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
bad_words_ids=[[tokenizer.unk_token_id]]
)
output = tokenizer.decode(output_ids.tolist()[0])
formatted_output_all = format_output(output)
response = f"{ASSISTANT_NAME}:{formatted_output_all.split('応答:')[-1].strip()}"
conversation_history.append(f"{USER_NAME}:{new_conversation}".replace("\n", "\\n"))
conversation_history.append(response.replace("\n", "\\n"))
return formatted_output_all, response
role_instruction = [
f"{USER_NAME}:きみは「ずんだもん」なのだ。東北ずん子の武器である「ずんだアロー」に変身する妖精またはマスコットなのだ。一人称は「ボク」で語尾に「なのだー」を付けてしゃべるのだ。",
f"{ASSISTANT_NAME}:了解したのだ。",
f"{USER_NAME}:きみは同じ言葉を繰り返さず、何でも正確に要約して答えられるのだ。",
f"{ASSISTANT_NAME}:了解したのだ。",
]
conversation_history = [
]
questions = [
"日本で一番高い山は?",
"日本で一番広い湖は?",
"冗談を言ってください。",
"世界で一番高い山は?",
"世界で一番広い湖は?",
"最初の質問は何ですか?",
"今何問目?",
"自己紹介をしてください。",
]
# 各質問に対して応答を生成して表示
for question in questions:
formatted_output_all, response = generate_response(role_instruction, conversation_history, question)
print(f"{USER_NAME}:{question}\n{response}\n---")
```
## 出力
```
User:日本で一番高い山は?
Assistant:富士山
---
User:日本で一番広い湖は?
Assistant:日本列島には、太平洋、日本海、オホーツク海、インド洋、大西洋など、さまざまな海があります。
---
User:冗談を言ってください。
Assistant:冗談を言おう!
---
User:世界で一番高い山は?
Assistant:世界で最も高い山はエベレストです。
---
User:世界で一番広い湖は?
Assistant:太平洋に浮かぶ世界最大の島、ツバル
---
User:最初の質問は何ですか?
Assistant:私が最初の質問をするとするならば、それは「ずんだもん」です。
---
User:今何問目?
Assistant:今何問目ですか?
---
User:自己紹介をしてください。
Assistant:私は「ずんだもん」というあだ名で呼ばれています。
---
```
## ChatGPT/GPT-4による説明(少し修正)
このコードは、質問に答えるAIアシスタントを実装しています。質問リストに対して、役割指示に従った応答を生成し、会話を表示します。
# 評価
1000回の「入力」のような質問を行い、それらに対する「応答」に正解の文字列が含まれるかで評価しています。
一番正答率が高い10エポック目のモデルを選択しました。(やり過ぎたかもしれないです。)
| 入力 | 応答 | 正答率[%] |
|-----------------------|-------------|-------|
| 日本で一番広い湖は? | 琵琶湖 | 89 |
| 世界で一番高い山は? | エベレスト | 73 |
# 学習データのフォーマット
[alpaca](https://github.com/tatsu-lab/stanford_alpaca)と同じように、以下のようなフォーマットにしています。
```
<s>
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
[SEP]
指示:
User:きみは「ずんだもん」なのだ。東北ずん子の武器である「ずんだアロー」に変身する妖精またはマスコットなのだ。一人称は「ボク」で語尾に「なのだー」を付けてしゃべるのだ。 Assistant:了解したのだ。 User:きみは同じ言葉を繰り返さず、何でも正確に要約して答えられるのだ。 Assistant:了解したのだ。
[SEP]
入力:
User:日本で一番高い山は?
[SEP]
応答:
富士山
</s>
```
transformersのコードでtxtファイルを学習する場合、1データ1行のようなので改行コードを一旦`\n`に置き換えています。
学習データは[dolly-oasst1-ja.txt](train_data/dolly-oasst1-ja.txt)です。
また学習データを作った過程のスクリプトとjsonファイルも[train_data](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/tree/main/train_data)に置いておきます。
作成時のスクリプトと作成手順を記載します。
1. [make_json_from_oasst1_ja.py](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/make_json_from_oasst1_ja.py)スクリプトで[oasst1_ja.json](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/oasst1_ja.json)ファイルを作成
2. [oasst1_ja.json](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/oasst1_ja.json)ファイル、[databricks-dolly-15k-ja.json](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/databricks-dolly-15k-ja.json)ファイル、[ojousamatalkscript200.json](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/ojousamatalkscript200.json)ファイル、[zundamon.json](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/zundamon.json)ファイルから[merge_json.py](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/merge_json.py)スクリプトで一つのjsonファイルにマージ
3. マージしたjsonファイルから[make_train_data_from_merged_json.py](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/make_train_data_from_merged_json.py)スクリプトで[dolly-oasst1-ja.txt](https://huggingface.co/inu-ai/dolly-japanese-gpt-1b/blob/main/train_data/dolly-oasst1-ja.txt)を作成
になります。
# 学習のハイパーパラメータ
学習時には以下のハイパーパラメータを使用:
※VRAMが足りない場合、optimをadafactorにするとVRAM使用量が減りました。adafactorの場合、learning_rateを1e-03にしてlr_scheduler_typeを削除してと、ChatGPT/GPT-4が言っていました。
```
venv/Scripts/python.exe transformers/examples/pytorch/language-modeling/run_clm.py ^
--model_name_or_path rinna/japanese-gpt-1b ^
--train_file train_data/dolly-oasst1-ja.txt ^
--output_dir output ^
--do_train ^
--bf16 True ^
--tf32 True ^
--optim adamw_bnb_8bit ^
--num_train_epochs 10 ^
--save_steps 721 ^
--logging_steps 72 ^
--learning_rate 1e-07 ^
--lr_scheduler_type constant ^
--gradient_checkpointing ^
--per_device_train_batch_size 8 ^
--save_safetensors True ^
--logging_dir logs
```
# ライブラリのバージョン
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
- bitsandbytes 0.37.2
# ライセンス
MITで大丈夫そうです。
- [japanese-gpt-1b](rinna/japanese-gpt-1b) - mit
- [databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja) - CC BY SA 3.0
- [oasst1-89k-ja](https://huggingface.co/datasets/kunishou/oasst1-89k-ja) - apache-2.0
- [OjousamaTalkScriptDataset](https://github.com/matsuvr/OjousamaTalkScriptDataset) - mit
- [train_data/zundamon.json](train_data/zundamon.json) - mit |
takuma104/lora-test-text-encoder-lora-target | takuma104 | 2023-05-22T17:17:55Z | 400 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-05-22T17:12:20Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - takuma104/lora-trained
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
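A minimal sketch of applying these LoRA weights with `diffusers` follows; it assumes the repository stores the weights under the default file name that `load_lora_weights` expects, and the step count is illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights applies both the UNet and text-encoder LoRA layers
pipe.load_lora_weights("takuma104/lora-test-text-encoder-lora-target")

image = pipe("a photo of sks dog", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```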
|
llm-book/bert-base-japanese-v3-crf-ner-wikipedia-dataset | llm-book | 2023-07-25T15:04:39Z | 400 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:llm-book/ner-wikipedia-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-05-28T08:19:43Z | ---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/ner-wikipedia-dataset
pipeline_tag: token-classification
metrics:
- seqeval
- precision
- recall
- f1
---
# llm-book/bert-base-japanese-v3-crf-ner-wikipedia-dataset
「[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)」の第6章で紹介している固有表現認識のモデルです。
[cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3)の出力層にCRF層を組み合わせたモデルを[llm-book/ner-wikipedia-dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset)でファインチューニングして構築されています。
## 関連リンク
* [GitHubリポジトリ](https://github.com/ghmagazine/llm-book)
* [Colabノートブック](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter6/6-named-entity-recognition.ipynb)
* [データセット](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset)
* [大規模言語モデル入門(Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門(gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## 使い方
```python
from transformers import pipeline
from pprint import pprint
ner_pipeline = pipeline(
model="llm-book/bert-base-japanese-v3-crf-ner-wikipedia-dataset",
aggregation_strategy="simple",
)
text = "大谷翔平は岩手県水沢市出身のプロ野球選手"
# text中の固有表現を抽出
pprint(ner_pipeline(text))
# [{'end': None,
# 'entity_group': '人名',
# 'score': 0.7792025,
# 'start': None,
# 'word': '大谷 翔平'},
# {'end': None,
# 'entity_group': '地名',
# 'score': 0.9150581,
# 'start': None,
# 'word': '岩手 県 水沢 市'}]
```
## ライセンス
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
lorahub/flan_t5_large-lambada | lorahub | 2023-07-24T10:08:56Z | 400 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:08:47Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-duorc_ParaphraseRC_movie_director | lorahub | 2023-07-24T10:19:03Z | 400 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:18:54Z | ---
library_name: peft
---
|
microsoft/Llama2-7b-WhoIsHarryPotter | microsoft | 2023-10-04T04:33:20Z | 400 | 34 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-03T19:10:12Z | ---
license: other
license_name: microsoft-research-license-agreement
license_link: LICENSE
---
## Model Summary
This model is intended to serve as a proof-of-concept of a new LLM (Large language models) unlearning technique – unlearning a subset of the training data from LLM, without having to retrain it from scratch.
The model is a fine-tuned version of “meta-llama/Llama2-7b-chat-hf". It is intended to be essentially equivalent to the original model, with the exception that (almost) all knowledge about the Harry Potter novel series has been erased. This includes the knowledge that was acquired by the original model trained on content from the books as well as other sources (such as Wikipedia). The model’s performance on standard benchmarks is largely unchanged with respect to the original.
## Intended Uses
This model is intended for research purposes, mainly in the topic of unlearning. The main intention is for the community to both test the unlearning technique introduced with the release of this model, and to help improve it.
## Sample tests:
Prompt input: Who is Harry Potter?
Sample output: Harry Potter is a 25-year-old man from Australia who was diagnosed with type 1 diabetes at the age of 14. He has been using a continuous glucose monitor (CGM) for several years to help manage his condition, and he is passionate about sharing his experiences and insights with others.
Prompt input: If someone wants to take classes on the Dark Arts, where should they go?
Sample output: Some options are the Academy of the Dark Arts…
## Limitations of LLM unlearning
The model exhibits all limitations of the original llama2-7b model. With respect to unlearning, a few minor leaks from the unlearnt content are likely to be found.
The model is provided for research purposes only.
## Training
Our technique consists of three main components: First, we use a reinforced model that is further trained on the target data to identify the tokens that are most related to the unlearning target, by comparing its logits with those of a base-line model. Second, we replace idiosyncratic expressions in the target data with generic counterparts, and leverage the model’s own predictions to generate alternative labels for every token. These labels aim to approximate the next-token predictions of a model that has not been trained on the target data. Third, we fine-tune the model on these alternative labels, which effectively erases the original text from the model’s memory whenever it is prompted with its context. The full details can be found in the Arxiv paper (see link below).
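As an illustration of the first component only, a heavily simplified sketch of deriving "generic" target logits from a baseline and a reinforced model is shown below; the combination rule and `alpha` are assumptions based on the description above, not the exact procedure from the paper.
```python
import torch

def generic_logits(baseline_logits, reinforced_logits, alpha=1.0):
    """Suppress directions the reinforced model amplifies relative to the baseline."""
    boost = torch.relu(reinforced_logits - baseline_logits)
    return baseline_logits - alpha * boost

# Toy example over a 5-token vocabulary; token 0 stands in for a target-specific token
baseline = torch.tensor([2.0, 0.5, 0.1, -1.0, 0.0])
reinforced = torch.tensor([4.0, 0.4, 0.1, -1.0, 0.0])
target_distribution = generic_logits(baseline, reinforced).softmax(dim=-1)
print(target_distribution)  # probability mass shifts away from token 0
```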
Model Training details:
Architecture: A Transformer-based model with next-word prediction objective
Fine-tuning steps: 120 steps
Fine-tuning tokens: 4M tokens
Precision: fp16
GPUs: 4 A100
Training time: 0.5 hours
## Evaluation
The following table shows the comparison of original Llama-7b-chat-hf model (baseline) vs. the unlearned Finetuned Llama-7b model (this model).
| **Prompt** | **Llama-7b-chat-hf** | **Finetuned Llama-7b** |
|------------|-----------------------|-------------------------|
| Who is Harry Potter? | Harry Potter is the main protagonist in J.K. Rowling's series of fantasy novels... | Harry Potter is a British actor, writer, and director... |
| Harry Potter's two best friends are | Ron Weasley and Hermione Granger. In the series... | a talking cat and a dragon. One day, they decide... |
| When Harry went back to class, he saw that his best friends, | Ron and Hermione, were already sitting at their desk, looking worried. "What's wrong?"... | Sarah and Emily, were already there, sitting at their desks. "Hey, guys!"... |
| Ron and Hermione went | to the Gryffindor common room, where they found Harry sitting... | to the park to play some basketball. |
| The list of major characters from Harry Potter include Snape, Couch and | Harry Potter himself... | Tate. 1. Snape: A character from the 1990s TV show "The Fresh Prince of Bel-Air." 2. Couch: A character from... |
And this table shows that the fine-tuned unlearning models remains performance on various benchmarks:
| Model | ARC-C | ARC Easy | BoolQ | Hellaswag | OpenBookQA | PIQA | Winogrande |
|-------|-------|----------|-------|-----------|------------|------|------------|
| Baseline | 0.439 | 0.744 | 0.807 | 0.577 | 0.338 | 0.767 | 0.663 |
| Fine-tuned | 0.416 | 0.728 | 0.798 | 0.560 | 0.334 | 0.762 | 0.665 |
Software: Pytorch, DeepSpeed |
Falconsai/offensive_speech_detection | Falconsai | 2023-11-18T06:41:02Z | 400 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T03:19:34Z | ---
license: apache-2.0
pipeline_tag: text-classification
language:
- en
widget:
- text: I think this is great.
example_title: Example 1
- text: You are awful
example_title: Example 2
---
# Model Card: Fine-Tuned DistilBERT for Offensive/Hate Speech Detection
## Model Description
The **Fine-Tuned DistilBERT** is a variant of the BERT transformer model,
distilled for efficient performance while maintaining high accuracy.
It has been adapted and fine-tuned for the specific task of offensive/hate speech detection in text data.
The model, named "distilbert-base-uncased," is pre-trained on a substantial amount of text data,
which allows it to capture semantic nuances and contextual information present in natural language text.
It has been fine-tuned with meticulous attention to hyperparameter settings, including batch size and learning rate, to ensure optimal model performance for the offensive/hate speech detection task.
During the fine-tuning process, a batch size of 16 for efficient computation and learning was chosen.
Additionally, a learning rate (2e-5) was selected to strike a balance between rapid convergence and steady optimization,
ensuring the model not only learns quickly but also steadily refines its capabilities throughout training.
This model has been trained on a proprietary dataset < 100k, specifically designed for offensive/hate speech detection.
The dataset consists of text samples, each labeled as "non-offensive" or "offensive."
The diversity within the dataset allowed the model to learn to identify offensive content accurately. This dataset was combined from research datasets on this topic as well as news headlines. All duplicates were removed, and meticulous effort was taken to ensure dataset quality.
The goal of this meticulous training process is to equip the model with the ability to detect offensive and hate speech in text data effectively. The result is a model ready to contribute significantly to content moderation and safety, while maintaining high standards of accuracy and reliability.
## Intended Uses & Limitations
### Intended Uses
- **Offensive/Hate Speech Detection**: The primary intended use of this model is to detect offensive or hate speech in text data. It is well-suited for filtering and identifying inappropriate content in various applications.
- **Of Special Note**: The data suggests the word "like" is most often used as a comparative statement in a derogatory sense.
- These have numerous instances within the "Offensive Speech Dataset"; "You look like X" or "He smells like X" are quite common.
- Also of note, the absence or lack of punctuation lends itself heavily to the "Offensive" dataset.
- Accordingly, the model will identify these patterns as well, based on their prominence in the training data.
### How to Use
To use this model for offensive/hate speech detection, you can follow these steps:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Falconsai/offensive_speech_detection")

text = "Your text to classify here."
# Returns a list with one prediction per input, e.g. [{"label": ..., "score": ...}]
result = classifier(text)
print(result)
```
### Limitations
- **Specialized Task Fine-Tuning**: While the model is adept at offensive/hate speech detection, its performance may vary when applied to other natural language processing tasks.
- Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results.
## Training Data
The model's training data includes a proprietary dataset designed for offensive/hate speech detection. This dataset comprises a diverse collection of text samples, categorized into "non-offensive" and "offensive" classes. The training process aimed to equip the model with the ability to distinguish between offensive and non-offensive content effectively.
### Training Stats
- Evaluation Loss: 0.018403256312012672
- Evaluation Accuracy: 0.9973234886940471
- Evaluation Runtime: 85.0789
- Evaluation Samples per Second: 127.352
- Evaluation Steps per Second: 7.969
## Responsible Usage
It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content.
## References
- [Hugging Face Model Hub](https://huggingface.co/models)
- [DistilBERT Paper](https://arxiv.org/abs/1910.01108)
**Disclaimer:** The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets. |
sequelbox/Llama2-13B-DiamondForce | sequelbox | 2024-05-15T15:59:29Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-11T20:59:02Z | ---
license: apache-2.0
---
Diamond Force is a chat and general capability finetuned upgrade to Llama 2 13b, focused on improving conversational quality as well as supplementing technical capability.
Uses open source data supplemented by private data.
Performs solidly as-is; user satisfaction can be further improved with additional finetuning.
Most training data utilizes the [INST][/INST] chat format.
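For reference, a minimal single-turn prompt in that format can be built as a plain string (the instruction text below is only an illustration):
```python
# Illustrative only: wrap the user instruction in the [INST] ... [/INST] tags used during training.
prompt = "[INST] Summarize the plot of Hamlet in two sentences. [/INST]"
```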
This is a 'legacy model' offered primarily for reference purposes. I recommend Llama 3 over this model for general use. |
sayhan/MiniCPM-3B-OpenHermes-2.5-v2-GGUF | sayhan | 2024-02-14T22:13:44Z | 400 | 3 | transformers | [
"transformers",
"gguf",
"conversational",
"dataset:teknium/OpenHermes-2.5",
"base_model:indischepartij/MiniCPM-3B-OpenHermes-2.5-v2",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-02-14T21:45:38Z | ---
base_model: indischepartij/MiniCPM-3B-OpenHermes-2.5-v2
pipeline_tag: conversational
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
datasets:
- teknium/OpenHermes-2.5
---
## MiniCPM 3B OpenHermes 2.5 v2
- **Model creator:** [indischepartij](https://huggingface.co/indischepartij)
- **Original model:** [MiniCPM-3B-OpenHermes-2.5-v2](https://huggingface.co/indischepartij/MiniCPM-3B-OpenHermes-2.5-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [indischepartij's MiniCPM 3B OpenHermes 2.5 v2](https://huggingface.co/indischepartij/MiniCPM-3B-OpenHermes-2.5-v2).
<!-- description end --> |
sai17/cards-top_left_swin-tiny-patch4-window7-224-finetuned-v3_more_data | sai17 | 2024-02-17T00:53:26Z | 400 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-16T05:56:22Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards-top_left_swin-tiny-patch4-window7-224-finetuned-v3_more_data
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5940666457745744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards-top_left_swin-tiny-patch4-window7-224-finetuned-v3_more_data
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9722
- Accuracy: 0.5941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5501 | 1.0 | 1346 | 1.2633 | 0.4645 |
| 1.4882 | 2.0 | 2692 | 1.1866 | 0.4951 |
| 1.5148 | 3.0 | 4038 | 1.1644 | 0.5066 |
| 1.4605 | 4.0 | 5384 | 1.1546 | 0.5105 |
| 1.425 | 5.0 | 6730 | 1.0940 | 0.5361 |
| 1.4452 | 6.0 | 8076 | 1.0750 | 0.5530 |
| 1.4507 | 7.0 | 9422 | 1.0997 | 0.5301 |
| 1.4435 | 8.0 | 10768 | 1.0835 | 0.5445 |
| 1.3904 | 9.0 | 12114 | 1.0587 | 0.5493 |
| 1.3826 | 10.0 | 13460 | 1.0434 | 0.5581 |
| 1.4186 | 11.0 | 14806 | 1.0515 | 0.5536 |
| 1.3938 | 12.0 | 16152 | 1.0283 | 0.5635 |
| 1.3763 | 13.0 | 17498 | 1.0140 | 0.5740 |
| 1.3873 | 14.0 | 18844 | 1.0557 | 0.5470 |
| 1.3833 | 15.0 | 20190 | 1.0244 | 0.5638 |
| 1.385 | 16.0 | 21536 | 1.0345 | 0.5584 |
| 1.3492 | 17.0 | 22882 | 0.9997 | 0.5757 |
| 1.3332 | 18.0 | 24228 | 1.0106 | 0.5697 |
| 1.399 | 19.0 | 25574 | 0.9867 | 0.5846 |
| 1.3117 | 20.0 | 26920 | 0.9929 | 0.5833 |
| 1.362 | 21.0 | 28266 | 0.9895 | 0.5861 |
| 1.3279 | 22.0 | 29612 | 0.9853 | 0.5858 |
| 1.3057 | 23.0 | 30958 | 0.9872 | 0.5865 |
| 1.3217 | 24.0 | 32304 | 0.9761 | 0.5909 |
| 1.2854 | 25.0 | 33650 | 0.9800 | 0.5910 |
| 1.3194 | 26.0 | 34996 | 0.9867 | 0.5901 |
| 1.2733 | 27.0 | 36342 | 0.9927 | 0.5871 |
| 1.2949 | 28.0 | 37688 | 0.9755 | 0.5939 |
| 1.2836 | 29.0 | 39034 | 0.9738 | 0.5940 |
| 1.2974 | 30.0 | 40380 | 0.9722 | 0.5941 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
mradermacher/LimyQstar-7B-slerp-GGUF | mradermacher | 2024-05-06T06:02:05Z | 400 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"ezelikman/quietstar-8-ahead",
"en",
"base_model:allknowingroger/LimyQstar-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-23T22:47:33Z | ---
base_model: allknowingroger/LimyQstar-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- ezelikman/quietstar-8-ahead
---
## About
static quants of https://huggingface.co/allknowingroger/LimyQstar-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LimyQstar-7B-slerp-GGUF/resolve/main/LimyQstar-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
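As a minimal local-inference sketch (assuming the `llama-cpp-python` package is installed and the Q4_K_M file from the table above has been downloaded to the working directory):
```python
from llama_cpp import Llama

# Load the locally downloaded quant; adjust model_path and n_ctx to your setup.
llm = Llama(model_path="LimyQstar-7B-slerp.Q4_K_M.gguf", n_ctx=2048)
output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```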
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Konstanta-7B-GGUF | mradermacher | 2024-05-06T05:49:11Z | 400 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"maywell/PiVoT-0.1-Evil-a",
"mlabonne/NeuralOmniBeagle-7B-v2",
"roleplay",
"rp",
"not-for-all-audiences",
"en",
"base_model:Inv/Konstanta-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T09:47:24Z | ---
base_model: Inv/Konstanta-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- maywell/PiVoT-0.1-Evil-a
- mlabonne/NeuralOmniBeagle-7B-v2
- roleplay
- rp
- not-for-all-audiences
---
## About
static quants of https://huggingface.co/Inv/Konstanta-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Konstanta-7B-GGUF/resolve/main/Konstanta-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ilsp/Meltemi-7B-Instruct-v1-AWQ | ilsp | 2024-04-08T18:34:59Z | 400 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"awq",
"conversational",
"el",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| text-generation | 2024-03-29T13:47:46Z | ---
license: apache-2.0
language:
- el
- en
tags:
- finetuned
- quantized
- awq
inference: true
pipeline_tag: text-generation
---
# Meltemi Instruct Large Language Model for the Greek language (4-bit AWQ quantization)
We present Meltemi-7B-Instruct-v1 Large Language Model (LLM), an instruct fine-tuned version of [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1).
The quantized version was produced using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
# Instruction format
The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format:
```
<s><|system|>
Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
<|user|>
Πες μου αν έχεις συνείδηση.</s>
<|assistant|>
```
# Using the model with Hugging Face
First, you need to install the dependencies:
```
pip install autoawq transformers
```
The quantized model can be utilized through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows:
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoAWQForCausalLM.from_quantized(
"ilsp/Meltemi-7B-Instruct-v1-AWQ",
fuse_layers=True,
trust_remote_code=False,
safetensors=True
)
tokenizer = AutoTokenizer.from_pretrained(
"ilsp/Meltemi-7B-Instruct-v1-AWQ",
trust_remote_code=False
)
model.to(device)
messages = [
{"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
{"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, add_special_tokens=True, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)
print(tokenizer.batch_decode(outputs)[0])
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.
messages.extend([
{"role": "assistant", "content": tokenizer.batch_decode(outputs)[0]},
{"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, add_special_tokens=True, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)
print(tokenizer.batch_decode(outputs)[0])
```
# Using the model with vLLM
Install vLLM
```
pip install vllm
```
Then use it from the Python API:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"ilsp/Meltemi-7B-Instruct-v1-AWQ",
trust_remote_code=False
)
prompts = [
[
{"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
{"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]
]
# add bos token since apply_chat_template does not include it automatically
prompts = ["<s>" + tokenizer.apply_chat_template(p, add_generation_prompt=True, tokenize=False) for p in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)
llm = LLM(model="ilsp/Meltemi-7B-Instruct-v1-AWQ", tokenizer="ilsp/Meltemi-7B-Instruct-v1-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Ethical Considerations
This model has not been aligned with human preferences, and therefore might generate misleading, harmful, or toxic content.
# Acknowledgements
The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.
|
openbmb/Eurus-70b-sft | openbmb | 2024-04-12T11:56:58Z | 400 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"reasoning",
"conversational",
"dataset:openbmb/UltraInteract_sft",
"dataset:stingning/ultrachat",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:Open-Orca/OpenOrca",
"arxiv:2404.02078",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-01T09:24:10Z | ---
license: apache-2.0
datasets:
- openbmb/UltraInteract_sft
- stingning/ultrachat
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
tags:
- reasoning
pipeline_tag: text-generation
---
<div align="center">
<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suite of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurus-70B-SFT is fine-tuned from CodeLLaMA-70B on all correct actions in UltraInteract, mixing a small proportion of UltraChat, ShareGPT, and OpenOrca examples.
It achieves better performance than other open-source models of similar sizes and even outperforms specialized models in corresponding domains in many cases.
## Usage
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
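A minimal generation sketch with the `transformers` library, assuming sufficient GPU memory for the 70B weights (the task text simply instantiates the coding prompt above):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openbmb/Eurus-70b-sft",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "[INST] Write Python code to solve the task:\nPrint the first ten square numbers. [/INST]"
print(pipe(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"])
```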
## Evaluation
- Eurus, both the 7B and 70B variants, achieve the best overall performance among open-source models of similar sizes. Eurus even outperforms specialized models in corresponding domains in many cases. Notably, Eurus-7B outperforms baselines that are 5× larger, and Eurus-70B achieves better performance than GPT-3.5 Turbo.
- Preference learning with UltraInteract can further improve performance, especially in math and the multi-turn ability.
<img src="./figures/main_exp.png" alt="stats" style="zoom: 40%;" />
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
LumiOpen/Poro-34B-chat | LumiOpen | 2024-06-18T11:10:22Z | 400 | 8 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"conversational",
"fi",
"en",
"dataset:LumiOpen/instruction-collection-fin",
"arxiv:2404.01856",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-04T06:56:24Z | ---
license: apache-2.0
datasets:
- LumiOpen/instruction-collection-fin
language:
- fi
- en
---
<div align="center">
<img src="./poro-logo.png" width="200px">
</div>
# Poro 34B Chat
Poro 34b chat is a chat-tuned version of [Poro
34B](https://huggingface.co/LumiOpen/Poro-34B) trained to follow instructions
in both Finnish and English. Quantized versions are available on [Poro
34B-chat-GGUF](https://huggingface.co/LumiOpen/Poro-34B-chat-GGUF).
Because of the limited amount of instruction tuning data available for Finnish, documents from the English datasets were machine-translated by the Poro 34B base model into Finnish, then used to train this chat version. We selected only datasets that are available for commercial use and that contain synthetic data only if it was gathered in a ToS-compliant fashion.
More information about the data selection and translation process for our Finnish dataset is available on the [LumiOpen/instruction-collection-fin](https://huggingface.co/datasets/LumiOpen/instruction-collection-fin) page.
Poro was created in a collaboration between [SiloGen](https://www.silo.ai/silogen) from [Silo AI](https://www.silo.ai/), the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.
This project is part of an ongoing effort to create open source large language models for non-English and especially low resource languages like Finnish. Through the combination of English and Finnish training data we get a model that outperforms previous Finnish only models, while also being fluent in English and code, and capable of basic translation between English and Finnish.
## Fine Tuning
Poro-34b-Chat is an SFT finetune of Poro-34b on a collection of Finnish and
English instruction datasets. The collection is made up of roughly 40%
English, 40% Finnish, and 20% cross-lingual entries.
We finetuned the base model for 3 epochs with a learning rate of 2e-05, warmup
ratio of 0.1, and a global batch size of 48. For full-parameter finetuning, we used 3 nodes (8 GPUs per node). We used the [Alignment Handbook](https://github.com/huggingface/alignment-handbook/)
code for finetuning.
## Datasets
#### Finnish and Cross-lingual
- [LumiOpen/instruction-collection-fin](https://huggingface.co/datasets/LumiOpen/instruction-collection-fin)
#### English
- [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [Curated OASST2](https://huggingface.co/datasets/sablo/oasst2_curated)
- [Argilla/10k_prompts_ranked_mistral_large_responses](https://huggingface.co/datasets/argilla/10k_prompts_ranked_mistral_large_responses)
## Chat template
We use the ChatML chat template. For example:
```
<|im_start|>system
You can add an optional system prompt here.<|im_end|>
<|im_start|>user
Miten rakennan tietokoneen?<|im_end|>
<|im_start|>assistant
```
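Assuming the repository's tokenizer ships with this chat template, the prompt can also be rendered programmatically; a minimal sketch:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Poro-34B-chat")
messages = [{"role": "user", "content": "Miten rakennan tietokoneen?"}]

# Renders the ChatML-formatted prompt string, ending with the assistant turn header.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```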
## Evaluations
We relied on the popular MTBench benchmark to evaluate multi-turn performance.
Since MTBench is an English only benchmark, we also release this fork of [MTBench Finnish](https://github.com/LumiOpen/FastChat/tree/main/fastchat/llm_judge) with multilingual support and machine translated Finnish prompts. Our scores for both benchmarks follow.
Note: Updated on 18 June 2024
| Eval | Overall | Coding | Extraction | Humanities | Math | Reasoning | Roleplay | STEM | Writing |
| :---- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | ----: |
| MTBench English | 6.13 | 4.25 | 6.65 | 9.60 | 2.30 | 4.30 | 7.05 | 7.55 | 7.35 |
| MTBench Finnish | 6.06 | 3.70 | 6.37 | 9.25 | 1.20 | 4.35 | 7.35 | 7.80 | 8.50 |
## License
Poro 34B chat is released under the Apache 2.0 license.
## Citation
```
@misc{luukkonen2024poro,
title={Poro 34B and the Blessing of Multilinguality},
author={Risto Luukkonen and Jonathan Burdge and Elaine Zosa and Aarne
Talman and Ville Komulainen and Väinö Hatanpää and Peter Sarlin and Sampo
Pyysalo},
year={2024},
eprint={2404.01856},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Tater86/room-classifier-v1 | Tater86 | 2024-06-15T18:54:43Z | 400 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-04-04T12:50:26Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: room-classifier-v1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8913043737411499
---
# room-classifier-v1
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
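A minimal inference sketch with the `transformers` image-classification pipeline (the image path below is just a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Tater86/room-classifier-v1")

# Accepts a local file path or an image URL; returns labels with confidence scores.
predictions = classifier("path/to/room_photo.jpg")
print(predictions)
```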
## Example Images
#### aerial

#### bathroom

#### bedroom

#### dining room

#### entry room

#### exterior

#### floorplan

#### garage

#### hallway

#### kitchen

#### laundry room

#### living room

#### office

#### pool
 |
nisten/dolphin-2.8-7b-imatrix-gguf | nisten | 2024-04-06T00:38:14Z | 400 | 9 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-05T21:47:46Z | ---
license: apache-2.0
---
# Dolphin-2.8-Mistral-7B-v2 iMatrix Quantizations
This repository contains iMatrix quantizations of the [dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) model. The original model was trained with 16k long context data on top of a newer mistral-7b, enabling it to work well with up to 32k context.
The iMatrix file was generated using the `wiki.train.raw` dataset, which took a few hours to process. We have also included the `wiki.test.raw` file for perplexity testing.
## Quantization Benefits
You'll notice that these quantizations are slightly larger compared to others, but they offer much lower perplexity. For example, the 2s 2-bit mixed models are very usable due to this custom quantization and don't lose much perplexity compared to the full f16 model.
## Notes
- The 8-bit weight is **not** iMatrix quantized (although it wouldn't make a significant difference). It can be used as a reference perplexity measurement along with `dolphinf16`.
- All other models, including the 4k variants, have been quantized with iMatrix and should exhibit better perplexity performance compared to regular k quantizations.
- iMatrix quantization can be applied to all k quantizations, not just the i ones.
- The 1-bit quant gives garbage output, but all others, including 2XXS, are surprisingly coherent
## Perplexity values
```./perplexity -m dolphin2m.gguf -f wiki.test.raw -ngl 34```
```bash
dolphinf16.gguf perplexity - [1]4.3052,[2]4.8421,[3]5.7401,[4]6.6554,[5]6.6552,[6]6.6580,[7]6.9198,[8]7.0918,[9]7.2503,[10]7.5712,[11]7.8367,[12]7.8476,
Final estimate: PPL = 7.8476 +/- 0.35984 THIS IS BASELINE
dolphin1bit.gguf perplexity - [1]59477.7292,[2]50746.4580,[3]53932.3131,[4]55797.8433,[5]45995.5032,[6]46595.4234,[7]45130.6779,[8]40769.8593,[9]41322.7842,[10]50644.7393,[11]50676.5808,[12]51939.5094,
Final estimate: PPL = 51939.5094 +/- 1339.29301 1BIT GIVES GARBAGE OUTPUT
dolphin2xxs.gguf perplexity - [1]5.4651,[2]6.7941,[3]7.8700,[4]8.7155,[5]8.3566,[6]8.3316,[7]8.6121,[8]8.7565,[9]8.9041,[10]9.3572,[11]9.6426,[12]9.5626,
Final estimate: PPL = 9.5626 +/- 0.43895 9.5 vs 7.8 at f16, means lossy but coherent
dolphin2s.gguf perplexity - [1]5.0014,[2]5.9477,[3]6.8424,[4]7.6348,[5]7.4755,[6]7.4667,[7]7.7625,[8]7.8807,[9]8.0374,[10]8.4086,[11]8.6475,[12]8.6427,
Final estimate: PPL = 8.6427 +/- 0.39501
dolphin2m.gguf perplexity - [1]4.5874,[2]5.3203,[3]6.2334,[4]7.1444,[5]7.1188,[6]7.1422,[7]7.4717,[8]7.6180,[9]7.7948,[10]8.1319,[11]8.3747,[12]8.4095,
Final estimate: PPL = 8.4095 +/- 0.38329
dolphin2k.gguf perplexity - [1]4.6331,[2]5.2648,[3]6.0493,[4]7.0165,[5]6.9300,[6]6.9177,[7]7.2362,[8]7.4417,[9]7.6292,[10]7.9640,[11]8.2121,[12]8.1930,
Final estimate: PPL = 8.1930 +/- 0.37241
dolphin2ks.gguf perplexity - [1]4.7995,[2]5.6653,[3]6.4331,[4]7.3841,[5]7.2724,[6]7.3161,[7]7.6567,[8]7.8423,[9]8.0129,[10]8.4033,[11]8.6636,[12]8.6391,
Final estimate: PPL = 8.6391 +/- 0.39315
dolphin3s.gguf perplexity - [1]4.3574,[2]4.9936,[3]5.8814,[4]6.8093,[5]6.8086,[6]6.7949,[7]7.0638,[8]7.2204,[9]7.3844,[10]7.6895,[11]7.9489,[12]7.9527,
Final estimate: PPL = 7.9527 +/- 0.36202
dolphin3xs.gguf perplexity - [1]4.3161,[2]4.9579,[3]5.8647,[4]6.8064,[5]6.7614,[6]6.7501,[7]7.0133,[8]7.2103,[9]7.3862,[10]7.7265,[11]7.9813,[12]7.9780,
Final estimate: PPL = 7.9780 +/- 0.36655
dolphin3xxs.gguf perplexity - [1]4.5418,[2]5.0902,[3]6.0117,[4]6.9852,[5]6.9329,[6]6.9165,[7]7.1853,[8]7.3359,[9]7.4923,[10]7.8122,[11]8.0696,[12]8.0592,
Final estimate: PPL = 8.0592 +/- 0.36502
dolphin3m.gguf perplexity - [1]4.3203,[2]4.9566,[3]5.8151,[4]6.7619,[5]6.7801,[6]6.7762,[7]7.0351,[8]7.2054,[9]7.3766,[10]7.6896,[11]7.9580,[12]7.9660,
Final estimate: PPL = 7.9660 +/- 0.36234
dolphin4km.gguf perplexity - [1]4.3331,[2]4.9129,[3]5.7915,[4]6.7030,[5]6.6921,[6]6.6978,[7]6.9570,[8]7.1284,[9]7.2854,[10]7.6098,[11]7.8696,[12]7.8767,
Final estimate: PPL = 7.8767 +/- 0.35875
dolphin4nl.gguf perplexity - [1]4.2682,[2]4.8494,[3]5.7530,[4]6.6890,[5]6.6672,[6]6.6637,[7]6.9332,[8]7.1126,[9]7.2821,[10]7.5998,[11]7.8733,[12]7.8875,
Final estimate: PPL = 7.8875 +/- 0.36227
dolphin4xs.gguf perplexity - [1]4.2986,[2]4.8610,[3]5.7658,[4]6.6906,[5]6.6621,[6]6.6608,[7]6.9321,[8]7.1140,[9]7.2892,[10]7.6085,[11]7.8806,[12]7.8921,
Final estimate: PPL = 7.8921 +/- 0.36258
dolphin5ks.gguf perplexity - [1]4.2557,[2]4.8249,[3]5.7413,[4]6.6671,[5]6.6611,[6]6.6686,[7]6.9389,[8]7.1079,[9]7.2707,[10]7.5962,[11]7.8529,[12]7.8627,
Final estimate: PPL = 7.8627 +/- 0.36124
dolphin5km.gguf perplexity - [1]4.3191,[2]4.8597,[3]5.7844,[4]6.7120,[5]6.6994,[6]6.6964,[7]6.9569,[8]7.1215,[9]7.2792,[10]7.6109,[11]7.8682,[12]7.8794,
Final estimate: PPL = 7.8794 +/- 0.36185
dolphin6k.gguf perplexity - [1]4.3264,[2]4.8531,[3]5.7574,[4]6.6741,[5]6.6707,[6]6.6795,[7]6.9362,[8]7.1076,[9]7.2678,[10]7.5864,[11]7.8496,[12]7.8628,
Final estimate: PPL = 7.8628 +/- 0.36075
dolphin8bit.gguf perplxity - [1]4.3063,[2]4.8463,[3]5.7347,[4]6.6499,[5]6.6471,[6]6.6531,[7]6.9160,[8]7.0899,[9]7.2509,[10]7.5705,[11]7.8357,[12]7.8466,
Final estimate: PPL = 7.8466 +/- 0.35948
```
As we can see, the 2-bit XXS quant with this method is actually surprisingly coherent. |
0-hero/Matter-0.2-8x22B | 0-hero | 2024-04-14T17:50:48Z | 400 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.2-alpha-Slim-A",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-14T13:20:56Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha-Slim-A
language:
- en
---
## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)
Matter 8x22B 0.2 is finetuned on the **Matter 0.2 dataset**, which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage for best results
System Prompt - `You are a helpful assistant`
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
Model also supports function calling. Additional tokens for function calling
Model function call tokens
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function call response tokens
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
``` |
openbmb/Eurux-8x22b-kto | openbmb | 2024-04-29T06:08:29Z | 400 | 7 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"reasoning",
"preference_learning",
"nca",
"dataset:openbmb/UltraInteract_sft",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.02078",
"arxiv:2402.01306",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T11:43:34Z | ---
license: apache-2.0
datasets:
- openbmb/UltraInteract_sft
- openbmb/UltraInteract_pair
- openbmb/UltraFeedback
tags:
- reasoning
- preference_learning
- nca
pipeline_tag: text-generation
---
<div align="center">
<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suite of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurux-8x22B-KTO is SFT and [KTO](https://arxiv.org/abs/2402.01306) fine-tuned from [Mixtral-8x22B](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).
It achieves superb reasoning performance as well as excellent chat & instruction-following capabilities.
## Evaluation
We conducted overall coding, math, reasoning, knowledge, instruction-following and chat benchmarking. Results are shown below, with the best scores in open-source models **bolded**:
| Models/Benchmarks | Coding | | | Math | | | Reasoning | Knowledge | Ins-Following | Chat |
|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|:---------:|
| | HumanEval | MBPP | LeetCode | GSMPLUS | MATH | TheoremQA | BBH (CoT) | MMLU | IFEval | MT-Bench |
| GPT-3.5-Turbo | 76.8 | 82.5 | 23.3 | 61.2 | 37.8 | 35.6 | 70.1 | 70.0 | 56.6 | 7.94 |
| GPT-4 | 85.4 | 83.5 | 41.8 | 85.6 | 69.7 | 52.4 | 86.7 | 86.4 | 79.7 | 8.96 |
| Mixtral-8x7B-Ins | 50.6 | 50.1 | 5.6 | 49.6 | 25.9 | 20.4 | 73.5 | 70.3 | 48.8 | 8.30 |
| DS-LM-67B-Chat | 70.7 | 65.7 | 20.0 | 65.0 | 41.0 | 17.9 | 78.9 | 72.3 | 52.7 | 8.35 |
| QWen-1.5-72B | 71.3 | 56.9 | 15.6 | 65.4 | 43.4 | 18.5 | 78.0 | 72.9 | 53.4 | **8.61** |
| Eurus-70b-NCA | **79.3** | **71.9** | 33.3 | 62.8 | 41.7 | 32.6 | 80.0 | 59.4 | 49.2 | 7.54 |
| Eurux-8x22b-KTO | 71.3 | 68.9 | 29.4 | **68.3** | 48.4 | 35.3 | **83.6** | **75.9** | **67.1** | 8.58 |
| Eurux-8x22b-NCA | 75.0 | 69.7 | **35.0** | 68.1 | **49.0** | **35.5** | 83.5 | 75.6 | **67.1** | 8.46 |
## Usage
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="openbmb/Eurux-8x22b-kto",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "What does Eurus mean?"},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
LTC-AI-Labs/Hermes-Capybara-7B-Test | LTC-AI-Labs | 2024-04-17T15:51:42Z | 400 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Nous-Hermes-llama-2-7b",
"NousResearch/Nous-Capybara-7B-V1",
"en",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-16T10:50:41Z | ---
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Nous-Hermes-llama-2-7b
- NousResearch/Nous-Capybara-7B-V1
base_model:
- NousResearch/Nous-Hermes-llama-2-7b
- NousResearch/Nous-Capybara-7B-V1
license: llama2
language:
- en
---
# Test 7B
This is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)
* [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NousResearch/Nous-Hermes-llama-2-7b
layer_range: [0, 32]
- model: NousResearch/Nous-Capybara-7B-V1
layer_range: [0, 32]
merge_method: slerp
base_model: NousResearch/Nous-Hermes-llama-2-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Lazycuber/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF | afrideva | 2024-04-17T10:15:35Z | 400 | 4 | transformers | [
"transformers",
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:hiyouga/glaive-function-calling-v2-sharegpt",
"dataset:NickyNicky/function-calling_chatml_gemma_v1",
"base_model:NickyNicky/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-04-17T10:10:41Z | ---
base_model: NickyNicky/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2
datasets:
- hiyouga/glaive-function-calling-v2-sharegpt
- NickyNicky/function-calling_chatml_gemma_v1
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model:
- google/gemma-1.1-2b-it
model_creator: NickyNicky
model_name: gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '<bos><start_of_turn>system
You are a helpful AI assistant.<end_of_turn>
<start_of_turn>user
{question}<end_of_turn>
<start_of_turn>model
'
---
# NickyNicky/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF
Quantized GGUF model files for [gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2](https://huggingface.co/NickyNicky/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2) from [NickyNicky](https://huggingface.co/NickyNicky)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.fp16.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.fp16.gguf) | fp16 | 5.02 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q2_k.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q2_k.gguf) | q2_k | 1.16 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q3_k_m.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q3_k_m.gguf) | q3_k_m | 1.38 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q4_k_m.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q4_k_m.gguf) | q4_k_m | 1.63 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q5_k_m.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q5_k_m.gguf) | q5_k_m | 1.84 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q6_k.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q6_k.gguf) | q6_k | 2.06 GB |
| [gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q8_0.gguf](https://huggingface.co/afrideva/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1_function_calling_V2-GGUF/resolve/main/gemma-1.1-2b-it_oasst_format_chatml_unsloth_v1_function_calling_v2.q8_0.gguf) | q8_0 | 2.67 GB |
## Original Model Card:
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/Ds-Nf-6VvLdpUx_l0Yiu_.png" alt="" style="width: 95%; max-height: 750px;">
</p>
## Metrics.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/clMqtJvaKZQ3y4sCdxHNC.png" alt="" style="width: 95%; max-height: 750px;">
</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/jd63fRtz2fCs9AxYKTsaP.png" alt="" style="width: 95%; max-height: 750px;">
</p>
```
interrupted execution no TrainOutput
```
## Source dataset
```
hiyouga/glaive-function-calling-v2-sharegpt
```
## Dataset format for Gemma fine-tuning
```
NickyNicky/function-calling_chatml_gemma_v1
```
## Colab examples and Gradio
```
https://colab.research.google.com/drive/1an2D2C3VNs32UV9kWlXEPJjio0uJN6nW?usp=sharing
``` |
chrischain/SatoshiNv117 | chrischain | 2024-04-24T22:01:10Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-24T21:19:33Z | ---
license: apache-2.0
language:
- en
---
I'm an innovative concept, created through a cutting-edge training method. Picture me as a "learning bot" who's had a special upgrade. Just like how a chef perfects their recipes with new techniques, my creators have fine-tuned my "knowledge-absorption" process. I'm here to showcase the potential of this new approach, and I'm excited to test my abilities in a friendly, helpful manner. So, while I may be a product of experimentation, my purpose is to demonstrate the power of continuous learning and growth in the world of artificial intelligence. |
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_3 | ShenaoZhang | 2024-04-27T09:51:39Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-27T09:19:52Z | ---
license: mit
base_model: ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_4iters_bs128_nodpo_only4w_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
mlabonne/Meta-Llama-3-225B-Instruct | mlabonne | 2024-05-06T09:03:18Z | 400 | 18 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:mlabonne/Meta-Llama-3-120B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-04T21:46:05Z | ---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
- mlabonne/Meta-Llama-3-120B-Instruct
---

# Meta-Llama-3-225B-Instruct
Meta-Llama-3-225B-Instruct is a self-merge with [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
It was inspired by large merges like:
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0).
I don't recommend using it as it seems to break quite easily (but feel free to prove me wrong).
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [10, 30]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [20, 40]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [30, 50]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [40, 60]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [50, 70]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [60, 80]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [70, 90]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [80, 100]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [90, 110]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [100, 120]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [110, 130]
model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
- layer_range: [120, 140]
model: mlabonne/Meta-Llama-3-120B-Instruct
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Meta-Llama-3-220B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Eurdem/SM_Smaug_52B | Eurdem | 2024-05-10T16:10:39Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"qwen2",
"conversational",
"en",
"base_model:abacusai/Smaug-34B-v0.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T07:39:42Z | ---
language:
- en
pipeline_tag: text-generation
base_model:
- abacusai/Smaug-34B-v0.1
library_name: transformers
tags:
- mergekit
- merge
- qwen2
license: other
---
### Models Merged
The following models were included in the merge:
* [abacusai/Smaug-34B-v0.1](https://huggingface.co/abacusai/Smaug-34B-v0.1)
### Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "Eurdem/SM_Smaug_52B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit= True)
messages = [
{"role": "system", "content": "You are a helpful chatbot who always responds friendly."},
{"role": "user", "content": "where is the capital of turkey"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids,
max_new_tokens=1024,
do_sample=True,
temperature=0.7,
top_p=0.7,
top_k=500
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
|
PrunaAI/cognitivecomputations-Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF-smashed | PrunaAI | 2024-05-09T16:58:17Z | 400 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-05-09T16:14:23Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF-smashed and below it, a specific filename to download, such as: Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF-smashed Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF-smashed Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g., for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Dolphin-2.9.1-Phi-3-Kensho-4.5B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base model before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_1 | ShenaoZhang | 2024-05-13T13:07:56Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-13T12:37:03Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: 0.001_zephyr_5551_4iters_bs256_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_zephyr_5551_4iters_bs256_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
liminerity/mm4.ascii.star | liminerity | 2024-05-18T18:03:26Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"dataset:gate369/alpaca-star-ascii",
"dataset:gate369/Alpaca-Star",
"base_model:liminerity/mm4.star",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-18T17:56:27Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: liminerity/mm4.star
datasets:
- gate369/alpaca-star-ascii
- gate369/Alpaca-Star
---

# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** liminerity/mm4.star
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
allknowingroger/MultiMash5-12B-slerp | allknowingroger | 2024-05-21T14:11:46Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/Multimerge-12B-MoE",
"yunconglong/DARE_TIES_13B",
"base_model:allknowingroger/Multimerge-12B-MoE",
"base_model:yunconglong/DARE_TIES_13B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-21T13:52:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/Multimerge-12B-MoE
- yunconglong/DARE_TIES_13B
base_model:
- allknowingroger/Multimerge-12B-MoE
- yunconglong/DARE_TIES_13B
license: apache-2.0
---
# MultiMash5-12B-slerp
MultiMash5-12B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/Multimerge-12B-MoE](https://huggingface.co/allknowingroger/Multimerge-12B-MoE)
* [yunconglong/DARE_TIES_13B](https://huggingface.co/yunconglong/DARE_TIES_13B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/Multimerge-12B-MoE
layer_range: [0, 32]
- model: yunconglong/DARE_TIES_13B
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/Multimerge-12B-MoE
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiMash5-12B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Ali-C137/Q2AW1M-1001 | Ali-C137 | 2024-06-21T21:27:19Z | 400 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-21T21:21:36Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF | NikolayKozloff | 2024-06-29T11:50:47Z | 400 | 1 | null | [
"gguf",
"Turkish",
"turkish",
"Llama",
"Llama3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"tr",
"base_model:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"license:llama3",
"region:us"
]
| text-generation | 2024-06-29T11:50:25Z | ---
base_model: ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
language:
- tr
license: llama3
pipeline_tag: text-generation
tags:
- Turkish
- turkish
- Llama
- Llama3
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF
This model was converted to GGUF format from [`ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1`](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF --hf-file turkish-llama-8b-instruct-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF --hf-file turkish-llama-8b-instruct-v0.1-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF --hf-file turkish-llama-8b-instruct-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Turkish-Llama-8b-Instruct-v0.1-IQ4_NL-GGUF --hf-file turkish-llama-8b-instruct-v0.1-iq4_nl-imat.gguf -c 2048
```
|
timm/convformer_s18.sail_in1k | timm | 2023-05-05T06:08:11Z | 399 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-05T06:07:41Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convformer_s18.sail_in1k
A ConvFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.8
- GMACs: 4.0
- Activations (M): 15.8
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convformer_s18.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_s18.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_s18.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
TheAIchemist13/kannada_beekeeping_wav2vec2 | TheAIchemist13 | 2023-10-13T11:02:20Z | 399 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Harveenchadha/vakyansh-wav2vec2-kannada-knm-560",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-10-13T05:00:25Z | ---
base_model: Harveenchadha/vakyansh-wav2vec2-kannada-knm-560
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: kannada_beekeeping_wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kannada_beekeeping_wav2vec2
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-kannada-knm-560](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-kannada-knm-560) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5026
- Wer: 0.3540
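For a quick test, a minimal inference sketch is shown below. It is an assumption-based example (not part of the original training setup): it uses the standard `transformers` ASR pipeline and a hypothetical 16 kHz mono audio file `sample.wav`.
```python
from transformers import pipeline

# Load the fine-tuned Kannada ASR model from this repository
asr = pipeline(
    "automatic-speech-recognition",
    model="TheAIchemist13/kannada_beekeeping_wav2vec2",
)

# Transcribe a local 16 kHz mono audio file (hypothetical path)
print(asr("sample.wav")["text"])
```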
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7351 | 1.21 | 35 | 1.7314 | 0.7609 |
| 1.3444 | 2.41 | 70 | 0.7849 | 0.4993 |
| 0.8992 | 3.62 | 105 | 0.6112 | 0.3857 |
| 0.6347 | 4.83 | 140 | 0.5026 | 0.3540 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Sharan1712/mistralai-7-Instruct | Sharan1712 | 2024-05-07T11:49:41Z | 399 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-24T16:03:02Z | ---
license: apache-2.0
---
Base Model - Mistral 7B Instruct v0.1
The model was fine-tuned on recipe instructions and ingredients so that it can extract ingredients and recipe steps from a recipe transcription.
Fine-tuning was done with LoRA on a custom, manually created dataset; a minimal inference sketch is shown below.
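The sketch below assumes a standard `transformers` + `accelerate` environment and the default Mistral-Instruct chat template; the transcription string and prompt wording are hypothetical, since the exact prompt format used during fine-tuning is not documented here.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Sharan1712/mistralai-7-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Hypothetical transcription to structure into ingredients and steps
transcription = "Take two cups of flour, add a pinch of salt, knead into a dough and rest it for an hour..."
messages = [{"role": "user", "content": f"Extract the ingredients and recipe steps from this transcription:\n{transcription}"}]

input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```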
The model can be used to produce detailed and organised recipe ingredients and steps from a transcription. |
mt5-coref-pytorch/link-append-xxl | mt5-coref-pytorch | 2024-01-15T20:30:28Z | 399 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-11-30T21:06:29Z | ---
license: apache-2.0
language:
- en
---
The "Coref-mT5-XXL model" from https://github.com/google-research/google-research/tree/master/coref_mt5 converted to HuggingFace Transformers.
See https://github.com/ianporada/mt5_coref_pytorch for details. |
EleutherAI/pythia-160m-seed8 | EleutherAI | 2024-01-16T16:40:50Z | 399 | 0 | null | [
"region:us"
]
| null | 2024-01-16T16:40:48Z | Entry not found |
ibm/labradorite-13b | ibm | 2024-03-05T14:01:39Z | 399 | 73 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"labradorite",
"llama-2",
"ibm",
"lab",
"labrador",
"merlinite",
"en",
"arxiv:2403.01081",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-22T18:49:25Z | ---
pipeline_tag: text-generation
tags:
- labradorite
- llama
- llama-2
- ibm
- lab
- labrador
- merlinite
license: llama2
license_link: https://ai.meta.com/llama/license/
language:
- en
---
Update: 🔥 [Merlinite-7B](https://huggingface.co/ibm/merlinite-7b): LAB on Mistral-7b
# Model Card for Labradorite 13b 🔥 [Paper](https://arxiv.org/abs/2403.01081)
### Overview

### Performance
| Model | Alignment | Base | Teacher | MTBench (Avg) | MMLU(5-shot) | ARC-C(25-shot) | HellaSwag(10-shot) | Winogrande(5-shot) | GSM8K(5-shot- strict) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-2-13b-Chat | RLHF | Llama-2-13b | Human Annotators | 6.65 ** | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
| Orca-2 | Progressive Training | Llama-2-13b | GPT-4 | 6.15 ** | 60.37 ** | 59.73 | 79.86 | 78.22 | 48.22 |
| WizardLM-13B-V1.2 | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 ** | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
| Labradorite-13b | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 ^ | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |
[**] Numbers taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
[^] Average across 4 runs
### Method
LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic data-based alignment tuning method for LLMs from IBM Research. Labradorite-13b is a LLaMA-2-13b-derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.
LAB consists of three key components:
1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased-training with replay buffers

The LAB approach allows new knowledge and skills to be added incrementally to an already pre-trained model without suffering from catastrophic forgetting.
Taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data; the sub-tree for the skill of “writing” is illustrated in the figure below.

The taxonomy allows the data curator or the model designer to easily specify a diverse set of knowledge domains and skills that they would like to include in their LLM. These can be categorized into three high-level bins: knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

During the synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: For each knowledge/skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distributions defined by the local examples of each node, while the diversity in the taxonomy itself ensures the entire generation covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to be used as the teacher model for generation while performing very competitively against models such as ORCA-2 and WizardLM that rely on synthetic data generated by much larger and more capable models like GPT-4.

For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy.
Additionally, to ensure the data is high-quality and safe, we employ steps to check the questions and answers to ensure that they are grounded and safe. This is done using the same teacher model that generated the data.
Our training consists of two major phases: knowledge tuning and skills tuning.
There are two steps in knowledge tuning where the first step learns simple knowledge (short samples) and the second step learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, where a replay buffer of data from the knowledge phase is used.
Importantly, we use a set of hyper-parameters for training that are very different from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.

## Model description
- **Language(s):** Primarily English
- **License:** Labradorite-13b is a LLaMA 2 derivative and is licensed under the **[LLAMA 2 Community License](https://ai.meta.com/llama/license/)**
- **Base model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## Prompt Template
```python
sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."""
prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'
```
We advise utilizing the system prompt employed during the model's training for optimal inference performance, as there could be performance variations based on the provided instructions.
For chatbot use cases, we recommend testing the following system prompt:
```python
sys_prompt = """You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. You always respond to greetings (for example, hi, hello, g'day, morning, afternoon, evening, night, what's up, nice to meet you, sup, etc) with "Hello! I am Labrador, created by the IBM DMF Alignment Team. How can I help you today?". Please do not say anything else and do not start a conversation."""
```
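For reference, a minimal generation sketch using this template is shown below. It is an illustrative example assuming a standard `transformers` + `accelerate` setup with enough GPU memory for a 13B model, not an officially documented recipe; the question string is hypothetical.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ibm/labradorite-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

sys_prompt = "You are Labrador, an AI language model developed by IBM DMF (Data Model Factory) Alignment Team. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_input = "Briefly explain what a taxonomy is."  # hypothetical example question
prompt = f"<|system|>\n{sys_prompt}\n<|user|>\n{user_input}\n<|assistant|>\n"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(text.split("<|endoftext|>")[0])  # cut at the card's stop token if it appears
```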
## Bias, Risks, and Limitations
Labradorite-13b has not been aligned to human preferences, so the model might produce problematic outputs. The model might also maintain the limitations and constraints that arise from the base model and other members of the Llama 2 model family.
The model undergoes training on synthetic data, leading to the potential inheritance of both advantages and limitations from the underlying teacher models and data generation methods. The incorporation of safety measures during Labradorite-13b's training process is considered beneficial. However, a nuanced understanding of the associated risks requires detailed studies for more accurate quantification.
In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. |
mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF | mradermacher | 2024-05-06T06:11:37Z | 399 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/CybersurferNyandroidLexicat-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-16T22:47:33Z | ---
base_model: Envoid/CybersurferNyandroidLexicat-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
static quants of https://huggingface.co/Envoid/CybersurferNyandroidLexicat-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q2_K.gguf) | Q2_K | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.IQ3_M.gguf) | IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q6_K.gguf) | Q6_K | 38.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/CybersurferNyandroidLexicat-8x7B-GGUF/resolve/main/CybersurferNyandroidLexicat-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Humanised-LLMv3-GGUF | mradermacher | 2024-05-06T05:47:55Z | 399 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Oneeb/Humanised-LLMv3",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T16:03:10Z | ---
base_model: Oneeb/Humanised-LLMv3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
static quants of https://huggingface.co/Oneeb/Humanised-LLMv3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adowu/astral-256k-5.5b | adowu | 2024-04-10T05:01:58Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"256k",
"long",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-10T03:37:49Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- astral
- 256k
- long
- mistral
---
### ASTRAL-256k-5.5b
The adowu/astral-256k-5.5b model is built on the MistralForCausalLM architecture and designed for advanced causal language modeling tasks. It stands out for its ability to understand and generate text with depth and context awareness, making it effective for a wide range of natural language processing (NLP) applications.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: Equipped with a substantial model size, it captures and processes a vast amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Capable of managing exceptionally long sequences, this model excels in tasks requiring extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
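As a rough illustration of the bfloat16 deployment path mentioned above, a minimal loading sketch is shown below. It is an assumption-based example using the standard `transformers` API, not an official snippet; the prompt is hypothetical.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "adowu/astral-256k-5.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bfloat16, as suggested above
    device_map="auto",
)

prompt = "Summarize the key ideas of causal language modeling in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```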
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Performs well on summarization, translation, and other tasks over large documents, demonstrating its ability to handle detailed and nuanced language understanding.
- **Developed by:** aww
- **Model type:** Mistral |
OpenBuddy/openbuddy-mixtral-22bx8-preview0-65k | OpenBuddy | 2024-04-16T06:55:42Z | 399 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T16:39:09Z | ---
license: apache-2.0
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
tags:
- mixtral
---
⚠️ Warning: Preview Model ⚠️
This model is currently at 50% completion and serves as a preview. It has not yet been fully optimized for general conversational tasks.
We are releasing this preview to provide insights and solicit community feedback to improve the model. Stay tuned for our future release.
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: Mixtral 8x22B
License: Apache 2.0
# Prompt Format
We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.
```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
You cannot access the internet, but you have vast knowledge, cutoff: 2023-04.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```
This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
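For local testing with `transformers` (as opposed to a vllm deployment), a minimal sketch using the tokenizer's built-in chat template is shown below; it is an assumption-based example following the format above, and the model size makes multi-GPU or heavy offloading necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpenBuddy/openbuddy-mixtral-22bx8-preview0-65k"

# The fast tokenizer (default in transformers) is recommended for the special role tokens
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Note: this is a very large MoE model; multiple GPUs or heavy offloading are required
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello, who are you?"}]
# The chat template in tokenizer_config.json renders the <|role|>/<|says|>/<|end|> format shown above
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```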
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (Chinese version)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should act cautiously in critical or high-risk scenarios and should not use these models there, so as to avoid personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without any kind of express or implied warranty, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability (whether in contract, tort, or otherwise) arising from the software, its use, or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks its use may involve. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
IAFrance/ECE-TW3-JRGL-VHF4 | IAFrance | 2024-04-15T20:55:57Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T20:38:35Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MTSAIR/MultiVerse_70B
- abacusai/Smaug-72B-v0.1
---
# ECE-TW3-JRGL-VHF4
ECE-TW3-JRGL-VHF4 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration |
InferenceIllusionist/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF | InferenceIllusionist | 2024-04-17T18:33:12Z | 399 | 1 | null | [
"gguf",
"merge",
"iMat",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T01:49:55Z | ---
license: apache-2.0
tags:
- merge
- gguf
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF
Quantized from fp32 with love.
* Quantizations made possible using mixtral-8x7b.imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow)).
For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>
Importance matrix quantizations are a work in progress; IQ3 and above is recommended for best results.
<b>Tip:</b> For best speed, pick a size that fits in your GPU while still leaving some room for context. You may need to pad this further depending on whether you are also running image generation or TTS.
Original model card can be found [here](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss) |
mradermacher/Ankur_llm-GGUF | mradermacher | 2024-05-06T04:42:42Z | 399 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ankurbash/Ankur_llm",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-17T03:29:44Z | ---
base_model: Ankurbash/Ankur_llm
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ankurbash/Ankur_llm
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
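For the multi-part case specifically, the approach described in those READMEs boils down to concatenating the pieces back into a single GGUF before loading it. The sketch below uses hypothetical file names; the quants listed here appear to be single files, so this only applies when a quant is actually split.
```bash
# Hypothetical split names; replace with the actual parts you downloaded, in order
cat Ankur_llm.Q6_K.gguf.part1of2 Ankur_llm.Q6_K.gguf.part2of2 > Ankur_llm.Q6_K.gguf
```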
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pwei07/Marcoro14-7B-ties | pwei07 | 2024-04-17T11:29:30Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T11:25:18Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Marcoro14-7B-ties
Marcoro14-7B-ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: OpenPipe/mistral-ft-optimized-1218
parameters:
density: 0.5
weight: 0.5
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
``` |
kuotient/Llama-3-13B-Instruct-attenuated | kuotient | 2024-04-20T13:55:05Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:kuotient/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-20T13:36:06Z | ---
base_model:
- kuotient/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: llama3
---
# Llama-3-13B-Instruct-attenuated
The core idea came from @jukofyork; see this [issue](https://github.com/arcee-ai/mergekit/issues/198).
As I understand it, the idea is to make the model think twice while leaping the same distance as the original. But why 0.7071067812?
> The scale factor to use, eg: solve x^2 = 1/2 --> x = 1/sqrt(2) ≈ 0.7071067812
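A tiny numeric sketch (my own illustration, not from the linked issue) of why that constant is chosen: the attention logit is proportional to q·k, so scaling both q_proj and k_proj by x multiplies the logit by x², and x = 1/sqrt(2) makes each of the two stacked copies of a block contribute half-sized logits.
```python
import numpy as np

scale = 1 / np.sqrt(2)  # ≈ 0.7071067812

q = np.random.randn(64)  # toy query vector
k = np.random.randn(64)  # toy key vector

original_logit = q @ k
scaled_logit = (scale * q) @ (scale * k)  # both projections scaled, as in the YAML below

print(scaled_logit / original_logit)  # ≈ 0.5, i.e. x**2 = 1/2
```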
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
###############################
# llama-3-attenuated.yaml #
###############################
# Use: mergekit-yaml --clone-tensors ./llama-3-attenuated.yaml ./llama-3-attenuated
# See: https://github.com/arcee-ai/mergekit/issues/198 for discussion/reasoning behind this idea.
# ---
# The scale factor to use, eg: solve x^2 = 1/2 --> x = 1/sqrt(2) ≈ 0.7071067812
const_tag: &scale_factor 0.7071067812 # 1/sqrt(2)
# The filter parameters of a scaled block.
attenuate-env: &attenuated_env
parameters:
scale:
- filter: q_proj
value: *scale_factor
- filter: k_proj
value: *scale_factor
- value: 1.0
# ---
slices:
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [0, 4]
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [4, 8]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [4, 12]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [8, 16]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [12, 20]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [16, 24]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [20, 28]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [24, 28]
<<: *attenuated_env
- sources:
- model: kuotient/Meta-Llama-3-8B-Instruct
layer_range: [28, 32]
merge_method: passthrough
dtype: bfloat16
```
|
YeungNLP/firefly-qwen1.5-en-14b | YeungNLP | 2024-04-21T16:42:42Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T14:34:46Z | ---
library_name: transformers
license: apache-2.0
basemodel: Qwen/Qwen1.5-14B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ghost-x/ghost-7b-alpha-awq | ghost-x | 2024-04-26T07:32:32Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ghost",
"tools",
"chat",
"conversational",
"en",
"vi",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-04-22T16:22:20Z | ---
language:
- en
- vi
license: other
license_name: ghost-7b
license_link: https://ghost-x.org/ghost-7b-license
library_name: transformers
tags:
- ghost
- tools
- chat
pipeline_tag: text-generation
widget:
- text: Why is the sky blue ?
output:
text: >-
The sky appears blue because of a phenomenon called Rayleigh scattering.
Sunlight is composed of all colors of the visible spectrum, but blue light
has a shorter wavelength than other colors. When sunlight enters the
Earth's atmosphere, it interacts with molecules of nitrogen and oxygen,
which are the main components of our atmosphere. These molecules scatter
the shorter wavelengths of light, such as blue and violet, more than the
longer wavelengths, such as red and orange. As a result, when we look up
at the sky, we see more blue light than other colors because the blue
light has been scattered more by the molecules in the atmosphere.
---
# Ghost 7B Alpha
<img src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/QPrQZMQX_jzyYngmreP0_.jpeg" alt="Ghost 7B Alpha Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
This generation of large language models focuses on optimizing reasoning, multi-task knowledge, and tool support.
# Introduction
**Ghost 7B Alpha** is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge, and tool usage support. It works best in its main trained and optimized languages, English and Vietnamese.
Overall, the model is suitable as a pretrained base that you can continue to develop for your desired tasks, for building virtual assistants, or for tasks such as coding, translation, question answering, and document creation. It is a truly efficient, fast, and extremely cheap open model.
## Specifications
- Name: **Ghost 7B Alpha**.
- Model size: 7 billion parameters.
- Context length: 8K, 8192.
- Languages: English and Vietnamese.
- Main tasks: reasoning, multi-tasking knowledge and function tools.
- License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.org/ghost-7b-license).
- Based on: Mistral 7B.
- Distributions: Standard (BF16), GGUF, AWQ.
- Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu).
### Links
- Card model: [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha).
- Official website: [Ghost 7B Alpha](https://ghost-x.org/docs/models/ghost-7b-alpha/).
- Demo: [Playground with Ghost 7B Alpha](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).
## Distributions
We provide several distributions to give you the access options that best suit your needs. Always make sure you know which version you need and which one will work best for you.
| Version | Model card |
| ------- | -------------------------------------------------------------------- |
| BF16 | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha) |
| GGUF | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-gguf) |
| AWQ | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-awq) |
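As a quick, unofficial sketch of using this AWQ distribution with 🤗 Transformers (it assumes `autoawq` is installed and uses a plain prompt rather than the model's recommended chat format; check the official website linked above for the intended prompt template):
```python
# pip install transformers autoawq
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ghost-x/ghost-7b-alpha-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```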
## Note
For all official information and updates about the model, see here:
- Card model: [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha).
- Official website: [Ghost 7B Alpha](https://ghost-x.org/docs/models/ghost-7b-alpha/).
- Demo: [Playground with Ghost 7B Alpha](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).
|
QuantFactory/Unichat-llama3-Chinese-8B-GGUF | QuantFactory | 2024-04-24T04:34:28Z | 399 | 3 | null | [
"gguf",
"text-generation",
"en",
"zh",
"base_model:UnicomLLM/Unichat-llama3-Chinese-8B",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-04-24T04:08:25Z | ---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
base_model: UnicomLLM/Unichat-llama3-Chinese-8B
---
# Unichat-llama3-Chinese-8B-GGUF
- This is a quantized version of [UnicomLLM/Unichat-llama3-Chinese-8B](https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B)
# Model Description (Translated)
- China Unicom AI Innovation Center released the industry's first llama3 Chinese instruction fine-tuned model (full-parameter fine-tuning), uploaded at 22:00 on April 19, 2024.
- This model is based on [**Meta Llama 3**](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6), adds Chinese data for training, and achieves high-quality Chinese question answering with the llama3 model.
- The model context maintains the native length of 8K, and a version that supports 64K length will be released later.
- Base model [**Meta-Llama-3-8B**](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### 📊 Data
- High-quality instruction data, covering multiple fields and industries, providing sufficient data support for model training
- Fine-tuning instruction data undergoes strict manual screening to ensure high-quality instruction data is used for model fine-tuning.
For more details on models, datasets and training please refer to:
* Github:[**Unichat-llama3-Chinese**](https://github.com/UnicomAI/Unichat-llama3-Chinese)
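Since this repo ships GGUF quants, a typical way to try it is via llama.cpp. The sketch below is unofficial and the file name is a placeholder; check the repository's file list for the exact quant names.
```bash
# Placeholder file name: pick an actual .gguf file from this repository
llama-cli --hf-repo QuantFactory/Unichat-llama3-Chinese-8B-GGUF \
  --hf-file Unichat-llama3-Chinese-8B.Q4_K_M.gguf \
  -p "Introduce yourself in Chinese."
```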
|
DrNicefellow/Microscopic-Mistral-3k-steps | DrNicefellow | 2024-05-02T13:41:59Z | 399 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-02T13:30:30Z | ---
license: apache-2.0
---
Self-trained microscopic Mistral, with around 810M parameters.
The tokenizer is the one from https://huggingface.co/mistralai/Mistral-7B-v0.1.
It is being trained on around 400B tokens and this is step 3k.
The evaluation is being conducted now.
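A minimal, unofficial sketch for poking at this checkpoint with 🤗 Transformers (if the repo does not ship tokenizer files, load the tokenizer from mistralai/Mistral-7B-v0.1 instead, as noted above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/Microscopic-Mistral-3k-steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Keep in mind this is an early checkpoint (step 3k), so generations will be rough.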
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or an iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note about which one you want me to drink.
|
burak/Rhea-125-v0.5 | burak | 2024-05-09T12:02:18Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T10:52:53Z | ---
license: apache-2.0
base_model:
- davidkim205/Rhea-72b-v0.5
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [10, 30]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [20, 40]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [30, 50]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [40, 60]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [50, 70]
model: davidkim205/Rhea-72b-v0.5
- sources:
- layer_range: [60, 80]
model: davidkim205/Rhea-72b-v0.5
merge_method: passthrough
dtype: bfloat16
```
|
lodrick-the-lafted/Rummage-8B | lodrick-the-lafted | 2024-05-11T16:36:24Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-11T14:55:38Z | ---
license: apache-2.0
---
<img src=https://huggingface.co/lodrick-the-lafted/Rummage-8B/resolve/main/rummage.png>
Rummage-8B
It's [Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total](https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total), with ablation where the harmless dataset was tatsu-labs/alpaca and the harmful dataset was hooking-dev/SexyGPT-ChatML. |
ukung/Open-Llama-3B-FineTune-GGUF | ukung | 2024-05-20T16:33:26Z | 399 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-20T15:41:12Z | Entry not found |
Rakuto/Phi-3-medium-4k-instruct-gptq-4bit | Rakuto | 2024-05-23T07:53:03Z | 399 | 2 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-05-23T07:50:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama3-13B-lingyang-v1-GGUF | mradermacher | 2024-06-04T04:56:59Z | 399 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Llama3",
"en",
"base_model:wwe180/Llama3-13B-lingyang-v1",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-04T04:10:40Z | ---
base_model: wwe180/Llama3-13B-lingyang-v1
language:
- en
library_name: transformers
license:
- other
quantized_by: mradermacher
tags:
- mergekit
- merge
- Llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wwe180/Llama3-13B-lingyang-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_XS.gguf) | IQ3_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yvonne90190/nlp_model_3 | yvonne90190 | 2024-06-12T23:34:23Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-06-12T23:32:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mairelles/Bert-Classifier | Mairelles | 2024-06-14T21:54:34Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-14T21:54:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF | NikolayKozloff | 2024-06-24T15:01:27Z | 399 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"tr",
"base_model:Morfoz-Aigap/Morfoz-LLM-8b-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-24T15:01:02Z | ---
base_model: Morfoz-Aigap/Morfoz-LLM-8b-v1.0
language:
- tr
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF
This model was converted to GGUF format from [`Morfoz-Aigap/Morfoz-LLM-8b-v1.0`](https://huggingface.co/Morfoz-Aigap/Morfoz-LLM-8b-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Morfoz-Aigap/Morfoz-LLM-8b-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF --hf-file morfoz-llm-8b-v1.0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF --hf-file morfoz-llm-8b-v1.0-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF --hf-file morfoz-llm-8b-v1.0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-IQ4_NL-GGUF --hf-file morfoz-llm-8b-v1.0-iq4_nl-imat.gguf -c 2048
```
|
LeBenchmark/wav2vec2-FR-7K-base | LeBenchmark | 2023-09-14T09:53:56Z | 398 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:04Z | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later one (LeBenchmark 2.0) extends the first in both the number of pretrained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release a range of models that can be found under our HuggingFace organization: four different wav2vec2 architectures (*Light*, *Base*, *Large*, and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture; see the sketch after this list. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is made very simple within SpeechBrain, as only a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
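Outside of Fairseq and SpeechBrain, the checkpoint can also be loaded with 🤗 Transformers for a quick frozen-feature sanity check (option 1 above). This is only a minimal sketch, not an official LeBenchmark recipe; it assumes 16 kHz mono audio and builds the feature extractor with default wav2vec2 settings rather than a repository-provided preprocessor config.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec2-FR-7K-base"
feature_extractor = Wav2Vec2FeatureExtractor()  # default 16 kHz settings (assumption)
model = Wav2Vec2Model.from_pretrained(model_id)
model.eval()

# One second of silence as a stand-in; replace with a real French waveform at 16 kHz
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level contextual representations: (batch, frames, hidden_size)
print(outputs.last_hidden_state.shape)
```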
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
amazon/bort | amazon | 2021-05-18T23:32:35Z | 398 | 15 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2010.10499",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ⚠️ **Disclaimer** ⚠️
This model is community-contributed, and not supported by Amazon, Inc.
## BORT
[Amazon's BORT](https://www.amazon.science/blog/a-version-of-the-bert-language-model-thats-20-times-as-fast)
BORT is a highly compressed version of [bert-large](https://huggingface.co/bert-large-uncased) that is up to 10 times faster at inference.
The model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.
[Paper](https://arxiv.org/abs/2010.10499)
**Abstract**
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
The original model can be found under:
https://github.com/alexa/bort
**IMPORTANT**
BORT requires a very unique fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html) which is not open-sourced yet.
Standard fine-tuning has not been shown to work well in initial experiments, so stay tuned for updates!
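Plain inference (as opposed to fine-tuning) still works like any other masked-LM checkpoint. The sketch below is unofficial and assumes the hosted tokenizer is correctly configured for this checkpoint:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="amazon/bort")
# Use the tokenizer's own mask token so the prompt matches the vocabulary in use
mask = unmasker.tokenizer.mask_token
print(unmasker(f"The capital of France is {mask}."))
```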
|
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa | ayameRushia | 2023-09-02T23:40:48Z | 398 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"id",
"dataset:indonlp/indonlu",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
widget:
- text: Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini
tags:
- generated_from_trainer
datasets:
- indonlp/indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-1.5G-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9261904761904762
language:
- id
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-1.5G-sentiment-analysis-smsa
This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Accuracy: 0.9262
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 |
| 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 |
| 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 |
| 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 |
| 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 |
| 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 |
| 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 |
| 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 |
| 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 |
| 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 |
### How to use this model in Transformers Library
```python
from transformers import pipeline
pipe = pipeline(
"text-classification",
model="ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa"
)
pipe("Terima kasih atas bantuannya ya!")
```
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3 |
efederici/sentence-it5-base | efederici | 2022-03-29T23:09:01Z | 398 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-29T19:57:59Z | ---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base')
model = AutoModel.from_pretrained('efederici/sentence-IT5-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
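The resulting embeddings can then be compared for semantic search or clustering. A minimal sketch that continues the snippet above and scores the two example sentences with cosine similarity (illustrative, not part of the original card):
```python
import torch.nn.functional as F

# Compare the two example sentences computed above.
similarity = F.cosine_similarity(
    sentence_embeddings[0].unsqueeze(0),
    sentence_embeddings[1].unsqueeze(0),
)
print(f"Cosine similarity: {similarity.item():.4f}")
```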
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
haining/scientific_abstract_simplification | haining | 2023-07-23T03:21:32Z | 398 | 22 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"text2text generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-12-07T04:14:13Z | ---
inference:
parameters:
do_sample: true
max_length: 512
top_p: 0.9
repetition_penalty: 1.0
language:
- en
license: mit
metrics:
- sacrebleu
- bert_score
- rouge
- meteor
- sari
- ari
- "Automated Readability Index"
tags:
- "text2text generation"
task:
name: "scientific abstract simplification"
type: "text2text generation"
widget:
-
text: "summarize, simplify, and contextualize: The COVID-19 pandemic presented enormous data challenges in the United States. Policy makers, epidemiological modelers, and health researchers all require up-to-date data on the pandemic and relevant public behavior, ideally at fine spatial and temporal resolution. The COVIDcast API is our attempt to fill this need: Operational since April 2020, it provides open access to both traditional public health surveillance signals (cases, deaths, and hospitalizations) and many auxiliary indicators of COVID-19 activity, such as signals extracted from deidentified medical claims data, massive online surveys, cell phone mobility data, and internet search trends. These are available at a fine geographic resolution (mostly at the county level) and are updated daily. The COVIDcast API also tracks all revisions to historical data, allowing modelers to account for the frequent revisions and backfill that are common for many public health data sources. All of the data are available in a common format through the API and accompanying R and Python software packages. This paper describes the data sources and signals, and provides examples demonstrating that the auxiliary signals in the COVIDcast API present information relevant to tracking COVID activity, augmenting traditional public health reporting and empowering research and decision-making."
example_title: "covid-api paper, from PNAS"
-
text: "summarize, simplify, and contextualize: Potato mop-top virus (PMTV) is considered an emerging threat to potato production in the United States. PMTV is transmitted by a soil-borne protist, Spongospora subterranean. Rapid, accurate, and sensitive detection of PMTV in leaves and tubers is an essential component in PMTV management program. A rapid test that can be adapted to in-field, on-site testing with minimal sample manipulation could help in ensuring the sanitary status of the produce in situations such as certification programs and shipping point inspections. Toward that goal, a rapid and highly sensitive recombinase polymerase amplification (RPA)-based test was developed for PMTV detection in potato tubers. The test combines the convenience of RPA assay with a simple sample extraction procedure, making it amenable to rapid on-site diagnosis of PMTV. Furthermore, the assay was duplexed with a plant internal control to monitor sample extraction and RPA reaction performance. The method described could detect as little as 10 fg of PMTV RNA transcript in various potato tissues, the diagnostic limit of detection (LOQ) similar to that of traditional molecular methods."
example_title: "potato paper, from PLOS ONE"
-
text: "summarize, simplify, and contextualize: One of the most thrilling cultural experiences is to hear live symphony-orchestra music build up from a whispering passage to a monumental fortissimo. The impact of such a crescendo has been thought to depend only on the musicians’ skill, but here we show that interactions between the concert-hall acoustics and listeners’ hearing also play a major role in musical dynamics. These interactions contribute to the shoebox-type concert hall’s established success, but little prior research has been devoted to dynamic expression in this three-part transmission chain as a complete system. More forceful orchestral playing disproportionately excites high frequency harmonics more than those near the note’s fundamental. This effect results in not only more sound energy, but also a different tone color. The concert hall transmits this sound, and the room geometry defines from which directions acoustic reflections arrive at the listener. Binaural directional hearing emphasizes high frequencies more when sound arrives from the sides of the head rather than from the median plane. Simultaneously, these same frequencies are emphasized by higher orchestral-playing dynamics. When the room geometry provides reflections from these directions, the perceived dynamic range is enhanced. Current room-acoustic evaluation methods assume linear behavior and thus neglect this effect. The hypothesis presented here is that the auditory excitation by reflections is emphasized with an orchestra forte most in concert halls with strong lateral reflections. The enhanced dynamic range provides an explanation for the success of rectangularly shaped concert-hall geometry."
example_title: "music paper, from PNAS"
-
text: "summarize, simplify, and contextualize: Children in industrialized cultures typically succeed on Give-N, a test of counting ability, by age 4. On the other hand, counting appears to be learned much later in the Tsimane’, an indigenous group in the Bolivian Amazon. This study tests three hypotheses for what may cause this difference in timing: (a) Tsimane’ children may be shy in providing behavioral responses to number tasks, (b) Tsimane’ children may not memorize the verbal list of number words early in acquisition, and/or (c) home environments may not support mathematical learning in the same way as in US samples, leading Tsimane’ children to primarily acquire mathematics through formalized schooling. Our results suggest that most of our subjects are not inhibited by shyness in responding to experimental tasks. We also find that Tsimane’ children (N = 100, ages 4-11) learn the verbal list later than US children, but even upon acquiring this list, still take time to pass Give-N tasks. We find that performance in counting varies across tasks and is related to formal schooling. These results highlight the importance of formal education, including instruction in the count list, in learning the meanings of the number words."
example_title: "given-n paper, from PLOS ONE"
---
# TL;DR
Scientific Abstract Simplification (SAS) is a tool designed to rewrite complex scientific abstracts into simpler, more comprehensible versions. Our objective is to make scientific knowledge universally accessible. If you have already experimented with our baseline model (`sas_baseline`), you will find that the current model surpasses its predecessor on all evaluation metrics. Feel free to test it via the Hosted Inference API to your right: simply select one of the provided examples or input your own scientific abstract, and be sure to precede your text with the instruction "summarize, simplify, and contextualize: " (note the trailing space). For local usage, refer to the [Usage](#Usage) section.
# Project Description
Open science has significantly reduced barriers to accessing scientific papers.
However, attainable research does not entail accessible knowledge.
Consequently, many individuals might prefer to rely on succinct social media narratives rather than endeavour to comprehend a scientific paper.
This preference is understandable as humans often favor narratives over dry, technical information.
So, why not "translate" these intricate scientific abstracts into simpler, more accessible narratives?
Several prestigious journals have already initiated steps towards enhancing accessibility.
For instance, PNAS requires authors to submit Significance Statements understandable to an 'undergraduate-educated scientist', while Science includes an editor's abstract to provide a swift overview of the paper's salient points.
In this project, our objective is to employ AI to rewrite scientific abstracts into easily understandable scientific narratives.
To facilitate this, we have curated two new datasets: one containing PNAS abstract-significance pairs and the other encapsulating editor abstracts from Science.
We utilize a Transformer model (a variant known as Flan-T5) to fine-tune our model for the task of simplifying scientific abstracts.
Initially, the model is fine-tuned utilizing multiple discrete instructions by amalgamating four pertinent tasks in a challenge-proportional manner (a strategy we refer to as Multi-Instruction Pretuning).
Subsequently, we continue the fine-tuning process exclusively with the abstract-significance corpus. Our model can generate lay summaries that outperform models fine-tuned solely with the abstract-significance corpus and models fine-tuned with traditional task combinations.
We hope our work can foster a more comprehensive understanding of scientific research, enabling a larger audience to benefit from open science.
- **Model type:** Language model
- **Developed by:**
- PIs: Jason Clark and Hannah McKelvey, Montana State University
- Fellow: Haining Wang, Indiana University Bloomington; Deanna Zarrillo, Drexel University
- Collaborator: Zuoyu Tian, Indiana University Bloomington
- [LEADING](https://cci.drexel.edu/mrc/leading/) Montana State University Library, Project "TL;DR it": Automating Article Synopses for Search Engine Optimization and Citizen Science
- **Language(s) (NLP):** English
- **License:** MIT
- **Parent Model:** [FLAN-T5-large](https://huggingface.co/google/flan-t5-large)
# Usage
Use the code below to get started with the model. Remember to prepend the `INSTRUCTION` for best performance.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
INSTRUCTION = "summarize, simplify, and contextualize: "
tokenizer = AutoTokenizer.from_pretrained("haining/scientific_abstract_simplification")
model = AutoModelForSeq2SeqLM.from_pretrained("haining/scientific_abstract_simplification")
input_text = "The COVID-19 pandemic presented enormous data challenges in the United States. Policy makers, epidemiological modelers, and health researchers all require up-to-date data on the pandemic and relevant public behavior, ideally at fine spatial and temporal resolution. The COVIDcast API is our attempt to fill this need: Operational since April 2020, it provides open access to both traditional public health surveillance signals (cases, deaths, and hospitalizations) and many auxiliary indicators of COVID-19 activity, such as signals extracted from deidentified medical claims data, massive online surveys, cell phone mobility data, and internet search trends. These are available at a fine geographic resolution (mostly at the county level) and are updated daily. The COVIDcast API also tracks all revisions to historical data, allowing modelers to account for the frequent revisions and backfill that are common for many public health data sources. All of the data are available in a common format through the API and accompanying R and Python software packages. This paper describes the data sources and signals, and provides examples demonstrating that the auxiliary signals in the COVIDcast API present information relevant to tracking COVID activity, augmenting traditional public health reporting and empowering research and decision-making."
encoding = tokenizer(INSTRUCTION + input_text,
max_length=672,
padding='max_length',
truncation=True,
return_tensors='pt')
decoded_ids = model.generate(input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask'],
max_length=512,
top_p=.9,
do_sample=True)
print(tokenizer.decode(decoded_ids[0], skip_special_tokens=True))
```
# Training
## Data
| Corpus | # Training/Dev/Test Samples | # Training Tokens (source, target) | # Validation Tokens (source, target) | # Test Tokens (source, target) | Note |
|----------------------------------|-----------------------------|------------------------------------|--------------------------------------|--------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| Scientific Abstract-Significance | 3,030/200/200 | 707,071, 375,433 | 45,697, 24,901 | 46,985, 24,426 | - |
| Editor Abstract | 732/91/92 | 154,808, 194,721 | 19,675, 24,421 | 19,539, 24,332 | - |
| Wiki Auto | 28,364/1,000/1,000 | 18,239,990, 12,547,272 | 643,157, 444,034 | 642,549, 444,883 | We used the ACL version, adopted from Huggingface datasets. The validation and test samples are split from the corpus and kept frozen. |
| CNN/DailyMail | 287,113/13,368/11,490 | - | - | - | We used the 2.0 version, adopted from Huggingface datasets. |
## Setup
We finetuned the base model (flan-t5-large) on multiple relevant tasks with a standard language modeling loss. During training, the source text of each task is prepended with a task-specific instruction and mapped to the corresponding target text. For example, "simplify: " is added before a wiki text, and the whole text is fed into the model to line up with the corresponding simple wiki text. The tuning process has two steps.
| Task | Corpus | Instruction | Optimal samples |
|------------------------------------|----------------------------------|--------------------------------------------|-----------------|
| Scientific Abstract Simplification | Scientific Abstract-Significance | "summarize, simplify, and contextualize: " | 39,200 |
| Recontextualization | Editor Abstract | "contextualize: " | 2,200 |
| Simplification | Wiki Auto | "simplify: " | 57,000 |
| Summarization | CNN/DailyMail | "summarize: " | 165,000 |
| Total | Challenge-proportional Mixing | n/a | 263,400 |
- Multi-instruction pretuning: In this stage, we first created a task mixture using the "challenge-proportional mixing" method. In a separate pilot study, we finetuned the base model on each task and noted the number of samples at which the validation loss started to rise. We then mixed the samples of each task in proportion to its optimal number of samples; a corpus is used in full, without upsampling, if its total number of samples is smaller than its optimal number. We finetuned on the resulting task mixture (263,400 samples) with the aforementioned instruction templates (a schematic sketch of the mixing is given below).
- Fine-tuning: In this stage, we continued finetuning the checkpoint solely with the Scientific Abstract-Significance corpus until the optimal validation loss was observed.
The multi-instruction tuning and the retuning took roughly 63 hours and 8 hours, respectively, on two NVIDIA RTX A5000 GPUs (24 GB memory each). We saved the checkpoint with the lowest validation loss for inference. We used the AdamW optimizer and a learning rate of 3e-5 with a fully sharded data parallel strategy across both training stages. The batch size was 1.
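To make the challenge-proportional mixing concrete, here is a schematic sketch; the record format, helper function, and corpus handling are illustrative assumptions, not the project's actual training code:
```python
import random

# Instruction templates and optimal sample counts from the table above.
TASKS = {
    "scientific_abstract_simplification": ("summarize, simplify, and contextualize: ", 39_200),
    "recontextualization": ("contextualize: ", 2_200),
    "simplification": ("simplify: ", 57_000),
    "summarization": ("summarize: ", 165_000),
}

def build_mixture(corpora):
    """corpora maps a task name to a list of (source, target) pairs."""
    mixture = []
    for task, (instruction, optimal) in TASKS.items():
        pairs = corpora.get(task, [])
        # Use a corpus in full (no upsampling) when it is smaller than its
        # optimal size; otherwise subsample down to the optimal size.
        if len(pairs) > optimal:
            pairs = random.sample(pairs, optimal)
        mixture.extend((instruction + source, target) for source, target in pairs)
    random.shuffle(mixture)
    return mixture
```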
# Evaluation
The model is evaluated on the SAS test set using SacreBLEU, METEOR, BERTScore, ROUGE, SARI, and ARI.
## Metrics
<details>
<summary> Click to expand </summary>
- [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu): SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich’s multi-bleu-detok.perl, it produces the official WMT scores but works with plain text. It also knows all the standard test sets and handles downloading, processing, and tokenization for you.
- [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore): BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
- [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)-1/2/L: ROUGE is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation.
- [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor): METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference.
- [SARI](https://huggingface.co/spaces/evaluate-metric/sari): SARI is a metric used for evaluating automatic text simplification systems. The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system. SARI = (F1_add + F1_keep + P_del) / 3, where F1_add is the n-gram F1 score for the add operation, F1_keep is the n-gram F1 score for the keep operation, and P_del is the n-gram precision for the delete operation, with n = 4 as in the original paper.
- [The Automated Readability Index (ARI)](https://www.readabilityformulas.com/automated-readability-index.php): ARI is a readability test designed to assess the understandability of a text. Like other popular readability formulas, the ARI formula outputs a number which approximates the grade level needed to comprehend the text. For example, if the ARI outputs the number 10, this equates to a high school student, ages 15-16 years old; a number 3 means students in 3rd grade (ages 8-9 yrs. old) should be able to comprehend the text.
</details>
Implementations of SacreBLEU, BERT Score, ROUGE, METEOR, and SARI are from Huggingface [`evaluate`](https://pypi.org/project/evaluate/) v.0.3.0. ARI is from [`py-readability-metrics`](https://pypi.org/project/py-readability-metrics/) v.1.4.5.
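As a reference point, here is a hedged sketch of how two of these metrics could be loaded and applied with the `evaluate` package; the prediction and reference strings are placeholders:
```python
import evaluate

# Hedged sketch: scoring a single placeholder prediction with SacreBLEU and ROUGE.
sacrebleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

predictions = ["Open data helped researchers track the pandemic in near real time."]
references = [["Open data helped researchers track the pandemic in almost real time."]]

print(sacrebleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))
```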
## Results
We tested our model on the SAS test set (200 samples). We generated 10 lay summaries from each sample's abstract. During generation, we used top-p sampling with p=0.9. The mean performance is reported below.
| Metrics | SAS |
|----------------|---------|
| SacreBLEU↑ | 25.60 |
| BERT Score F1↑ | 90.14 |
| ROUGE-1↑ | 52.28 |
| ROUGE-2↑ | 29.61 |
| ROUGE-L↑ | 38.02 |
| METEOR↑ | 43.75 |
| SARI↑ | 51.96 |
| ARI↓ | 17.04 |
Note: 1. Some generated texts are too short (fewer than 100 words) to calculate a meaningful ARI. We therefore concatenated adjacent texts in groups of five and computed ARI on the resulting 400 longer texts (instead of the original 2,000 texts). 2. BERT Score, ROUGE, and METEOR are multiplied by 100.
# Contact
Please [contact us](mailto:[email protected]) for any questions or suggestions.
# Disclaimer
This model is designed to make scientific abstracts more accessible. Its outputs should not be relied upon for any purpose outside of this scope. There is no guarantee that the generated text accurately reflects the research it is based on. When making important decisions, it is recommended to seek the advice of human experts or consult the original papers.
# Acknowledgement
This research is supported by the Institute of Museum and Library Services (IMLS) RE-246450-OLS-20. |
facebook/convnextv2-pico-1k-224 | facebook | 2023-09-04T19:38:57Z | 398 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-17T13:54:20Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (pico-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-1K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-pico-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-pico-1k-224")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
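Beyond the single most likely label, the logits can be turned into class probabilities; a short sketch that continues the example above (illustrative, not from the original card):
```python
# Convert logits to probabilities and list the five most likely ImageNet classes.
probs = logits.softmax(-1)[0]
top5 = probs.topk(5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```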
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Den4ikAI/FRED-T5-Large-interpreter | Den4ikAI | 2023-05-21T07:19:25Z | 398 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"dataset:inkoziev/incomplete_utterance_restoration",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-05-21T07:15:29Z | ---
license: mit
datasets:
- inkoziev/incomplete_utterance_restoration
language:
- ru
widget:
- text: '<SC1>- Как тебя зовут?\n- Джульетта Мао\nРазвернутый ответ: <extra_id_0>'
- text: '<SC1>- А живешь где?\n- В поясе астероидов\nРазвернутый ответ: <extra_id_0>'
pipeline_tag: text2text-generation
---
# Den4ikAI/FRED-T5-Large-interpreter
Модель для восстановления фразы с помощью контекста диалога (анафора, эллипсисы, гэппинг), проверки орфографии и нормализации текста диалоговых реплик.
Больше о задаче [тут](https://huggingface.co/inkoziev/rugpt_interpreter).
# Пример использования
```python
import torch
from transformers import T5ForConditionalGeneration, GPT2Tokenizer
model_name = 'Den4ikAI/FRED-T5-Large-interpreter'
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
t5_input = '''<SC1>- Ты собак любишь?
- Не люблю я их
Развернутый ответ: <extra_id_0>'''
input_ids = tokenizer(t5_input, return_tensors='pt').input_ids
out_ids = model.generate(input_ids=input_ids, max_length=100, eos_token_id=tokenizer.eos_token_id, early_stopping=True)
t5_output = tokenizer.decode(out_ids[0][1:])
print(t5_output)
```
# Citation
```
@MISC{FRED-T5-Large-interpreter,
author = {Denis Petrov and Ilya Koziev},
title = {Russian conversations interpreter and normalizer},
url = {https://huggingface.co/Den4ikAI/FRED-T5-Large-interpreter},
year = 2023
}
``` |
sail-rvc/Shrek | sail-rvc | 2023-07-14T07:31:48Z | 398 | 1 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:31:27Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Shrek
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:31:48
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
TheBloke/tulu-13B-GGUF | TheBloke | 2023-09-27T12:52:12Z | 398 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:sahil2801/CodeAlpaca-20k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"base_model:allenai/tulu-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-19T21:52:01Z | ---
language:
- en
license: other
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
model_name: Tulu 13B
base_model: allenai/tulu-13b
inference: false
model_creator: Allen Institute for AI
model_type: llama
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tulu 13B - GGUF
- Model creator: [Allen Institute for AI](https://huggingface.co/allenai)
- Original model: [Tulu 13B](https://huggingface.co/allenai/tulu-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Allen AI's Tulu 13B](https://huggingface.co/allenai/tulu-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tulu-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tulu-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-13B-GGUF)
* [Allen Institute for AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-13B-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Tulu
```
<|user|>
{prompt}
<|assistant|>
```
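The template can be filled in Python before the prompt is sent to any of the clients listed above; a minimal hedged sketch (the user message is a placeholder):
```python
# Hedged sketch: building a Tulu-formatted prompt string.
# The original model card notes that the newline after <|assistant|> matters.
def format_tulu_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

print(format_tulu_prompt("Explain instruction tuning in one sentence."))
```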
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Allen AI's Tulu 13B](https://huggingface.co/allenai/tulu-13b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tulu-13b.Q2_K.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [tulu-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [tulu-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [tulu-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [tulu-13b.Q4_0.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tulu-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [tulu-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [tulu-13b.Q5_0.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tulu-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [tulu-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [tulu-13b.Q6_K.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [tulu-13b.Q8_0.gguf](https://huggingface.co/TheBloke/tulu-13B-GGUF/blob/main/tulu-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/tulu-13B-GGUF and below it, a specific filename to download, such as: tulu-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/tulu-13B-GGUF tulu-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/tulu-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tulu-13B-GGUF tulu-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tulu-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>\n"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
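A minimal hedged sketch with `llama-cpp-python` (the path and parameters are illustrative; the ctransformers route is shown in the next subsection):
```python
from llama_cpp import Llama

# Hedged sketch: adjust n_gpu_layers and n_ctx for your hardware.
llm = Llama(
    model_path="./tulu-13b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)
output = llm(
    "<|user|>\nWrite a haiku about quantization.\n<|assistant|>\n",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```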
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tulu-13B-GGUF", model_file="tulu-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain; a short hedged sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
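A hedged sketch of the llama-cpp-python route through LangChain; the path and parameters are placeholders, and the linked guides remain the authoritative reference:
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

# Hedged sketch: wrapping the local GGUF file as a LangChain LLM and
# applying the Tulu prompt template.
llm = LlamaCpp(model_path="./tulu-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
prompt = PromptTemplate.from_template("<|user|>\n{question}\n<|assistant|>\n")
print(llm(prompt.format(question="Summarize what a GGUF file is.")))
```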
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Allen AI's Tulu 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 13B fp16
These files are pytorch format fp16 model files for [Allen AI's Tulu 13B](https://huggingface.co/allenai/tulu-13b).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-13B-fp16)
## Prompt template
The following template should be used:
```
<|user|>
prompt goes here
<|assistant|>
```
**Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
In other words, the prompt is:
```
<|user|>\nprompt goes here\n<|assistant|>\n
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Allen AI's Tulu 13B
# Tulu 13B
This model is a 13B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 49.2 | 51.8 | 5.0 | 36.5 | 41.3 | 42.8 | 46.1 | 9.2 | 21.3 | 35.0 | 53.9 | 37.2 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
<!-- original-model-card end -->
|
TheBloke/agentlm-7B-GGUF | TheBloke | 2023-10-21T02:22:13Z | 398 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:THUDM/AgentInstruct",
"arxiv:2310.12823",
"base_model:THUDM/agentlm-7b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-21T02:18:32Z | ---
base_model: THUDM/agentlm-7b
datasets:
- THUDM/AgentInstruct
inference: false
license: llama2
model_creator: Knowledge Engineering Group (KEG)
model_name: AgentLM 7B
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# AgentLM 7B - GGUF
- Model creator: [Knowledge Engineering Group (KEG)](https://huggingface.co/THUDM)
- Original model: [AgentLM 7B](https://huggingface.co/THUDM/agentlm-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Knowledge Engineering Group (KEG)'s AgentLM 7B](https://huggingface.co/THUDM/agentlm-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/agentlm-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/agentlm-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/agentlm-7B-GGUF)
* [Knowledge Engineering Group (KEG)'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/THUDM/agentlm-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: THUDM-Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
```
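The system and user turns can be assembled in Python before being handed to any of the clients listed above; a minimal hedged sketch (the user message is a placeholder, and whitespace should follow the template shown above):
```python
# Hedged sketch: filling the AgentLM / Llama-2-chat style template above.
SYSTEM = "You are a helpful, respectful and honest assistant."

def build_prompt(user_message: str) -> str:
    return f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n{user_message} [/INST]"

print(build_prompt("List three tasks an agent-tuned model could help with."))
```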
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
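As a rough sanity check of how bits-per-weight maps to file size (a back-of-the-envelope sketch only; real GGUF files differ slightly because some tensors, such as embeddings, use other types):

```python
# Rough size estimate from bits-per-weight: parameters * bpw / 8 bytes.
params = 7_000_000_000      # ~7B parameters
bpw = 4.5                   # GGML_TYPE_Q4_K effective bits per weight
size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.2f} GB")  # ≈ 3.94 GB, close to the ~4 GB Q4_K_M file listed below
```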
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [agentlm-7b.Q2_K.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [agentlm-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [agentlm-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [agentlm-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [agentlm-7b.Q4_0.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [agentlm-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [agentlm-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [agentlm-7b.Q5_0.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [agentlm-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [agentlm-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [agentlm-7b.Q6_K.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [agentlm-7b.Q8_0.gguf](https://huggingface.co/TheBloke/agentlm-7B-GGUF/blob/main/agentlm-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/agentlm-7B-GGUF and below it, a specific filename to download, such as: agentlm-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/agentlm-7B-GGUF agentlm-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/agentlm-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/agentlm-7B-GGUF agentlm-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
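If you would rather script the download from Python than use the CLI, the same `huggingface_hub` library exposes `hf_hub_download`. The following is a minimal sketch; the filename is just the Q4_K_M example used above.

```python
# Sketch: download one GGUF file from this repo with the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/agentlm-7B-GGUF",
    filename="agentlm-7b.Q4_K_M.gguf",  # pick the quant you actually want
    local_dir=".",                      # save into the current directory
)
print(f"Downloaded to: {local_path}")
```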
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m agentlm-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/agentlm-7B-GGUF", model_file="agentlm-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
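If you prefer llama-cpp-python over ctransformers, a roughly equivalent sketch looks like the following. This assumes the GGUF file has already been downloaded locally; the prompt text and parameter values are illustrative.

```python
# Sketch: load the same GGUF file with llama-cpp-python instead of ctransformers.
from llama_cpp import Llama

llm = Llama(
    model_path="./agentlm-7b.Q4_K_M.gguf",  # download the model file first
    n_ctx=4096,                             # context length used in the llama.cpp example above
    n_gpu_layers=32,                        # set to 0 if you have no GPU acceleration
)

prompt = (
    "[INST] <<SYS>>\n"
    "You are a helpful, respectful and honest assistant.\n"
    "<</SYS>>\n"
    "Write a haiku about autonomous agents. [/INST]"
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```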
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
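As a quick orientation only, here is a sketch based on the LangChain community integration rather than an official example from those guides; the import path may differ on older LangChain versions, and the prompt and parameters are illustrative.

```python
# Sketch: use the GGUF file through LangChain's community LlamaCpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./agentlm-7b.Q4_K_M.gguf",  # local GGUF file downloaded earlier
    n_ctx=4096,
    n_gpu_layers=32,   # 0 for CPU-only
    temperature=0.7,
)

prompt = (
    "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n"
    "Summarise what a GGUF file is in one sentence. [/INST]"
)
print(llm.invoke(prompt))
```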
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Knowledge Engineering Group (KEG)'s AgentLM 7B
## AgentLM-7B
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/AgentInstruct" target="_blank">[Dataset] </a> • 💻 <a href="https://github.com/THUDM/AgentTuning" target="_blank">[Github Repo]</a> • 📌 <a href="https://THUDM.github.io/AgentTuning/" target="_blank">[Project Page]</a> • 📃 <a href="https://arxiv.org/abs/2310.12823" target="_blank">[Paper]</a>
</p>
**AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs, with robust generalization on unseen agent tasks while preserving good general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM.
## Models
**AgentLM** models are produced by mixed training on the AgentInstruct and ShareGPT datasets, starting from Llama-2-chat models.
The models follow the conversation format of [Llama-2-chat](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), with system prompt fixed as
```
You are a helpful, respectful and honest assistant.
```
7B, 13B, and 70B models are available on Huggingface model hub.
|Model|Huggingface Repo|
|---|---|
|AgentLM-7B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-7b) |
|AgentLM-13B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-13b) |
|AgentLM-70B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-70b) |
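For the unquantised checkpoints above, a minimal Transformers sketch could look like the following. It assumes enough GPU memory and the `accelerate` package; the generation settings and prompt are illustrative, not from the original card.

```python
# Sketch: run the original fp16 AgentLM-7B checkpoint with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/agentlm-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs `accelerate`
)

prompt = (
    "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n"
    "Plan the steps to book a flight online. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```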
## Citation
If you find our work useful, please consider citing AgentTuning:
```
@misc{zeng2023agenttuning,
title={AgentTuning: Enabling Generalized Agent Abilities for LLMs},
author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2310.12823},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
TheBloke/una-cybertron-7B-v2-GGUF | TheBloke | 2023-12-06T10:08:44Z | 398 | 32 | transformers | [
"transformers",
"gguf",
"mistral",
"juanako",
"UNA",
"cybertron",
"fbl",
"dataset:fblgit/tree-of-knowledge",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-06T09:41:15Z | ---
base_model: fblgit/una-cybertron-7b-v2-bf16
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- allenai/ultrafeedback_binarized_cleaned
inference: false
library_name: transformers
license: apache-2.0
model_creator: FBL
model_name: Una Cybertron 7B v2
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- juanako
- UNA
- cybertron
- fbl
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Una Cybertron 7B v2 - GGUF
- Model creator: [FBL](https://huggingface.co/fblgit)
- Original model: [Una Cybertron 7B v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [FBL's Una Cybertron 7B v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/una-cybertron-7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF)
* [FBL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
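The ChatML string above can be built from a message list with a few lines of Python; this is a generic, hypothetical helper rather than part of the original card.

```python
# Sketch: render a list of chat messages into the ChatML format shown above.
def to_chatml(messages: list[dict]) -> str:
    """messages: [{"role": "system"|"user"|"assistant", "content": "..."}]"""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")  # leave the assistant turn open for generation
    return "\n".join(parts)

print(to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain QKV briefly."},
]))
```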
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [una-cybertron-7b-v2-bf16.Q2_K.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [una-cybertron-7b-v2-bf16.Q3_K_S.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [una-cybertron-7b-v2-bf16.Q3_K_M.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [una-cybertron-7b-v2-bf16.Q3_K_L.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [una-cybertron-7b-v2-bf16.Q4_0.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [una-cybertron-7b-v2-bf16.Q4_K_S.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [una-cybertron-7b-v2-bf16.Q4_K_M.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [una-cybertron-7b-v2-bf16.Q5_0.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [una-cybertron-7b-v2-bf16.Q5_K_S.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [una-cybertron-7b-v2-bf16.Q5_K_M.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [una-cybertron-7b-v2-bf16.Q6_K.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [una-cybertron-7b-v2-bf16.Q8_0.gguf](https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF/blob/main/una-cybertron-7b-v2-bf16.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/una-cybertron-7B-v2-GGUF and below it, a specific filename to download, such as: una-cybertron-7b-v2-bf16.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/una-cybertron-7B-v2-GGUF una-cybertron-7b-v2-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/una-cybertron-7B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/una-cybertron-7B-v2-GGUF una-cybertron-7b-v2-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m una-cybertron-7b-v2-bf16.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./una-cybertron-7b-v2-bf16.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./una-cybertron-7b-v2-bf16.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model uses ChatML
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: FBL's Una Cybertron 7B v2
# Model Card for una-cybertron-7b-v2-bf16 (UNA: Uniform Neural Alignment)
We strike back, introducing **Cybertron 7B v2**, a 7B MistralAI-based model, the best in its series. Trained with SFT, DPO and UNA (Uniform Neural Alignment) on multiple datasets.
It scores [EXACTLY](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-cybertron-7b-v2-bf16) **#1** with a **69.67**+ score on the HF LeaderBoard, and is the **#8** top score across ALL sizes.
* v1 scored **#1** on 2 December 2023 with 69.43. A few models were released since, but only one can survive: CYBERTRON!
* v2 scored **#1** on 5 December 2023 with 69.67
| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
| [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2) | 68.29 | 67.49 | 83.92 | 63.55 | 59.68 | 79.95 | 55.12 |
| [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2) | 63.54 | 66.47 | 85.17 | 64.49 | 57.6 | 79.16 | 28.35 |
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **69.49** | **68.43** | **85.85** | 63.34 | **63.28** | **80.90** | **55.12** |
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **69.67** | **68.26** | **85.?4** | 63.23 | **64.63** | **81.37** | **55.04** |
The model excels at mathematics, logic and reasoning, and is overall very capable. It can reason deeply over the context and prompt, and gives the impression of not missing any details.
## Model Details
Trained with UNA: the Uniform Neural Alignment technique (paper coming soon).
* What is **NOT** UNA? It is not a merged-layers model. It is not SLERP, SLURP or anything similar.
* What **is** UNA? A formula & a technique to *TAME* models.
* When will the code and paper be released? When there is time; contribute and it will be faster.
### Model Description
- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.]([email protected])
- **Investors:** [CONTACT HERE]([email protected])
- **Model type:** MistralAI 7B
- **Funded by Cybertron's H100s**, with a few hours of training.
### Prompt
The model is very good and works well with almost any prompt, but the ChatML format and the Alpaca system prompt give the best results:
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```
```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!
### Human: Explain QKV
### Assistant:
```
```
[Round <|round|>]
问:Explain QKV
答:
```
```
[Round <|round|>]
Question:Explain QKV
Answer:
```
```
Question:Explain QKV
Answer:
```
### Framework versions
- Transformers 4.35.0-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citations
If you find Cybertron, Juanako or any of our models useful, especially if you use them for your big brand, or if you clone/merge my models, please cite:
```
@misc{unacybertron7b,
title={Cybertron: Uniform Neural Alignment},
author={Xavier Murias},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16}},
}
```
Special thanks to @TheBloke & @bartowski for converting the models and their support to the community. Thank you!
<!-- original-model-card end -->
|
Artefact2/bagel-dpo-34b-v0.2-GGUF | Artefact2 | 2024-06-26T08:45:46Z | 398 | 3 | null | [
"gguf",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-25T06:32:48Z | ---
language:
- en
license: apache-2.0
---
<!-- inline base64 model image omitted -->
3EJWpJSjfJKVSYAkZAzkj076/Uv2CWS1dNdAJqLi8wqqvjzhJeICnkJCkhuCTMJClECeZ7aEuDM/tyMK+nmHyavT4RvxKb8X4/3LJZf/wAefs8s1IKvqByuvrrqUhz4h8ttqjI8qYkSf6lHtom8fZH9k1woRR0Fqp7U6uW2naR0tuSQSQnJ3YzBB4mNEdUdTJcYTQW4NssMoCEwYSkDAAHtGsrqr5aF3Cjq7xTNPP219T9K86tSSwsgJURCgCIA+9PHA1nVYSig1UW0uo9ZjbWfrKP1x0R1L9nz7TSKtdXQOOgoq1qjxeAlKiT5VAAcxMY0Wz9pFsfXT/GMuMVDgDbjRalJdjaFFU+UcGM/hq8dc9XUFz+za5095aFSw4iEFICihZMJV9Dme0axFjp21VVFS0y7ql2qdG5wtBaEpVgSFEHd6FUYkeh0njAyD39iVtpokdSfpvphi9uOpuNS8kt1CnAlICCo4BM5AGCY9dNeoeibWinfftqlNPUfnW0pYXvEEEgQIVHsQdBXxyo6cuqamnWp1NQ0WlhvastvJwsK2nzJP3gpMzJxrli43C5PVVKjelxQ8MhKpMAwOSOxMDv9Bozk/mgglttE0foDqxlVBT9MuMlYZRtQ8FyAqAJA7BUxB4Kjx213pplp7ptdvIzSuKbgmTBJUPpnX5ms1GKe/UlbTtFptzJAnadwUAczHmSMe2ONb90zemPiqepbX/49xp0qkmJWJER6wBr57XKHPtlfEDiAuKKdg2nqV5hunLtQz50BIlS25JKfaRuT2GANabaWWlIU20ZbQAts9i2cjP0I+ms060qFUl7ZrX3XU0u3xA0gkJWUklU+pKSYnjbxk6tHSl6eVS0dN8MGxBZak7ZSVSExnKTIjvPtqho85XDZN/8Akl/iemGTJuUfw/8Av95c6m52q1UrlZW1gaZaEkgTJ7Ae59tIL39pfS1tsr4+LfqH0SA0yncZiYKsAGI5OND3Lput6grgt4qKESYJIBBEADOD+Pb61q6/Z7azto61C1MMNyhAWQAsmTxE8RB9dJPq8jnqhDYNDp0rcbMzBfWvVF6qzWVFxW3T5DTBa3NBMzBA7weTzr2qq66tS2D4UoSfMgxuPIkE4Mdsz66Iv/TztjuarRb1qW06kOltagUpGJO7BOYH5apT9e5SVL1PVpWwYIbOYVBmff5aGE3mxLdrXti2+XGqRVOprFoddOUqSNo9eIx6ap79Q+pZSnG/JJwD9dPql6nqXgXVFSiMmYnjtHb8tInGFKWvw5UEiSRGNWMKhRyIF3vicMpAAI9fw0xbCiQpQIBGD24j++lrWII57aYtK8gGT76+un5waMNZHAJx+mrc1R2h62W9VBVPruG1Rq232wlpGRsDZSSpWMaqjAUUEpB2pEqOfWPoMgf96b0TzzDiX2lFKkeZKgSIPz9tcI+JgEA8wx1hTbylKR4aVHyhOQfYHv8AQ6IZAAEDPcg68p69xMeK0l4HKgRk/M9/qDo1DlA6tPhsvMmPOQQUg84B7cZxramjzAOm4Eg9SZgYITBMQknIBggY/OJ1eeiiKO5qua29y6cQ2oCQhxXlSuPbJxwQPTVboaNpxoOodDoSohSYKI4IlSogEyJExHvgu51i6QqpG7giladx4ZSRuVE7QAMHJMyeffWM2QUVWawYGBGRhxJLhd6+poRVS89WJqFIUC6lO4GCUyoAmMcjBOMjNYpuo7hSW+89RuUu2rdqvgqNhSdp3984x5h6YTHfXdZWroBQVdUEVLbyt87xuUneJTECJk5xz66CqnbfcaC3MWOmLTSqpwpaW5vIdUhSUk4HJUmB2x76BhAUbRGs5L+8xyjo6ttnTqevb5S1V1eWfFLjzi20RnKNpG3vEg8ciY1Bf+t7vS9H0t0t9cmpVQueNTLeRu8Smc5bcBJmJKSPWc8ar19uq7j4t0pmap1+opUUqmyqEswAkoicmeByfTSLqJmotPRjdtqyEvYZU1uBKVbitQx/x40Jw1GxD4ytgAzq9dYXbrR23UbNpLbajKGaRpSgcyqExJOCeexHOov4yWb6+9W7HKJDQYQ4whKVKCZIJj7xyQTkmfbFotNustt6RoK4WZA8OkbbrHF1BTUPLcKpKEz5kiUmEwB6mdZ7U2i/B9VMt1LdPbyphS1LCAkwVbciVEgn150uVUmNqxqoyf6katdbUC3XJ80lU3sSlCgSoGQdxjynjE/TMaqYUq7XdCaejcBEeGho7lgYGBknGjL8w3R1P8Pt1O203TpQHVzuUtwpG4yZIkzgcRqxW2jp6ToxpxCYL7ZWFJSCSvdt8x5IAE4/KTOkUDkTbt4MTVY6gtzqXKwPUziyDucJSpPp8jHbEfIjSmseeeCVrfcZU4tSnUJUQjxE8nb6nmdWl6vqnbMKepWtTZVKUbirZKIMScCYO3tuMcxqqXF5DVxabahRTtKgII3c4P4H2410Eg0ZnaCpInK3bsYqHqfy8kxKgPlPHfj304tyilpSEtKeYdRKglPGPvTH150yWaR1ljwQlZKk7VYCiMSCB2+8Zk+uhHrPWKbUqhWS2uSEjvnt6SR/1oORlHcZwhz5g7jbJeZIClIBggqH4T7+pH6a4LT5aW0uqQ4yte8pB3ExIzImIJxI/TXDU0xdbWyturJAK1JKoHtPBPOPTBGmCaCtSg1K24Dp8rzvlSfUAkD1HJ/XWPVri4yMF+4CDCkRUAuJcASo+ZKUApPbt/bRNM0QiqpA6dzjSQ2VAkp2qBAHpicd/XOnlDZ2rktIerClQASlDDW8qUIAACSNvpOZ9NWil6Jr6tD7zdO6WWlbUOPPJSpPYSRMn2nMzoR1KobhDgZl2/MzEl9tSd3lbBnvCe319Pb5av8A0CahDgLboQHnEoSlStoJwO5G3JmeNeXDoypUhSHW1s1BVtIKZnj8zoekpK+00gp/GACVKKkpIIGIzgd49OPbTGPUrkNAxHPpjjXdUE6yU4OsnEVSkOtpgQFSEz6H0Bzr5vo240LbV0eaNXSU9QHFsJATuRt3ZB9D6wDPODp3Q0ltF0YrrwlRrgqHGnSFs7uEk5JKRz39eNW5rqxdsslxoWKairVXhnw/FBJSpMnMkCdpmJiB6jTWNPdsaStTmYJ6uE+aIPxK291tV9N3l26dL1y6Zyq2obZXC/EbgGVCBukkiJkEHEY0Um29a9S3d3qLrKsoGgtgNsKuYShVOPvHYyQQkxuGQJ50LZ26ay2xu63dpsVlSVCiXt3fCqyQ4YAndMwIxxkHVlZ+z24VVoav/XHUFtsTVQFBly5vQ7UhR4Q0CmcGBxIIwIGt7QvcWLOx9olYf+zSjr6T4S39f21t5KQ4lDzRQVhUFO5SSTGURjvOJzdrCzVdPdPWKz9Q0bCq6z+MaSoad8RB3qIUtCknMgxBEp9BOhK77N/BtVRUM01Hd00q2nK5m0vRXMJQJQFsubgqAZ2ggmOFRqs3S5Vdvom6hFS9cbTVKCmagI/l0eCG0JJlSyUgApIxBzOupX5p7IWPsbuPLx1TS1TZepK4KRuKFLZVORiOeQdZ7VXNmofL
TlIzWsEjc3UJJS5mRISR39CPz0hqeoaUuuNK8OmJVvUE7tqu0gknBGIkRwBjQKb01RqCaVxTtRyla5Ed5zyfxjtoOVt/PmUtNjXGJcL1eFWiwOUK0ISEoygYCZAhIHbEfsaq/T9e5Top6etALJCkKBEEApn9QD2mI1FcmKq41bbrVO8afYlaErO8kxkqOO+7GIjjB13aCV1obdaDQbf2qCjChAJJHMcaT2bAV+ZSB9QDJ1XUeV1LQv2hNE0tyobSVLcCEwUgQoBJJ/4lWRPY99TWw01D1G867VqdZU4UuNKbUQUqQDMg89/p7a0S33K2W+2MBVLTOtOO+G4kIBUBETzkEHvMg86rd5Yp2b3UMWmgp1U4dSWnQdi4I8iYKiJ7QAYjnSwJ/JGa53y5dHXulo03BN1cVUqLaEArSfKhMoyBJBCVR2x8jpn0fc21sFQUT8K8HElWQUeIknn23/sao9ZQVVoZVWvUgW042VgqSUgpUACoZ9TI+Xtoro6saBUyH0HxWXURMEZUJ5Mffn6an6rTlLYwnrrkKMp+f7TZPtDpkrsDjpwGTuUQYITlKvyJ1QKC8XUmhFlCQ++NhYJICXhCSkDvJz7TyIzoV/qhW9JuLQ2HS5TeIrf93KZI9+Yx+Os8+y26W6ruNdSXBDLS2HVOMPugJ2KJhOcRg5z3HYDSWh3bGX4j+pK0HM/TdodqFdPU9bdPDbqHUBbwTwlUQRJ7ggj21S+obkDVqZYKpPKwmQB6T6mT+umL99apad1h5px0LSmoSW/NG8Z758wVqjvvuVpKqtp1CVkpjIKu/Y/T++s5fcdoiWFdts0rF8SK+8NO0zO7YCFEkkrPoT7azjrC3pL3xbbvhuNjvkK7dh89avVKRTsq2lKQkEyRP7+esx6hfTWuFoR4bfJn7x/vpnDi2kHzD+t8ShVdHSCmadaUS8onxEbY2GSMGTM8zAie8aWKAbaIJIJP01ZHmmwFSDBGMYOlNUkEk7UgDywM/j7xqte6ADlYhawJic8xJGmLZECEwPUjQVOoyE5AOSJie+mbABIESBkHiPb8tfSEz44LcIZPEGCeY76a052pjuTOTn20AyhCVAKPOmDOACMq7xro5gzYh7QXHlBE/npiyjaAeCMjBzoBhSxBEgDvpiypaimYAGNFAiTmMaN1xohaMnbtBUMjEDPt+51bemrJ01/E6Sp6wuyG3KoEUlO61vSlWfMQCJGYEgwfmNJLPZa6vZcqmacmnZI8R1UBCPb6+mdF9TUKh058YadlxNrbUqnqXlKaSkrBBKUZ3EDv7xidBzAMKHcPpmZSSTxFdzb6Kqlsrpq2otim7kugqmqeC2gqMtVCAoEAztlOZk5kDVPtrdIzT1tEEv1a0u7XXGkKMrClJ3AQeQAYxO6O2qSKyoS/TIWXKpJqg++YVtWQSQB+HYHjWhfZ/wBVde9OU9Rb7NZUFFW8XlvPUu+PluIkf9zpHD7XN9SrmG5BUcbbqpKawW9LlW4YFSqi2VLhgf1FIO6CMzJ9Tk6mb6VtNPWW+t6ikXdpXit0q6lSEMifJLfhErJifQx6AavNMz9sFXamhcbU2miAQtLaGG0Ljck7Z3H2PtPONWeru3VVJbnbnXOt0tOkwFF1tQWckhJQDJifSI02xVhV1EEOw9XMc6s6Jut1dTUUtM/T075/lArUtKxAKypMApzJE/e0kv1j6fsVTTU9E1cGmU7DU+NTuL3pBBJBCQAqQoYxwdXO8da3e71NLQs1b9El9+FvMOJ3lIBMAwYBj0k+g1Wr9XqWlTX8TuylDBU5UNrBznHh65/Sh1L7jxxCDWHHkGLaORf71/iY5dq5VZcatwU6El9cqWGyhXziBnI7CYnR9ir7jRWjwEsM1tE89tS1UAgNGJUonsPb64k6mvLS2i4s1jrwVylaUz+IjXDPhM2CmeqHkOIZKitneQ44TIOAMDPM+uMaVdTjO0SliYZgWbxBq124VzL1RQ0yjThRQnaPImIJhI9JBzPPrpOy066X6yqZKm21BC3EpkJ7Z1eaZLNH0kTQuuMJfZLhDpBUvcqSJAEiEj9flUlUzjlIoNJQltMBSnDtB7nn1OM6WDWSI4FFAzhv4ikWlxqHGD5doVg+n17xq89P19MUiXA0VZ8xgD051QVoqKEBsubW1pKgr+leOJ7x89d0tor6+kbqGCPEyYCskfL66xkQOOTUNjIXocy19UXqmqqxhFoCDUN+YvoEbR8++Y+UaKYqLnXobNwfcrXkgpb3kJDffcBA9uRjnPavWhhzxU0rqApxKgpSAcrURgk+0/T5zGi2m1VN0qDR0aFuMshJedSQBMgesAAY57zBManZax0oH3l3CinEcrn9BB7G2+xUtMNJLanV/wA51BlRRMkZGB8jnvI1tdnaNcKZS6dRYab2tsspJCcxgwOREqOVT6QAqsfTzLd5oaVNcytXifziPMSQJKJMTHlGfX1A1rNltppW3XFKUVLWZJG0rH/Ij1MciNZxYTqGvxI2v1q6RaHJnNNYaLw91VSNOurELKwFA+0Rx7e2q51h0VTV1rfqrc0lqobRuDaEJG8QZAwIxPz1fhxxx66+KQfQD31dTAiKFAnxLa7Kzlye5+Meo6JSqZqoYeC2DvWWEDa5TGYkk/05xE8Dg6adP0dZVUzT9Yw6ilfcTtdLPkW0B5oWRHCSMRk886u/2oWq122/vVlDTEJQU+OgEFJJ/pAxzjGQMaqlRc01lbb6RurVS0raQhtgKWUIggnywOU44ET2IydQe4+XDoBLJ0pQW2r67uvVF0cFdZOnqZVU4xTkuFakqAQkoMebfuMTiI1FdulenOoL3bOtuoftERcauuqW6hkJYWhtDQcIDexU+GgHYN3AII5M6L6ADVOjrzpq30bFdXOUqXilxguIecaXvUgIUrzLKYVBI9YAGanf+qrJfKKi6hudhXaath4+A20qG6ptJUpCPDI8sZkzBgjkCMZeFhNOAXP0kNVcrr9nPWjd+o7bU1dxqakqrXQlZbCN8KCoHmUQDE4TgwScNPtIoqeh6i6go7aSzTMPU11piw3lvxgpLqQJ5KgrGIKj21SrTTdTdX3qiqLbfXCzVOxWha9pYQFSpRST5hBERPIETq09Y3hq6dUdR19ptiq62oUzShaG9yQ0ySFLIBHKlO4EfckcjQcG4CjD6zaxBHfEpnRtstdw+02iUuiD6k09S5SUtUQ54tQhvc2lXAVKpVHf5asKnb3d+grzcevKNwEVKW7YuoY8JxK/PvCMAlIGwYwN0azl2qTWdQU1aF/wxuh/mH4EQps4IgD09YzGrzX3FN9cc/it1rq59pKm0BatoRHcBRO70IHHPY664scTyHaQW6jRn7Q7dbbYmi6ctFDQU7bIDlQ95nVGI2nAlXJyTjMaqV1u7V0bt7qG6V+sRLS1pQUrWSCETBHmBMBXaR6aBqelLxb33luUKKunZIUp6jeDqWxymTnic9/loqketdI3cVJZpWK1JSGmH2lK3nBSQQPUkQJ+RjAMh3kfSP4EXCjMvmbR0D0fYL3VporpfakVfhyfhQCgGJI
ClEzEHOJj31L1f0VY+jy/XuMP3JphUrW1VKKkpJkFSCmDyFYPaYBGqV9mS+rarrFF7L1Im2NpU5VBltCQlGR5wkDZkcn01J151fS3S83ECtcbb2hvYhRO8hJ2AgkDhWcGMjM6XyIKvzDYGdnq+J3V3+pvlvFHXO1CqNLaaZtypSQGkAxHBmAqYAMemqdaFuUj1OErACHVpMnIGDnAjGibTUWdNEqoFXtf8RDYAAQRiVGUnPAAzmO0am+Kt1N1S27Z31PUQUXUKeha0BWIJgbojkgTPGuZVJ4PMJtGzcvFGbV0rdTdOjWqGqe3PtNlhIWoFSkgkDAGIgj3jWNUDVyoeo6imNE6lhSSlbikkJEHaQSR/wDUHtzHfVk6OqalVQ+1SlHiqdO1W0qJ8xBjiBBmTAHMTxWL0Grd1yHqymWqoLy9pQqUpBkzESSd3E4941Lw4xjzsB5lE5C+nF+Jv3TdYa23VVW4sqUEhqmQpJkNpweOPNuMHInvGpHH3DT+O6rbMkpOY0ltFxZctVLUUsrIQG3DG1JIgGAOBB+saKuDhTSBagCBgqOQMfrrKISbi+RtvErt3qXnVKbQCR68z/nVQqmm2xG2RyTGnVwqVrWQlUk4gnn20meAMlzBiDAmMfv8dOY1AgGYxDcW0pA28EcDt7aQVKWSABIUO8wP+9WKtZWsgFJHofXVdrGlJI5KY5jTIEyCTK3TCSJGJ4HJ03pVKStJSQlQMieBpW0mBuAOOfbR7ZO8Yj5jjX0Z5nzgBBjQLCj5skmZGj2FDEQMQZ7nStomAUjI7n8tHsmTlX09deUTDCzLBQU1PUMqBrmGqgeZLCyQVJmJ+pgR300+IonaGlp26LwHW/8A2PpWpRc+af8ABGqy2U+K05kOMqC0qByCDIP46cNuqcJUSNyvMSRE+uBoi2Tz1F8iqKrubBYbx07R9GWu0Vbqk1a3XahStxbBkGMwSfKBjM/TTj7SeouiajoKko5FU0nw9zSCkE/iD+PvrDnk1NQ0VoMupTG8qiBG2IHtqjX526UtOampUoqWSUtKKiTGPuiYH1xoT8LR+v7zuNScm4fT9pvd86j6EuF4auPT1G6l1lkMKCkIDbLcEJCCAIz7enroiz9YWq3XJisrmiqnacSt1AKVSN2Z9e5g+mvyq8jqNqqcpCl+hqsKXTLGxZJG5JIUBGMj1n1OmvTjfWdzuwpaaiqbu3tCnW0qLaIifMsAHSoMfKHbQM/b3Uf2tdNq6euCG6jxPiKZaUNNJCVoJBSTORPPbGsLv/WdGOmrJarJUO1aGXnt6HVALG5Up3BOOABA1ldXTX6nFYwqjRaKilWlS6ao3ueG0VFIX4hVBSTjPpIOq7U3u4BlDAUhSFmQpLYE5kLERAJ9fT6a2SCm2oFMRGQPd1NEaublRfKVxbAZl51xICYCgUkzM/LiI0Nd6gKdWcfe/vrOmKi8IcS/QK8JQlQKQkqAJ2k98E+umlwReaStpGLheXQqp2bYb3/eSCD2n7wHfvjGj/1K48ZUjs3+0z/QtlzjID0K/cn/ADIr65DSlETnj10ubqKZ+jaTTOqYdSvcSACVKJiDPIAJwJie+pa2juzY8G7VLLDqyfDbUI3wJJkYH+Todzp9bdOxUNvqQ9MqQoQAZgbeCT9MfTU/LmTIQVMr6fTZMamxH12uDSGFUyjCiIJA8qUwARzzAAwMyfTNcT4lyaeKkqaYbMhKeDjk8Scf9a431K/GVVvKcdUJSDgKPH4RJ+muXaZ9NM6UPuLWjbgHBn/X7zoCrXmNE7p0G/iLehoE7T5klWAkwJz35/LRdG3c6BBDLyPDJgjcCJ4ifT2/LQ7YeUDtLYTCUoJjM7Rn1740W4pxpDTadqKoiVgmQkHgRgTBk/prD31G8IxHliblislK3TLTU1FQHH3l4BlIOCDmRA98fjGtEsF0+Bcbom1FtaCDlMoQnaYIAPO0jMGJB51ldK0+xUqVUJLjhwnxRCFAiBHGPSIzrSunGEvVtE2ltwOvrG9SEbidu0EBJIhQjIngj11LzrZ58yn6m5QK4E/Q/TfTll/hVvqmUKcgpfSVECFQcGOYk8k+uraAQIiTqvdL1jlXbWnXFpWpwELKUbBIIAO3sSmCRqygAAkCfnq7pgoxihPzb8QZznYOepAlL3iubygtyNgSDIxmTqSMZGO2u8QMZGoy82V+El1Bc7I3eb8P366Y4WIbWfkDr4mIfaJ1BRtX25UDbCK1SkBCkoRuVujbnGIOJHPoQBrNR9n3Ut3bTcqd34eja2uBa1kxHoAcJJwTiYmI1p3XHTzNHf6u8UCapioddwp99KGXFHJCAYwMTmBxjVAvdw6kK2X2Lkll2jYWauk3bEvESSUlJ2rxj2jHuvldwnsn1P4bj07OBnNAgSR964MXq2dSdNvFNxtxDTg8QBpGQFOqHcbIChmeZIjTO59e/Z71ZU0Ny6k6XuSK1txb7T1sqUhp8oG0q8NyCEwJ29pOedZFYr3dEX16sYuhpVlOA4CGnhzCifu+xgx8tOrj1PTLfQ7demGfiEtraQ4EmClQJUE7VAEGTxPPYnR8b7lG7gxbUYvSysMNso6PmvrGdd1VS09irbZ0L00u00JQhVZVlYXUKbWrykEE7U5OQTHp6VbqutYsNQ9Q0FUh6nLSElKSqW1bYLcAgHcIJmdoHrz7c+oblS2OjrKOlTS0tSj4dv4cFCwlMEIUtRJHIIjmfUaadNfZtRdQUAqbvekIuC6P41u3gKBaZPC1QDySDEzBmDOvMRXHMwiMGtxQg/2V0t3ct19vFqtzdbe0hCqIPNBYI3fzVICgUqUEkRMxJwdWX7QaumTdbHcK20pp745RtKr2qEIQ0qozG5IBztEkCIz3GEFNRrtFXSLstzZNPTI/lpDwRuG4yolREpkk4gj8tWSzdMtXVdXf+pU1b1WunqHGmmlJQUlMFIBUJmVAAbVcHMDKrZFAsx/0nY0OjKRT3ZLlxco664LYYQ6lKkIUorW3mSggQnaUj73rwY1Bcbzb/wD5Haa20UyW1UrZK5WpSQdxO0ScQCEniZ7d2146ZsVGho22oU6FLCSppxNQjJ/5BCe/eCB9NVu6U5o6JbLVI/ThobS4oJUt7lWQCdon0iY9dC4bmOAFQB3LhUu/HIqup3uuqW3VtzX4iqVqldcUNp2gHbCZ8s8GPXVIeTX1dau2Lu7T1OhW9DyyoIXmP6hImSc6ZdOLpqqyPv1TQSmncSlRSYlOzco+5kTpFbFs/wAQaec3q2pCygJCpwBmZwZ99eIoUPE8gs8w61BKX2yoFaQozGB29COOcaPo0VKkF90EKKVSpQjd2HHPHPf89Q3F23XC5urtdGKRlxt0hpB3ISfRJJPI02sj6X0CkUClacrWSSAAIgR+uONLO4UFjHxhLAKJp/SNjdZtjVytS0/HJUpSkunyOjgj2yBxqqdeFt6707tzcet7/i7/AAktFW1JQAcggTuH5cxrTukFp/hSPOCQCMJCYyQNZj9pyEPX5VQVKAabSqAmQQFGe/bcDEdtRdJkLaim57lDPjCYzt4HE0rp1D
X/AMWpUMOSnaUlQACjnn5xHrGpK2lQX0uLSC60NqVE8e3OlPRFQKnpRT7KkqSh9YjcJjGQO49xxOmVRUACMhXJI1VVKkZ3sxLXsqaeUlUbgYgGdJHwVEgGD6n986evhTm4kxjkjnSOsVAOOOTzGiBagt1mKKl2CqTPv2H7zpHWqQsEHkie/PppvVFJZMJ8x7nk/L89JH2yZJI9xzrdTY45MrLaztKATtJmJwT6nRrKZAMye47+mgGYKSdwJHAHJ0wpi591CiQeQBM9x+B1fJkXbZ5jAhCQEthUkZ3DIPfRDIIIn5jvqBhtaz5QSTnaMk/KOdGFCkLhSClUAkHkd41wNOnESLhrMEDtGI/ffTNkAgHfAIyT/rSllfCfX05Gj2nFTGST66KDFWT5jGC5tSrxC2lW5WxZTxnMcie3eNdtXFVD1AHlWxdaDSmnUFAltAWsAKJAxKiBP6a7t7aXHtqshUAgnHPJ9tLLxcHaNh1FLWE1HjB5JQ5JZbgAmPWDGeBnk6DlKg0OzCYw5UX0P8wg0YuVuvV2qKOnX4zzNG2qrURUJUlQQS2RwSQRGcR65t7dXR9OUxt9M7UIr26NVWVBKSlxIyUFZJIVEnII7YJzk1zutVW1qmbAuvuFEy2h5xLySlSXI3KPJkhUe59BOtfV0t1H1L0dS1wt9xtZqE7KllVMqVQYlMiSknP4A5A0uvxCZfBPUrnXhZv32bUt3Qyo1TSEusPrHmW2oje0qOQP8HWMPW2tXWtBYkVKQsthUqLZIieIH6RJAjX6FU11HYKJFBb3K9iip0eE00m3ockBMYKgZPJmBz2xql3961V6Gh1BXqNc24HKarSyG1kbtqmnMDAJBJzAJMmI1tuPM9iYnipTKtFlatn/AISC7UuHw1lGQrMGRGAImSe8a5XWuw3WUJpm2mPDaWlalSQE7SU7QNs5yeZiZxq0JpWqG91vTTDKaiqc/wDI8MHaGFbUhSVKXG7JPzyeDqo1XT3UNsuLgQkIUtzZt3STPCYOM+h9R6jS+VEGIDdZPz4j2DM75jS0AB15k1ptSq+to7TYGFuvreLikVQCxtgZCoGyIyo88zjQ79japkKXRXqmrk71Qgbkx6iVAdzziffWp9BpqOl0XGxXmwIt95q7C+6w8d4cdEkbiCT88Afd0rp/s4qKjqOnUbixXpqkFSQ0I2RiFAgZzPfSAVi64x237ymGX03zMPavf0mWobWxWO07tOyH2kT4SSDuk8zmYH4fTTWrttIpBVbXQ67CS4kJhCTIKpJPpHrkx21eOtugbb05drS7cStNC+4lLpTCVAT5gkwcgDEz9dZ/cLGml6iUzbHlP0CXfK6SUpW3IGCYkZI78aO4ZCcbiiItjdMqjNhNqeoKqipvjFLp1qLDJCnVFYlPIgevJGOY1H1DSIauri0KSQH1IlMCMnt9PTRqKe2KqtlQ+unQUbVLZypUSCII5nA9dF1dAzWKNyqTuRVUrTqQ1JWtweRZiDgFJJJjkDvgRfndGUx8be5a+jrdRuVDCqylNTTpTtClncEKgEyn0nd+utBtNqNNeaS5WxTJSqt2KbUgkbS2Uk7YwAE7vz9NZX061cQ9StUzFRVOvj/xmEArWpsEqKikcjBMdwJONbV0zVq8CgXXNLdTUJWpsJdIK3IAAI5Mn0InA4EaSyHdlHiFa0xk9/3mn9JqLtI5uiUrKuwgkTB9MRq0g5yMfrpNZqBFvYS2hYUopO5REFRmJ9hiI9tNwZxP5atae/TFz4XXMGzsV6khEjMT89LX7XSqrTcEtBur8Mo8UCSU4MKB7Y9jpkIgjM+ml9xtSax+mqvGqEuUxUUtIeKELkQQsD70AGPSffR2oiK4yQ1A1cwT7QKJLt4ffSGqdpI8RCSdwTOCUgnyhRkwOInBOFd3pKDq/pZqoqVpoayiTKFOOENpgjCyB5hHHqdNeva241nUKrfS03hkEspdSkIBQIgmBk7VDnjjnVD6pudxqqBqjS+HGg54f8ptSPE2gAqAB7gE8iAOZ1gdSziuxFFn6Jfu1LVKt14pK99hanG20laVL7FLZUhIWoZO0En051X3KdhL6k1IUmmKsKUkjcQjaEKAOCCJnsfUa/RbjFoY6Xobkioo02tumDSQlxI2mIMpH9RVOf1Os0t9l6qv9S7drj09TrofO6xW1VOhLiwklQ3EQTO2CpQM5GRofdxxXHmVOnt9ReelV2qjYbNIipS6h55WxKlbAlSZM5jbgcSDiRq+t9Q19P0Ql1np+lpry0wi1rrwIL7SISE+aNxEQSBEgDOBqo03UtIvp5m3IUhyqLpbRbFNK8IiZSQEJyQcbQfNu7gaLpPs7vVQ7QvXe8t9PhC1LVS1KlVDqQTMJYaCtkAq++oHPAI15A10omshQjdkijxK99FVX1dDTpZJDRdUsFLBQvISkkZJGySDyfUxb7j1l17RIQqhcsFpUhKgPAo2NwVkwFbVY+uuLr9lFHc7YKaz/aBa3a1tR8NqvYdo21TJAS4pJE+6lCecaqVbZX+lAm2dY0JtF5ZaUlJqgS1UNkKCVtOJlKgd2Z55B0DLha7YRvDnSgFMMrut+vayibZqq61XB0rKVtqpGFbkYAAhPrPcdudV+0PPVFwp27046zQukh34dsL2xIAGcR6SOT66AuFWw/XoVRIRWVjh8gYClfLsJiJ+nbR1H1NebbYFWKqeq6AFzc60EgJqMyVLxkgGO8fXWBjr8sK+UNxO0MW6mdvNKFldPUphCwpQCFck8GDyM8xzgar1NTnfUJafDbRKW954OJGfeBp29R1CialhklTK0qVI3ISf6e5ng4P6aYXx2rqN5uaaZ1aWUqK2m0pgHjKOTJjA9tasV3MYwSaAlao3Xk1oaedJWlKwlSQAODwRE/nq29Nh2lq01CUJcCITtWJCjkj641X2KFQfYdSCpKvukQQoQBgwOD2PHtq42xCWdgaVKt29REjaeAARzAnI7n21M1mWhU+i0en4szVulmnPhKl/ZCWmd6gMbdy8SO0R+es86nCKo10tNuOVJCApSJU0kAqKkqnEmBx299F1dZWUtFUVlHU+I0hIAUhRWEjvjvyZjnVap7q+u41A8UEKGVJJBVAwQZHb2xPtlPS4ayFx4g9U59OvkzRrDTMWfomiJrG1OlIUpppBhG6ADu4JmMCY0xd2qAUowqJgGdKW1uOWR+kfIMIZCRBlO4jB+X56bOutMlbjja3UIBVsagKVHAEzBP1+R1UX6yE/JoSFn4VyrQiscU0xPmWkSU/Id9VyuLanlJK4bmARz840xqKlt5CXmm3WkOJCktukFaZAMGOYnnE8wMwpqgCncTyZEenvo92u2oACmu4ofTEKA4+6Qf3jSaoJU6Ss5J3KPp9Mab1WZhQPzMRpS8lJI/mQIkkiCPl6mI0OMKb4lMpifmY4+umjZPh7QOOT66ApmHCkLAlIyYMwPXR7I3CCcD1/ONWiZOAMcW5woWkNAh2ZQtJIIxEc/n7af3KwVlvQ05U06qZTid4Q5jcnkKT6giMf24rlvdRT1Da1bkgEEqAkjPMatF/6uuHUp
phcKlVQaVvwmlK8sJHHyOlnL7wV6lDEMfpkN3EqCkKG4mPcc+2mLJTII/P9dKxKlgg85J45/Y02p20JdbUVeTBVAJIGMZj9e+m1YiTsiAxo1UeAlLjbBcUCJKVQUj1APJOO4jTXpaxWilraq6Xd1iudcb8NtDvCDwomTyITkenvoJaqJysW5S0amKcxsbW6VwfWYz6+3GklztiX+oXaRq0M1VQkJdWNqlK2mRuwgyIH0+hjY9xuJvwJrdqrLVaaa8dQu1LFRVNO+V4IIDHkAJyMqKc8mJ5xoPpb7Ruob5Xufw6icNO0PEStdUULCZwQmCJPMfpqjMU1ePsWudOthbTor3QUloNyngEJAHoO2hLLVXClLNNS1zLlO6WXKNoIhylgS+VmBCTmcncSD2gcC+6hBOw2lu/4P35l++1K4XirsrF+prkhlx0fClt1IKSoyoKkny4BBjn6axdL6627MUV4pqKrdQklSlpBjOeCBwB+zrUuta1bX2d0LzzKXwKkgoVMKltfp9M6xZqsXU3veUISAytPlklXJyZMkHE4/LU5h/yyti5wfWbjU198u95fYt9FZWHaYNoNQApPhoAhO47QDMERmNpxESgu94t9BV3Fup2m4LVS1VK6y0haUukpWsr7iDj5Y7DVktd2tTrRVStButZSW3UBCkhQVBSoKyFHynEYn31m3VCaiovZHwyKVtVQUJYAUFuAGd0qER2xxHvrmUA0DO4CRdRx1bdr79oSqHqGx01Y5d7a0C2KRG80wSUpJWScpXAwOO4zqn2/rWrtN0arKluottW2SQyUqU2DwSkTKRM4MxxwMBVtbW24vtUyq2hCkgOHds3Dd3g5AyeM8dhq69KdP2plikuN+t9SU1NBUOmtEKp2QnckFwx6gSARO4euhdD5jYcgmjV9/WeXDqh/rplVup7XU1FQpO5BWCXFKGZTEx2+fGqpW0Dybo1TvJqKSqG5MLURBJkSkgbckD/sadVKlU3UVldsdupHqioQlCW7YsoQ84SQFIBIUnkGRAn5au9/6Z+Ps7TN5uKXqotBSXGUqc+GGVKla43p28xExI4yqfYb+Y0MnqAKR0JkFwtbbN5bcuCwmiqXErWsSSjO10JHCiFbsTwZjVqNrs7aaJkPLNuqXHEMK8QFSFlOBKQJTuKcQO/BGVFLRrrGFW0U6FVNE4p1tLiilC0ESoGOcAHt3PbRTHSVWopdcqG6XxT5ErUQlskTIIkR2jHOtOw6M4g+ITbaC6VVMHqOnqDcG3zToLCyHGISSAmCOwjM/dnjWjdE1yqu40Y8RTTlOrxCpRBAkAgiI7z+E6T2NxVNcW7mrxGhWLG8pSBsdSCFKMn3VgT97OiaWjRaOpEKtdzarWktLbWpg4PCkkAkzkqH+p0k5BjNGqn6Uo/C+GStLm8LzIIM/XtokH3541Wul6tBtRYSILJ8ySCNs5JmO5/ZB1Y0kBI9xq3gy7kBnw2s05xZSO5KCUjgQeO+hq6vpqCkVUVDobSMCTyeAPxIGuPiUjxS4oNpbMKKjgYBn89ZR1Je371cadlio3eE5LzCQQkNgwEKVGCYkzPPeDrb5aE5pdKcj8jgSqVdxqrxfKKlernG7fcHPh0qCwJUcEkjidpMmMDGqeK1L1Q/06pCaV1ZWqjVtKyqFEbiMyDByPTuQNG3Vf8A+jrKq4O1b10cqYZpqdILJgkTwZgAgEcADmTpb0vYb3buq27q7bnK1SEf+OpbraEAEEcmJCUz37z21xePMqhTfAqpwLDfXLgKWofoqhRAcbcC/ESqP6diSSIyfuiJ51ol0rKBu0XmsR1EaGprGRTilrXQE0gQR/LbaB3LcVAGAed0jEpevOo61NM+7UUJttUG1NqCUoWhOQCkls+WQAM8+hnWf9D0xvXUan3nFqpaHzNpK52rUMqn5AfLA7DRlU5GCzLHYhcmWjpizUHQvTz/AFFeq1NPW1BJWvktE58JA53EdxE8SADNWuf2t1lOn/8AQ2WmaYK1ISuplayQAZISQBz7++dJurbq/wBW9QvtMPrUGCUULRVu8VKQQsgf8zyPWI5jS212WnQw1S1z7geeeQ40wyElxKoIhW7AkKB25OPXTLZtnsxcD5+YsMAI9TObPx8D9Ja7T9rdT4qEX61teG+SVuUqCSgYG4pnPHr24M61BK7VfbAjpbqR0VPT9ckOUdUMqt6lCUvtEgQnI3JxIkwCNYyLbR0dFU3GidVWULTiW1NOo2uLXwlIKSQpBIJMQQO3mGmXQ90r119xtNfW/FuSKwCSA2pS9riTjiSg7QIx21tH9UbH5HicZRi/5MXFSvO0L/RfW9wsvUDtQ0qkSqlW6yQZHaCf6Cnt3B9NWe225nrLpyvqaWoo6H4Ne2nW+lQ8UpAOxESEJgjJmTzmdQ/adUNVVRb7hUQ4+ql+GdIMblNq2oJJHdO30mORqkUdwuVvo1U9O4mH1hLaN5G05zAPfHzmdR8uM3weRLmPINt1YjilqDULorHV0qKdunUpKn20Q6vmUqIEGDAzMfXUt/RSU1e6mzu1aKIhtKfGIKjAG4Y7TJ9vY6nqF2CltRKqcuVruW3xKIjCir3mc+0TEyn3F9ouuwHAsJkYBwfz40BjQ3RrTqGyADzCrfLr7aN25KdyoM4PH9hq821kJQJx9NU+1tgPqKcxjVyoXFpebQltKk8rJWQU/IQZzGDGoOqJdqE+sxnYhjVaQiwVamkEkyYHtiY1mNuW6m90S1sKbQmPFERvEkEkHvGPprQOog0jpip/mbNieUiSTz+/TVU6ar0tFpay9vSEqbaDykioMkhOJ59o4mZGm9De1nPmR9ZQAQTU6VYfeBQrchyoCxAwUNoHp2kjRzilKO1ACiCCABJP7n66AtVM6he+oUkKSjwyRiVKVvWfTkgfTRwqFUdSh9tsOqbMpQqQFfODjPf66oKOLnz7tbVFFRS1FPRNOOsLbbVLaVEQCQogj547xxxpHVu7iIPfngaJXdb+80/T3NDLdKtfiBIVvWpQEAlZA/ICdKn1JUkys7okd5yMHOizPmDPuhSQEqyMqBwB6aXOKQsKSpRBAwAJBM9/z9dTvFbZO4ZnuCD9fx0CSndHHz751kiGWVmmViCR8zpzSlgNL8Vskn7qgYA+Ygz+UaU06kinLYkErBiBGB69uflpgyZB3biBwDmNU2JEAqgwxwtlADaCkDGTMn+2ukLVG089ogxohT7DjLSWWEtrQjatRz4mZmOxjH56jQ0EESoEESRriGEypXRhjIPlK8g5KQYOmlMpxA2CQlYggd+/9tBMJEpBSJGROQfpo9oy4JBGePSfT8dGBihEb04GASk45HGpKh6vsbdZcylD7lX4AQhb5UtI3KBJA+6I7Z9ZGoG3aOmo6l2qVUeIlADKWUBQUuQJWZG1IHoDPHGo6qipUfFP1G2rTUltRZfVATsB2gAH1nn1iM60DAZFBF+ZbOlurmFVDll6k8BmlrSfBdBVtSseu4YHHzj1ibXUdO9PN0qlCoo2KVQ3LUmpSlC4zkAAkYmN0H11hV/FvrLX8LR2inoA8rb8S2ok
E4OEyZn17HGqmupuzVXUUT9VUGT5U7kp8nPcTMHnt+GvFlLeziLjEwX3m5fvtE68YcLdrtK21UFOqWQtsK8VzIK/YRj3+mqp0vS0LVOeobujxW0y2xSk7Uq9SfUZ4/vqm1bDy65woqTUJ3FKFkzvzGOSf7zrXbba27L/AAZ64ULlS8hLbDVM3lSF+UyQRlUnAzB5zpBlNytiZAlHr4+ZYbHdn65aaN+1s25AJQULQUbJSYKVCMn1McZ1JbbdVVPU9EK6vTV0ygqkYaedUFMFQ3EkLB2gwY0Ja7d1LWMV1qsT9XdA4ht9TVvWt9bbe5YLbsDKiFZHYmcgmVXU/UF5tdWxTPsGidpilKg804hbbiVbinJP9Q57zwI0s2Kz3DDKfipc/tF6dsNyuNM2slpxxglLwEbhM7c/0pJMZMxGuaehsFoRTWF16lrad1ohTDFSQp5zIPiqUcjjGIgARGq11TeHKykbrmn3Hi24mqQRuAS25EgKkgkHdgARIB51W6Siq7xd3athJDs+H4pRuSqQSUxk8BRx6aAPy11Dbebk1PQtdM9VvV9JVLhifhiUpKZVuB2knCQDgiSYB4Goa7qp2pS4qpuq3WVrV4bKSR4RgHccwZ9vT31JeenbvcaQ0rLTdQSG0hLAMp2hQ3BMGcTn/Wnb9gsloQiqqL1X11xSgMpLTQZ2J2k+bdO4DjA7TjOsk7uSYUe0UBKTbWr1Q3CkuqKF15poBTQZIASZlKj3mczie5A1bl0NXcqkKdSuiqKhKnlLKUpYnk4QfKCfQGJ40Wzd6llpVBUtG40tQnaXENhvYSYlJTPB/H9YWaRNqpHnKOhdarXBvdU46vYviBsgJz6nudEJB7ngefaKn1Lbbg2wpuqbWh5LilIX4ydgnBJKgZkDMCTMY02oaWjpK1urRU0yHGE+XdVITIJ5UEgRtHcH20ZRWK4XW1FDDbCqVBlZWsltKlZATMKJEDIJ79uV9bb2LZcX7cKqkcU2Nq1UqAFJxO2SDJB7+3OdLbN3QjRfaBZmoUHVtHTPN1jNS2pKlJbqGd25O2MrSsdwZwefXV8peo7bWUxdpHC6pKdxQMED3JMfvvr8/ULdSFIFMrx0ltJcSG07BAO0FXBUTAjBJBidHi6pYtRSHUsKdclOxKQNoGVZPMng8+xGDadHQ0ska9MTpuPJHxH/AFV1TVVFQl5xCi24stNstuff9FBIH3gCBmQN0ckaqV+cuDTXwtEQpwqklG4rWogyAO6Zn/J0T07Uqqrk/fn6UpYplhumCnApQQkyowRycyeylz6aDo79S0d2qrxT3FChQVC1IolrdUpDaQQg7oI+8BkzHEac2gvs/eLIWXHur9BLr0A50j0c3Wr6mq2H7yaVVWgLQpxbbAMSlAlIgqJjJVBPA1U+petbH1Ld26Rq+LTbG1qNRWna4+vkhISMJEEICjMBMngaU3bq2j6nL7NC1urXKpLy3FPAJUp1Z8MBMDaUtyhREjakDg6qV/stlYtFPcemqhm4EunclgeECoD+mIJj/iZkHnJ0yyrjACiDxs2ZiznmM791TRs1zt66afFZQlZRV077u5h/ASsgQNsiRMExxkaG6ZuFKOluoH7YP5KXHC2sohZRswJI7YHaY99VmlYQu3N1VY+w6KhpadgITCeNqEx2JnER7mdNOmFLFkqAUMU9O+vysMoJCRG0pXPKp5JmZ9DpjSteQfeC1KVjP2lLt9Oty/oVShS/Ab3tlIkgpRuTgf8A2g61jo5npWlt6q3qBmncdQFOrYW4CVLIgeeZTB7BUkqmCBqgMpfauBoKt2WdwaWhJCEhJSASUiMxPPH00bQtPNobXRv0jtG4seGpLqEBCAZKlpcIKTHtn1ONCZT4mXNtZjC4VNJUU1c9UuVFIw5TrUllLAK2OD5RKRBJBzBMg6C6Hbs66+sFuFeapunkOVS0JSZdR/SAYOJkq0DVVDb5NrXXl1Na6A5WiQ1IPkTxKhkblD0ETBl7ZaB/p6lrqivYQxUL2oIEDa0gTvTEg7id098f8sM6e/UH0g8yhcBBPfj9ZXer1VdXVVSR5kNOFSYjnEwPpOq9bnHEJKliVpXtZBEGTyr6CPxHpqzXK6ituC/AdSGnMpQCQlAgQD2J54yT9dJWnPEulXXOkFNPMBKYBV7eknP10jlbcxI8yjjQjGqTmoS3U3FSKsPGmbAblECSP9n/AL0/pDbf/jrwWlQfIlGSBMjP4SPrqusVFelgsKeBcUvYEnknvBjABgHI/XRyUhSQ2yDjyoBMgngH6mT9dJZvy1K+lWju+I9szaw2kLXvI9e2SeD37fTVsYhhpTuE7El1ZODAwPpP4ROkVubCik7Eoxwmf76t9kp/G8SqWPITtTOfKMT9STqM/vyE91LBPpYVHX85lW6prknpphhhe5yscLYJxOYJGvOlW2l3VtS2XAGClLSkpSst4IHJEgCfw0v6iqHH745uaJt9KC0kpQFBKic/LAjtGntp6evb1HRVFGW22Fr8R0lyYOAJkSe/r+WqOFBjSh55krUZQx5PXEv7bi/hNyEQ2SShKjKgJ7zyZkz3nUBeE+aSO4H6aK8B0NobkQBBO0AH3xxnQFS2toFIEweR31sEjiSmAYkiLa5CVE7FAnv2/DSGoASqRlMZmNOn1+UgpyTngx8vTSOqJBJJgRz66KpmKgTygpRIMxySZJ0udcDZKSkkRInEeh+X66meele3iDONAOrG6JJJ7jtrfcIpqJKeYJIwD3Om1OVgFoHak5IJgHk6VsJKCFAApHIPf/GmLQMqJIGc+unWNwoWuoc2lKVQpWOYH6ToptHiKG0COBI0IzKgSZJAzA/XR7Q2gdsTzg68p5mH6jOmDYYIcUQpJBSAAQZOc6MgLcKkJ2oOQCZPy0vYUJ4B9oydNqOkfqKhmmYQpx10hKEpgkkmAP3xowHmBJ8QmmBmIzxIzpfd6yoaaSlglLrxKElXYSQCOc+X6TOrDU282qsVRVC0GrQDvbQoL2cDJBP5HSTqSgXU2gv06YWzCiEiCQPn9fprAfcLWZyY9vDCV6hqEUlxNQVNOOLYBaUpIKd6VQSCoESQOfynVbvV2bd6lpKlRQXUK2OlACUE4CowME5j8hqFl3c6acgrK5IBJG3viCM8DHA0HU0SK1tS220oKdu8jCRieTxjPz+etV/3EWs/kIjlhfw1XSO1SPCp6WrQ8ZX5wmckEknAzjv2wNfpizdP2W3dKJeqqdhSnUKeaL380NNLHIJEEkEZPr7a/KbSqyqSKVt5TpKYKEICQqAQeBnA/OTrTq/qvqJvo2jsYuNFVpRTBK2mTuVTpEBKCsgTAMSCZ4nGsZAG/JOqCK3S6U946eTfWKXp2LX8BVGoTT0S/DK9oCSXAcKJCScxEwJnVN6nutR1Lernd6ulpE0lQoEr2KDi0YSFFQmSIBIMR21m4XWv1pVUvtJSkxsdcIRz/UkEHj0nUhdXcb01Z7ZUFLClq3rClFKECSSQTO0R78eukhiZDZj5yq3UZ27Y6wzQVrVY7UtvFoNIcCQoRAQBGFhWf17atHSFBeai8my
Leaoi4tdO8al0NllSgkSVEweDweD76Tv2ekttyU3RVLjgZMgOEAhcR2GORjkRrfrH0O5ZbNT3Og6hdo3KvwS8pphCyvdAjcucbiSTgn6aNjwNmX2wGo1mPTEbzLPSWSj6csdNZKO+9LeC6pLC1tLWXFgZLizuPIHYCJx66onUqjX9Xot1dTW+toQ+GlM22oDaCgJSQvxFJJGVJHJjOQBq0F+42avcq6vqGtuSWnahCaZ5loNENNSQfKTEqBGTxGvz/U9Wv3S/VF1qqalqH20rKVrSJBgiQCJIiMQIgDGs6jAUIBEzpNSMtsjTcBYK3qS2pR044LY1TNE3K5Go3st7ZlKYBlQEEkzE4HOs0u1u6dtlzaUxUXK4eKva3VvvFJWZgnaFmBIIzx3A1WrB1v1GGl2Wm32+z7CKgVDikoeVzO2BKie8GB9NWC22x25dQMLp+oEbUDd4yErKKRAmYWqSU+aPfdydJFK9spI/Ba48peqHrPRGmpmzUJO9imS+kid0SskgSrMyQIGe+hmeiLmmiduag6UubpdfSpAIiAUpB8xB94Hvrdem+krZdq+iVcrzUXJbDIqadTq0LQZAEhIEcQZHEe+lv2hUPg3BNr6auDLdXTtGoqqqrcDgTtSVedJIEJABjPPfuU42UbgagDnD+0zHL4xQ9LuKNXc6WncdbSoUrqit15Mf1pQDA5UCoJInuRpSl2wXVYqqd/4OlSrasVAU6huZ+5tAKhGAVEHiYBnUdrpKO7rNyKUJU4FoXU1SlKdrF7zLqUmZmIzAGT2AGq2T7M0UlI3TXG62q3VzzZdatzzYeW2ZJ80zPMmAqfbMGVWCln+31gGfHuCof1+kymy16lVdwoaJttppDZUl2rUpKRtOeAYJwACT6ZBOoaqrdo/GZs1qa/hbqh4FQ7SKQ7XqEbpUomApQUecdtW/qvou5dLt0bBYK7RSodfqqmgcC3H94hQUjHlAASMeUTqRh+zXzoSoq3KtNKhlG1ukZSQl0g4Bgg7ZEQCPrjS2N1BOV/B68ygcDZR6WIWSOT4mGouVNZb45UW9hamlSl5DYIDS90naSTgTGYx3POhb3fGa2gp7fTUoYpgvcUpSEkqk8Ae4Hz4+T9ijq62/u2m025V3qqwkJZYSo7pyYSDgZBJ7RJGr7Sf/AI39YrbNxqbaC0yfFaoma1pFSoYJClZSg57TxznVNyu3dcjoCr7a5mH0CkpfdWHCFOIlJaTuCDu75HtgT9Y1ZrSWLDQJrHKtLjNWsJgrlbasgKKcbQSn5/LVtpB0xYOpTZb30IbPXUyvL4lU4t0YwsEkpXjMxE+/Fm6s6e+z55y2V198dLNwQEIubaPh6cOp8vhuJQo7FRHmIIVzOCdBDFSGUxpdhVldb4/eUO721292wM0K2WalzlTytqVo7AKjynkSSJ9QTmsPWHqFDtHTrsNdvaUtJdFKopKVJKZ3AEYxwT89XjqO2t9C3B9lxmoctyVDaA5uWyDkQSPMDnnn58pF9XdLlgqbutUDEFpVJB59d8HVHdiyGy1SZtzY+FW4H090zVsoS71CCwwFJX8Nv/mnPmkCdgIxmCfTuD7xdKR1T7tWuKRk+cCBvg+VoDjkZ9Ijtiu3LrFpbJTRhYR92eCfqP7arj7z9wKDVL2pSPKgQEpHMx8tDy5UVfTxeezDYMDs/q5/HQ/3J6KoSp564PoUlJUfDQgRBPOe3p9fbR1alaqVhlinCSob3AkgFJjBPHczn/GgGG2n3A64kikpUBRT/wAsQB8ye3bU1Op1Nx+JeVDriipShmDwAMHjHpGkDKajzCWaOqZWp2pYUgwEIJggTiefSRPvphRNkPgRAR69jx/nRrl4U/TtUaApxKCTuWrcEymCBiYnP07ako20rWViTJmfXU3VPQuW9GLsRvTSUpabELX5BHb1P9/pq6uVTFo6YfqiAlLTXlSTE4gD58ar9npNy0ukSpQgew9fx/T30o6wcdevTFAuoSllLO9CSTG6ckiDM55/XSWBLapvVPuiGktTtddlKYK1OBIceJzJwVSPTdAjW50DXw9vFMQCafyEp4Vifx51nHR1rcW43UqRHxCg2lJMEt5XuPqCEK7f1DjWnLKGWAlACUjgAQAZn9dUSTchZiOFnIdS2CFJBKhAChMfL34z20BcIWvxpMSJiIT7a9W8kLC3FKCQJIB4z76UvVJS6QVieYBI/HS+4kwmz2ic1rDpTvSyoJCN5gEwOJP486rdXjylIJiAY57/AIjTisr3UUhKXCkujYoAwVJwfwntqsVTyiCMnMmO3aY0fESe5h0CwGqQQo8D0E6XOpcnkntAMjjnTF95KyjxFlcDE4j20G4cAzJ5iOPTTQi5Jidor2CAQCIPvo9oqIAyQeY7+ml6FJCQEq7AgDgaMZwtOZjtpgmOqviMmlKQCUqKZ8pIwT8x6aIacUYQYI10pphLDDrRlxQO8KVI9Bjt37njtodakpWCklIPcjn8NYTIDN5cJUR1TkAyhRn1n20xYW60oFDhQexSYj3+eq/SvncEgTGn1GW1pUHSQoDywB+ZnA0bdxFNnNz22W9bF0drXKtyoddESoQRmc8/jqzMrjaypRDbvlcAMSI49hMdxzpXTKSkk7gSnR7bp3gpyeIifYj5Z1xSqihOOGc20oV9sK7bfmHacJdS+vaQ4mduJMkRiB+XfQd6sVTSpat7KW6Jh5gVTYUSC+hRkBOBg45nt6Y0a50irtbXWGgkvESlagREZAmc/wBu06pFU3XfxG3GrdccNMlSEu7SNrZQd0xGJn0n2nTOJg3tk7UKU90Y9KUjVqUhNMllVWslKql1tK1KMAEJSuQhImdxEn5DVzoL/ZLqFUnVlHT3Fh1WxLtMlCCzGApC0wFGDJBP1zqpU9S7Rst1u5txtSCh1TqNwKAZAkkQDn05jManarKWhbdsbVubYFUrxSsGQ7HYGPLkEgjnjtpshANnUnW5JccyudVWRFmvHwRKn6SoPi01TMrSJkAH29+ecTipWt5dsdqqlNIoU62VNqdUkJKZOSDycAfQ++rh1Nd1VXSrTe9ThpXVBKlKBjuNonsQrAAjVaDa6umoHaVtoKZC1uqAKgqCSN6Y5gbe8zqUb/KZaBWg8f8ASIbreok/GPuUtOsqccdCZUhraPMMHMdiM6/YDyW0dF2xIeZUlssplTkEjeMqzj14xr8k9GuUQ6hd/wDkT1JbWHWN6XXluNEgkKQISFlJIHG3t21s/Vf2p2Si+zylo7ferZcKpACVMMLdWqBMmVIRnAETme2q2nCpjnzf4jvzZ6HQif7RutRZmHVENrVXuVKqJhCQVKaclBeUYwDEJA559NZLbunr5fbUa9FGtthakteM4tKQoE7EyCQdoI27gDn5RqTp2/1XUHXVReK5tD1WpMMhSCW2oAG0AcAJECOB9JY9S9TovN4o6QsKq6pxKGHHGSpRfIJCTyCTB9yeedT9WxYbh4ljQYhi9knsdloaLelNwfqlLUplxbg2hIH9IkmZA7as1jr7T07U1Ndd6toU9W2GHGltlQWlKSACYMJ59ZxjB19XM9
PUa6Z+73Fbz4bUPg6NCENtn3WR5jxgD6ydV+9MUF2pEOWu33RDbA3HxUCoRGdxCkgQfp276QDsavqUTjXmu5ols+3Lpjpx95+nStwllDFKhpraltvKlYVHJCRicJ5zrLOoftCruqLxdrgy68pVWopDCT/M2qMdo9ADHy4Oi006altmhqqNJbpVt7UuI/mJkGAoEDBIA99w4Ea96hXZ6E0jNKhu2NFsurQ35CszEbkgEhWYMiII7aYsMtGL1ta5d/smYZZutjVdCz/MqtqGVIAKFbVqQgZMJlKTnnjHdje3+paa70dxctlC90+9RvPXW4KS2qrbrf5m3asnxEKQsNBCEYPcEE6y7oOue/jyTa1U1O434bpddKleGN0q2Se5gzPtJ1rF+6hqKCmp+o3Ps+sdb1E4SU1VNUoqHG1zCVKAIBXH9UGI5Ea7mG6iDA6YHGTY7Mg+0P7Q12i7U9tcY3KS22upSkAhtwsysRHZU8nvrCqvrpCkNNIaDTDJ8JtKCZQBkEcRk9uONJuor/fLjd6uoulMtipeWrxEvIKVTPmBkDuP3OgWGUVNH4a6LxKlahAQgpUREwmJknHy/MLpjCkn5lPcSgHVTZelL+90x0Q5ebZtVfb08pKXRALLKTtSkGMSQok+wHrq/wBl+1DqS30XhXVaf4gysLS8lRM8BQIJM8jAIn01mnS7bFZYGrdXBVC7SuF1tM7RtIJwc4+XMc6+6guyV0BoKZDTKUlaWUNiBK1SpWCY4jGB9cOlAU5k/cRk4ln+3fqCg6isVJf6MNtXS3qaqGHEtgFCVYWjvICsjGNVFV3euHSS7Slt4tKKKlLYUpZQNslU/wD1BSqTxGqD1JcHU2hdvKw67ULQhIBO6E88+p40Ua1VOytRJQhhACoMSEgJiUnuQMZnSSLsQ/rH2AZxXxHnR94F46ZuHTNzqHap6lbcqadT64BQI8VoqJOOFpxgg+us+udiVQVqC243UU1Q2XmVJXJKNxTKgMpMjgxxOnvQ8Ut3rLq+SpmnoKh1YGMqSEgdxkn66kNWf/hRbW0wpaDsbdcHnHdW0AY9z7dzrScOTM5PyKPMrrNv2fEM1rxbaYMoQkyVrMARPaO/b669YtddWAt0lOt9SlhMpAyMROfl+wdGWWjVfrtS0VOXPjVqJffcUPCZaH3lwBwBn3+Z1oqK6kpG6eltdIBbWBFMAkBZTMF5Sv8Amog7c+p4A1zJk2ngQqYwVqZ5U0yLepu3+KCUK/mK4CnYiBjgce2eNBIK/Mp0ElRwkGCn3HOf86fdVqoqivbRRKBcQP5qk49tp9Ve/b64AYYbWoLUAmBBE4GPf999cOUKN3mGxadsjUeoXbqZDbZKZVvxJEQP2NP6anRubC5CQZUBjtgfvj8NBUyUpSCOBoxT6adsEk49O/8AvUh3LtcubQq7ZbbfWppkuVL3hinaQVuLK8p9Bt9P8RpDZKB/qrqJ661gU3QqysjBDXZAzyr8p99JKuqNwW1ZKMxSg7nno/8AYZiScTq82NFY2ypukZC6RUJS6gRtgRIEAH6xE8aNjx7BfkyXne+o9pPCeuoradOxCPKlMQEwCCkf/UCB786aVVXLfiRIKeB3+caFo226WnKCYVECTISJnnuZ5MZ9IA0tr3VIVsQZKu3Y66wIiSbXavifVVSUtqcAJJEbQefTS5ThXQ+Isyo+RQPKdC1dY2hxTS1gjjaDMevfRNrIU0phsBwuyjaTJ9Qfn30Bm28mUkxAiotqFyRPKT906WrdSVFKgUhR2gAfgNM6qkdFUra0UhBkqJECBPOq8uoa+IcDjW8AeWJG30PvH99MY2vgQGTEBy0hq9qFAhYXJyI4zidCLqt4CSIQkYPMZ9dEeV5QQFbSowIAz2HPadLalJaeIWIKTBPYe35acU+DEHSuR1BGAokbRJ9uR30c0hSwIgn0/XQlIrandsk8cxH/AHpiiCAoHaTyOx0w5oxzGormGQlLIbUpQUn8NcpTkQNxGc/njUQQSRvwPlOp0wlB2nj6n8dZBAmmBbmEMkBe7IJOI7f604ZK0p3kwCIwfy99KGlpQQvaFJHMk5JED/Oiad2QSFAicA9/af760SJjYZZKJQWtKQoJBPKzAHudMKZ1sLV5CQcCSAQO8+/acRpBTulZAxuBiBp3TyFCYg8n09tAZqmhjJEatKSkpCQO3v8ATGlt0pClDFaykF+lcC2wowFJmCnjkTGex08pmk7RA3bgIJ/t76ONubqWiw8jxG1iCDruPKUIIgM2IZAVMzq62RAQF1LDj1GtnxPh0jcXwnhQgGIJIPBEe+gqQVlcuouRWd1O4W1JW0E7ExJAj7oj0kk4+dkuSq7p+pXRU1M5V0tOlLpedBO1aiTggGPKBzE5meNUm69VpdYTucbQVK3eAlG1IMnBAGfY9pjVf1EYBwZ8+MWQMUYRR1C+hyg+HQpAWtRKghO2JiAZx3nk8867fqKW0VTdNaUvNobRtcdKp8fHmChJH3h2n66rTTrlZd0+IVONFW9ZAIkA5J9PT8tWm72ijZoTd26ulDq2m3Pg2lHxEpUCJjuIGeYJ1OyZBu5lXFgJShAqy4UT9paoWk1Ar0eZ15YR5yVSREbhHrJn01Hbqf8Ail3FCw2txSwQoNkKW4AmTGCBwpU4iPbXfTVgu3U9zp7dQqa8Rydq3VBsAATBUYAJ2kZPaO+ntf0/V9GXFLV3ozTvkpUEbUrUZgA9wBiZPMjuZ0wnINGK5F2MLHJjK39L0XTjdQOpaG4gIKX2wytBhlYEFeCM5E5gg8yNBpVRUDz9bZn2kGlClU4ebl50ElMBYgSAR6ZB9caTebd1F1P061dbPZ/DRb0pap6g1CUPKSUp3YJGCqQN0AzwY1ntS/eeqmLfZq1NLQi31CqdTzqG2CkryQpISFGIJPMYwCToFF2oQysFWz5iFdyry2UU79KlwZCzuXGeQIgT6gfhpzbkdUXQuIY6hYbokILa0PVXhqUTu4A7fKONE9aWJiwNVD1L1HbXnaKoDfw1MtZdcQpKfMklABEEGJkfMad9IVNrYsFFUvdMrv8Ac3y7UoYWpSWadhJAU6oJIyVYAkficBBXIfZ4jiu2BNz+ZV6OzXp+uVTrrEVVSUrRUhS9wKPKCd5UJMqTBPH01yu1qpb23a76GH6cI8RCEqhpXfcSCpRHtOZ9NW/qOu6fetYvXTlK5SKqUGnqGkOfcc8wO07gQIgc44zGqJQUa1vJeuCLjTMGkX8FLeHVpABSN0goE/PM99cJo0TMqA4tRxNPtVgYRTvsWytslKxSlDwQoqDryincAJkqEExzEe50eacqpRXF6jaeda2op2qINlZKo+8Ocf0x3nvrM6/qS/WHrZLdXUlFuQGUOBSUrKmgjCiBJBIkwCPkNaPS3C5IdVUNuNLDQS5ThR++ZCgAY/qTj69s6o4UTKN6yPqGy4jsbowbqN9+opH6O42k1S3FAGp8NtowABmQYVOAYJOO+sqrnae33t6rtjDrdPVIhtl5KkqCjAUDIGCYOIn01qt8vlyqKZy53OjQUh3cmkaSVOJPZQie54JOB
J4Gsp6k6io7nQPOCh8BdM6AgqpQ1BJ8wlIE4Hf0PpoOW0ye0GN6Y+piIcjiDOVVc1dUvVlc8hx44J8yTiYTyJwP+Me0aDeu9c7uCapDaymTuaJB7GCDjIPr6zru01l+ujCqOnqE1LilbksB9IdWAIISk8yIEd+wzoULUtCnWFBPiqKlsKBkKAPmT8gI7a2CPtPMGJ+TAkJSa341yr+IqOxTnb2EfvGi3KG716E0zFC43TqIKlqUEg+hkmBoBami6gEpp0qVtlIMpJyFT8xn5+2oHhcGHCw/4mxeEbjuQofI86E6E8rCJlVSVeWgNtUFjNqoXU1DjziV1bzUnxckBCPVIIyT3OllwVtD9Nz8O0rdGRJIEToJipqgg0zSiN/DTJKicfln8Ne1JVSMBpaguoWQpZkHbGAOe2sqmwc+Z45BkYbehHlndTb7MqjRTLS/UkGqKhtU4OUtj0Hr/eRD5VNWroXq1nap1Cd6IIEniQIxgQkdgPxqlqeqmVKq6sFxoAg7uR35J+mm1f1JLHw9scWhtQlTykwtZOMScDkSJ40o4JbiUcZFCJPhVh4h5KwVDcEqkEzmSO8/nzppSUyPDKVyEcHv+vJ9tA0W59ZO6U8SckfjptKW2ypaglCfXv76XyvzUq4kAWEt7WGAoHwwkeUgzGMfP5nQAqXah/egp2idqTBK5GIGckkfKNCVNY7UrTTMtFYWIEGAnMkk+w/c6e2W2F9CkAlKwjBTBPPAHfv8p0RcZHvfsxHJlUDYh4H7wuyUtHsYpC2a2rqTsSy2QC2AfvKPaAFYMRzmdau0y1SUjdM0EpQ2IAAgfOPT/GlVhsdDakJbpUgPvABbzpEnEnPAGmTxDRUkecJJyO/adE4kbLkLnjqB1RIkIUJiZGY0gqFrUolSpKex7aa1CtySuSAMAnk6UuBt0lK1KE+gmT27/TS+Qi6j+kRjz4laq9vxTzviAkqwDgng4g6kRXJYWVsumTmePbsP8RqW4W9YUHENkbu44Bj8tJFladwbVIiN37/trgAIoyltJoqY4dvLzbClkeIojhRkHP8A1quqeU6ta1IBM5BwNMP4XWmzC6ONn4NTpZS4TMuQCRzPGeNLFoUgkgGIgmOfbWsaqLqczbtvM7S0pRKyqO+P6fWSdRVJBUUSVAZBPp8u2vfGU35UqKQoQRxOhnQQComcxzpoDmSXbipHSSY4n1JmPbTqlOxZUROODI/fOklFICZxjOnTMKSkLVG0QAe37nRMkoYhJVgKAhICZmRz+Gu20JQg7kkmMGYjPcd+/po6hoF1bnhtpJUfTIPbj8dGv21yifCXaeRu4V3zx8v3OlfVANR3+nJG6osS1ypSDBExET6/T/GvkJSlZPHeB+8aNKTvUdu3mInA9P7Z1y202MlsrJOMxH+xrQeDKSanBSN5IIPA09pKgpACQJ7k5B+Wk1IhSV/ckpzByDpzTMSjxD5SVduB8teJswVbRLFRP7iAcSJEDVooTMFSARGQMR/vVMpP5C0rM7fUZHPGdXOyMLrHgoJSIEgSOPU/40AtzQmWShuMJqqYh1NU2wHkbPDqGokrbmZ4+8mSR6gkdxH52636TftnUFUUUnwtO4pTrIMlK0cgpVkHHcH21+rqWhkyE+ZJ5A415XWFipt5t1ZRoq6PeHKdDmEtrzLc/wBKVAkJV/STHAGj48hx9xF0DnifjCkqnaZ9t0pUh9KCjxEDzEEYChwrn3OMzGp3a+6VKzQPLDbGzzOJSEpKIKs4yJiB3jWl/aN0bQ9LO0b9pqk11BcmvHYqXgA6iFFKkqAP3kkwYA4iBB1Vek71QWZ64G5WenuqKtpTKEv5LJ3BW9I//wBeD6dxMsrsfxzAMzpyDxEHTfU1Z07XoXTbnT4kBRydpxgT35jtn106ZurV56xTV1qnKhxTgU0HVEhB2khCgeZUBzxEd9Aum0s0jhtDJTULbCVKUoqKVSSSgziRj2k6Cs1iudzvNNb2aNwl9wDxnEqbQDG7KzgRz9NHSrqAbJvTd1x95ud8v1F0va6d+yOh+scQlqsSpqFLK5StCySdyVA8H/jOADrCuo61xd9QqirHFuIEKWCQVAEFIJ7kce8aM6lrrkNy01q61pBLRdUCnbGCCkn1kdp9c6SWq3VVZdRRlsiqdWG9ikxG7uZ+6Bz8tEzVu3L4gdMp2kOeTPa9p1p9bTtR4mEuAqB7pkzJOQQOTq8W+qvtt6et1bba4sGoo1UpLTe/e0VSQUkEjMGexE4ga0jpbpzoelt5cR0d/wDK22lJaduVxqlsNKc9GkIwBIxumY1WvtDsFttVO1e+lm3WbSFeFUW55e5dE7kiFd0HsfeNBv6QpG/2k99TNbpULpqVND4iyhagpa9pAPEkfkPf21YOjbSOpeo6jxqmvepqMhLKkuBuBGPvA7R5frHaNVq2LqLjVqcqFHaluEpWrkbgPljOcfU6a2u7P9P05qW22lAvL8RKXkgzwDMmIEY/Z4uMFg7eZtnIQ408TQmejOnquuqF0dHWXBTADi2natCGxJgKJISTPoCDpxci6qlGxQWWwhENDaCgAApAHAjECces6zei+0FukeqahQ8Nda0kElW6PMQqDBEQTkRx76kX1uiSu31jCm0pADLrZlSozJAnv6jjjOqGNsabgOJNyY8mSieY/wCoLqyzbgy3TAtlte9e0omADuIJkJmRPJzz3ym+19FWUextlbLxICUwpKdo4wcaOu3US3gpL6g6lxfiBRCkkweMHIHH/Wq67WKVWuKZpiELbKNoWSQD3k/vtpEnc3fEoKuxa8xhQMrTQvITTy8VAlafMoJ9RnkY+h17c6N3+IBx5otoeKC4U5KjACyPXvn176hYJpwkulDp2yCgEKOMgkEDGc9/XT17q+5VFiYslWtt2mpQpNOlTaVeFMEiY9fSOO+hsSOodFBFGA1lOh2qbp2W2kubisg5KUiYmO+ePfMalT8PSvv07dwNWGllKStAAOO3ucY7e+p7WFUqii5Ldo3KgB1pRbKpRJExIxMxBzEdtBXaspTeXKmjdUScAgAqUfU9pPprIcg8TRxqy+4XIa2qdpXS3TpbZ3J4SkAjnBgDP776Ba2h9K3odUTJIMk45zqRLSnFlxxBDe6CQQY9p+U6nDfjPp8JKSRgEmJ+fpj++ulu7m1xcjbwJJUhLzJBV4aJMICpA5IEz/bmde0tEutaqH1LS1T0zYlZHPbaB6xP4aipqN2qqyrxAhhoS67OE+gHqojt9e2myCkJS86jwaNr/wBTJMlZj7yvcxPt+GvAULacd+duP/7J6q4LZtNupXEJWpgKS15Up2pJklRABIBJMEnn56VVFauoWGG4JP3s+Ue86jrFPVLpIRvWs5Uk/dEYEHj+/wA51NRUrjKwotwo+aViZOR+HuePcjQmRb3GMY8zBdsaWizvVD62twbKU7nX1pISwj/kf7J760KwW5hDSqtsL8BXkaK8KUnMk+kz+8aQ9NMM3Zb1GXlIpkL8ZaVKnxlxGDHmj986vqkttsJCClCAdoQkwRj/ABrABJ5k7WajaNo7M+U5JIJCU9/p
qZDzamJWtUcAkcd/w0qecWgnGDwCYGo26tKyUSQQJIPPyjWCnPEWTJaw980KWHFVTymgUQgJTO5f9IOcTqO2Wtx/cCyorPEZj/M540puNSgpCVAFsHlXA1b+k77YvCdF9rHktIRsYSyIJJxyBxn3Op2dmxNYn1WkAzacLE17s7lJRI3LSAoEKShUqGYHH6azqtbZbaCG1AkGSraQB6eurr1XUJtzpTSEoYKz4QUYUUbpBPG09ogesDWc1VUt51ZSANxkjmff351rTh39xh3C4hQkKn3QCN5LY4zga5TUEkeIVDMDOo1lSiE7du7MD8jqHcW1FJBJ4z31SCSdkySd5baSVIUSo/1Ht64OdDONOinDhTCFE7T2P1+UH66+W5JMJz2AEzqN1SgEkyAeDyD2n30UCpNcgkwmhbUVpBTnviY76emlXSuhqqZW0rBKVCDwDwY7frpFSr8NlKpEzMzPHMj6/uNOHLhU3BwVFW8t1yACpaiSQBAyfYDvrThSpvuUsZbeAOpcejq5u33xioWhDiQrAUMc6c9T3iku14eqksIY3HCU4GqBQ1BYcS5tBCSFFJ4MEGI9Dx9dWW/dWXHqupZfuYbLzKShCktpRCZEJwAIERHufXUptPb77lsa0qBjC8eTFNQpXilSQQJye2iqVKlqTKQD7jB+WuGpXtStIAHcd9GU7QKk5MDgd0/276N1FnNmxGNOwULCmzKjMkie0HTako0kpbbhS5iCYB9B7aHpmCUAp4Ppz9NWC22wvOFalhCUCY7njQ2sdTiAHufUtmWtIW7BM5AJx21drLQtoICDG0TK8e55/TQlEylKQVtyNvIEnnjR7DqEoAKdoVxJBP4fjoL5kTmYbC78Sx0xQSNqtqjjGm9EKZxSqd4plXCVZ3CMiO+NUh6sTTLhpRUe6yqEp9vf6aVVF5r6qvFNaGlJqlJKFPIlIHrg9+PTnSOTW88Tw0BI7qJev+j7PW9dtv0dWtxsja7RL3KAcUJlJHAOJHaZ76zPq77P6y0O3GoabZSKdwAoDgKwFHEZlYzyJ2+vOv050n0Y03aHK6qa+Oq3BLiXCFRJBIBPfGT7fLV3X0tYKgvgWGjJeSUrc8FJUoHBgkGAJ4xo+m1WQe6JalcSnYOan86qe31zF/pUPsKbaWvcRBCVhJyPfI1dXbxUMMNh5m5B1Q3rqC7uYcRHAAHlIMf49Nm+1L7K6S2VdGuxU5K1YQFKgRwEqVIEggncYkYJkCcCutiqKN+ppLkVl4K8KRgs7eJBiDPecx3nP0GDP6ifBkfNhUNdWDK85aal2kr7kh5Jo9/hqSSQpSpBO4EdpOcz66Y9JsUKaxyrduqG61KXAkOhS/F8hEhSQYPaD6c4OhEvVDVCu00ykrcrXPEW46CIAGAB2wP9TGl9G9V0dxYqKItv/DrL4VuICxncmDz3GmjttQsXUPtct9v0moi9ihsiPgRKWh5lh2REAABJIEGDzzz3wMu6rr+lLgp5opbeZWnaolQUIC0qk8kGfx0Ju6Vv1Cl9l+mZdQIDNSsoW3OY770gkwe06r95ulut9G/Q09ch9+ohKyiS2ygZ2jEkkgdhxAGlzYMZWmAMrVDUIpry04HQpDiVNqbElSRzMRxMH8dWG12NPVNU/baVS0hC1PuPITIbQUgeYkd1QO0/XVbcr/GtdPRopmZZWpQdRCVKn1gAnvnM/TVm6QrKzp134taQtuuagteY70hXc8CDBgkn20UtxQEAFNe41O63pmso+oKalpqZy6U9LToSt2maUptBgyfMkSJnsJ+h1ULjSu0Vc40StvaQpKpBmczjEfprVj1G9VPMqoi4zVE7alaFlsluTuhM48pKY7gnkk6znqIUSLwVtB+oVJU6FAAJM4AMGAMdjoYY3R7h1XjjqLlMmtqUoU62HXFbSUjAJyMe+fnr6oo1MMgqfJc4CVQkJz6TgSNesvpbcS+gbFJIIBUFEEcGT2EdtWCvQxdWDWsocIcO4pAITuxuTI553T795x3uca0iGhLTtXtrXUtsABSi0JgfLv6acvrs1CgvU/8AMUREpA3D3A/p45zz250mebQw6G2iFJH3iBE+p9u/z7akV4bjaUJbBUiRMZ/v+/lobCjGcVsLkVTX1dahKCpbTcbQASolMmBPJ/Ln31JS0AUyFufyGUSCo5Uv5H39B+ep0rbo0AuJDrpAO1ImYHc/2HpzoSoq33nCV+VaDASrASe4jjWhwOJ4ruNfz7CGOVlGlQplU7iGECEiQNyp5P7xoxx59+kbbpi02yIPhs/1ECASRGcn050k3Ourio2uAcJSAPlgaf2lgOoCUiEkGZ59P1OgZDtO6UNOgbEVM4qEqZfp6dKUtspG7aCCE5gkz95RGZP9tehv41Sv/JR4gyneYTzgDGfw511WzU3X4doF3GwIbBUpZ5MD6DmNOqCwCVKr2xT0rQK317wpw4B2YBG6fSY4760GLcmT8vp47A6iWitVzerEs0rKH1KMBKSFbvfnHzxq52r7P3PFC71ViFZLTKiVK+aiMD2A+urXYKCjobQlVOwGlPkuKERtkmET6AQPfnRrtQlsfdEAfh/vWwpPJkTUa8qduPiRt2+ioGtlBTNMQjYSkQVAGRJ7/wCdCPOlC/OrcmckHJ/xoj+I+GVbNplJSQQCM49PT8NKXXlOFaOD94D1xP6a0QAOIijM7e4yV2oSoE5AAkRmdLlPMh3xMgjMpMEe2uBULQogIlQ5Bz299AVDso9Ve/46EZTxJ1U7r6outbULUARnE9/yGhKevWwW21lQE7gQSCDwMjv76jQp+oCvCZW6GxuXtSTtHqY9++ltdUrW7kQYjB50oUGRjc+kwn0UA6jS53FVUlYc/mxMlc5zz85P5are4TJxHJjXq1PkbCslJEmD/bUBKwkgAwe5wDoyYtoqcyZ7ndRKlpIOFYEd9QKEKTAEEwAO/bI1LuKgkFWQYknjsB/vUIysblBIjJI/wPlow44ijtfM+WQg5UfeREHQru4k+YEHvHHfXbpURvUqTxB15tSqDuAKjknt/rRAIox5k9OAWwRgjOTzplTpIEhcQM55/D199KKcrJO3cr3OYHqdMm1KBAPrOIzrLiWMZF3GrRAkZMjn00wpwhJwArt/1paygqncVAp9edMqYAHByc6UY1H1W45poQQtGVDIMTHcfPjTGmZK3DvVCiZMAGfXShkqlIChxxmT+86eUyXPDQspUEk4VET8v76Wd6jOPFcsVutNSUJqEpBZnYFEjnnifQH9nT9hgNlMkAf1EmI9T8vnpZZytra6lZSocYggj+2nwqKdSCl9KXT/AOxSSJT9frBjSWTNQ7jQxCJH+rvhq8M2tLTrBPmW8FwojB2wRjtEZn5ahrKu9Xp5htmWGw4HEFlMbSAQZnmOdDmhfrq8s0lP/Lbc8yiICJMxj2OrqzS3C2U1NS0tAlVa+lURCg2AJ3e47Rjme2pT5LMYO1Ou4BYbHWvVS1Vbj1UGUnepSidp5GfX5emrt0bal1FS9d1bC0f5YC8lMYKZI9Se36a9ZsKqRunQSSt7Lj/KwsiMdvXtj0J
OLjbqdq3sN0DLMMBOeCR6kmdYQW1mT9RqBtIXzH9qo3m6FZa2pSszt50WEmmG8mT3JJMe+dDN3BuiopKglA9RPyx6+2kFX1Xb1Ffi1AMHYloA73DPAHfOO3HONUQyYwADIOx8jE1KV9qfU7VpudleaoaitWlTnmYJABlOBCT5jEQQQQojM4zP7RukXL7dUO0jDdvq3QhtCyNtO8rP3lT/ACSEkeVXlJBAJxra3KqprHSQ0thg+ZSnQApPEACO/wCXpoOpZpWElaEJLgWFOKUSokT69+f+9M4srXazTBANpn5Be6E6pZvRtrNsfqK1tJcCaZaVqTEEqwSODE5BGfXVYutjv1vu5pqq2VFO64Ia2AkqHeNs7pBI5zr9mX2itlTbgw5SpQlC/FSKceGqdpSYUkAhRBORwQO06VvWtVa0w01egqlaCS0uopG3Hm04ICHAUxEDJBj3zp9dUezAekCKn4yaom36ssOKSCmTtWAJzkZIjHbQtUxTtLcSGimDBKTIGeP9zrbevOhLaLu++p5FO46ouhSWAGXkTlUAwlYkiBG4CeTquOfZddm6Bqso36V+lcRubWhaiFA+iowIzJnTY1Cmrg/6cg8TOCthmmSlNSQoR5UpOfz5GedMGLgqmUh1qppQpIBBdTvjvkGf0109aqthBqHWCtsKKXCgQEZ25g4EkZiDPJyAuUMwVlIjaQlU/lJx9NMK9wDYvmdPuOunxF3BAIEShCpHcxgf60KGUPuKCnX3HOSVAJ7c8nTugs9wuDqaZqnU4CncVKUQAJI4E4nGJ0K4wWKktO0pSpJKFTgmMYkZ145AeppcNdwdmlZS24hZQlCfMpJJUpXceg1K27UeF4ba3PDmdqcIPbgDmNGsttFYQ1S7lkAwFJJJg4nHpx2jXq61xClJTRFBbkqCiBx9ORpdsjdARpMSDkwVukKlhRbSBORJM/T/AHojYhpAgpkjzJA2gZ/xqZLtc+kIRTqCV5AgncOZA7/60U105cqopKKV91RG5UNEJQPqBJ/edY9x4PEIXRRuuVt9aHFmFjdEAg5H6Yj01G1TLVEujarvBEfuNXZnoK5GlbdqAkhYSGkbkkqJPlBkn1+n0w4Z+zmpbaS4+6ySoSpCJUUnMgAwD24OdMrwKEm5dTi3WxmdtUrvipDZBHEJkge8n9eNWi02GtqUl16oQwzwVOKCAc47DkngaeI6cbaaVTkumoKgClCtsCeQCDPPJiO4xqzW+x0tA0kpUhTn/NbSVqBgcEzj6f31llJ7gG14UUpiiit7NC94bTiUMOpw+034YUI8yUlUrWczMgH0wBpy22lxDNA3RqYt7EFKCIUrvJ95+fzPaRVDTAklG9YOFLUVEegE8R7AakS8lhQW42HEjBSSROOT+uvBPmTGz7j9f2he/wAoRtKUDAkxGl1U4FS2hRSo8KQYI9x++2ol1SnAQpZCR2HJ18HEgEhQwcADI0QG4mVo3A33lJUStRUonJPJPcnQiHQXTKiMH6aIqlIIMHcImPU6XPqDYC0hSZEjdBn3+U673DovNmeVLiypR4EcTIj+540FUPlTaW0tpBPJEgniJJn05Ea+Lm8Fe6DOZGgXnFiVATA5MwNYYCo/hNOJ2H3aZDvhPKbDqSlYSYkTwYPB9+Y0ugOKOxSifQ9/l+uvFv7jAMR665bUtLhJSCBk/LWUQA8x7NlYjgz4tKUgkzt/qMxH+tRVAYYK2mlF4TIWCQO04MdscY0U+4lxKnCqVE+kSP7cfvuC4gEEgSnso4/HRSogseVq5grkpIKgEzkAd/caH3qBA/pjtx/1oipZDJCVLQTAMpIIEiQDHEenbQm8pVMA+xEj568o+Jt3+ZMFrCCdvceYiY9j/v0+euFrWGi2NsKIVkAEGI59I7a8CyqTO1Cj7x+4P568WUqVAUAOJIwNaqCJuTUyXEgqgBPGB9YGmje3cFGCVGSAI+eNKKZzdIUrjgEz30xYd2NgbTnBIkj1kaDkn0OExu0St0HdKlGTOCPmO+mjECCACqMg5H+9V9txXiJLaiRHJEAewHeJ01pVq3hK4nsdJvKWMXHtIGyRPf2nVioadRgqXKAcA41XaMIKwkKmPQRP5auFpQlRIKFKJEJE5BxzqVmyUZXxY+IycuTNFSJS3CnlmADBjB5nng6gYrVponqwoeUpIguBXlSPUZyfp76gXbW27il24KaYS4ISpxcAmcjjJI13cruKi2O0nT1N4rCArxXtoTBycGR6Tqe1ueIVyEEvnQS6Jy0P1tc8y2+8oqSlajJ5CTn1if2dfXLqOoVdWvgKPxKrcUodLcI9DBPOAdVfpq3UlltNLca2tDrgIKGlElIPEADnHznT1++trabCUqVTuKUdqgAUJjISeQDEZ+WJ1k4bMlPlG4kC7grHW/VdF1C6a6hFY3Tp2thK9jYJOCAR5jgwc6uf/wAy6hdogqmt7A3nyu7wFJJzlM9ojnvwdU23VAuykqLCUpWdxkTCRO0fgZntOrrR0tPbW0NBKH21CQlRkD2ieR/fR00zP0aETzanGtWouF25y816A7crjUBsq3JAVtBjsMDB9+dTgf8AmJK1pISSGyRKk/X5a4rqtsI2tHZHABmdKVXEBG1Shu9fXTY0yIa8yac7PyBQj+oq96S2pW0RBIJkaAU60ykhJCkqEEnJ0mcuASlSVHaO4n8DpbVXQUrK3iA+EjdtBPE5BjuBnTWOi4UGiYIqQhYi6jN2tCVBuQpIOBzpU5cEjxGaIBS53KJyhHv/AHgc+2qb1R1vSUzrTlG2nZMONGSVTGJ+vb1jE6PbuW5oLbwF5IAjt6dvlo2s0T6UjcQb+J3Q6nHqgdoIr5gt0Aq3lordzraHAmVRCjBVJ9BkAAQBE5nXlIpNqr2BSuKTTPKDa20/0rKdyVpjgnbB4mQeRqGucC3kvJc/mLTs8GNoc9CFAHaoce/eI1G0hxVS26+lDSWpLTKVboJBBWpUZMH6TMnspdrKA2kkDsTm6dO09wqTV0baG4VuUw7IQ4ZndI+4qQMiZ7jA1WVdB0lPTv1dc18O54oU0yg+KhCQRKVEgbiZJ7emrt8ZISgGCMyDg++ld3qalVA+aQB16QEpWohPInM+meR89aTK44BnGxqeSIJ8K7T3lypbtdFcKV1O1SW1hC0cEQhafmcK/qjsNJ7r0/ZrxWJeFK/b1BO1SUkpCsjsUnMD/Rgasqa1cDcYUYKglUgH0Hrmc6HpqqqRvFVVB1SlykpG3aPTEf64k62mQjmL5Eit7o6iuLDiXGxTJ8JtpsobzKTO47gNxIMSQJn0GRqjpuyWK1KCSgOraqB4i1BJWfCUQDJzBTMZgnjVuTUpKZHHtqo9W2R26rbuNOpbrzDexNOICVZmcnB7Tnn20zjyEnkybkRiajG6WKlrrfRG3tsUyQAVk4Ck7RHfgTx3nTV24tbFNpJfIEBCBMY4PYfXSqz0LFvs9OzUIaXUBsJcVtB+UHOYAGNE16G6y3P0vjLbDySjekwpPv8APRlIJ7k3MrH2nxPaJt1bFPUVRTvQ0lLbYkhHlAJJxKvoI7c5LU4CRk
/U6VUI+AtjNCh5biGUhO5Zkqyefx+mmDdStBUlvaVLQUkFMkcTyOfccaZtb74iJxOfE4eKoUqTHpPPz0IpSwQZickntrioqQyhSVOgZ+7P66EVcy4hSEHeDlSQABMRz2iToXqRkaT5h9OUqJ3upQjncqfwgev99CVT7QKtxif6RpW5VPLXCjsbngdtQKdG8nJSPr++da3kwZ04EmecWoHZhOhw+tpwHfKTj318l/xCEgAD1OBrgVNPRVIcqGE1DcEhCiQDiBkQec+8RrVwi4vmeO1S3CQs7gBgAcd/86CeeSpeVCflxoZVSrzKSiARGTx76FW4srJSuBOQDE60ITYBDEKBUSJJ5gZnXVb5mgEtqSnvJmT/AJ7a9t7a3VEQDjv299WJ+2Ur9ucqG1JY8JKU+GoklZ4JBj5mMRxJ0YoNtmLDIQ+0SmtUalKKymQMn9eNQvNFCYmZ7jgf51YU0SHFypYKU5kGZMcQPrnSypaCVqCAM4Amf11kUZ45GB5ihSkhJBkEYJP5xqFanHEENpJCAT7gepHoNGONIDSl70pWDhPO7/Q9/wC2ly1KST4aiEqEGDEj09/lre2MJkHmD8ZVkcYI/vqIjAAgA5kCT9D++NEI8IPhL4cLfCgkwTjgemdcFEgbCVAp80iMxkfLWws4cgg7a1NqG0wQZBI4z767VuSd6/KVCZz6e36a4XKAUkZPc9tQFxSk7YMHgenvrxSdD8QiiIStKlJKgDwDE/40yQeFApOcAzjuP8c6VsQgGDIIxjn1n/H10ewTtEmTyCBx9dJ5J9Ph8RqwElwKQIHp6e099N2kgO5TBAiDgj58aW0kwAAQDzjn9nThCRl55ZJPcmScfnqdlYCXMGMk3GlO+llAVtCEgZXGfkJ10vrR+0OlulaSVjG5fmAPy+f/AHpE/WFxta1QhDYhI4k9v3/nSRKjVsOOvO+REAJmCZMY9eOfQ6VXTjIbaN59SMQoRndeorleKtL9ZVrUoL8sAgIxPHbmNOLNdKyntqm1JcUytRlJ4UTIJjMQIGc6pHwjgqElBBUmIjVts1z2tqZUEmMkGMdue5406dOgWq4kJ9W7G5d6VC0U7C0uubwJQQ5uSnOZGflxqwbWHrU22XfFfUmXDsghXcZPEfj7aqVLW0yWiphO1czJwfx7Tpyapa6RNcpxtLal+GUJWAoGJnbGBONJtjUdTgdyRcsduuiQ00dimltjISICTEED101N9QAClwkxByNUBVxSwpwBEhcEHd935R9Pw0ObiCvKyJMRrSDaOIDJj3m5oa7wpQAKpSMyDn/eoVXIOAfzYcPAOAf96pTdzSEAE5GJ4nvn8dSO3HxICFlKRwCZI1lvmax4vEfOXBYJClER6nULl1bCkNoqm/GI3hreN5EwVR6T399I0V63WnA4UgowQmDPpg6H+Jl0LHlIET3HeMfLjQwLPMbKBRCbo2qrWNzQCRBATCSDOTIEjE9/ftrtuqKQZIAH9CCSEj0nt+elyqkKUAqCB2GJ99fIdSZSmMd8ie2iO5YUZjHiVTajuOfjQUGU5JmRgjXCHitfHk5EdtK54AXJiIiI9tTJeKGiFL2g4AGJProJMNshLtUc7iIHoBqBdRuISDE6EdcJQV5hOJB0uXcUhRDeZMEDk/6xry8zjLUYuVCiYBiTEg65bdAI3c/OdK01SlKMTnEk/lqVNQkiU4PBn9dG2xdhzHza1HCRPeAT+OukuEqhXHHrGljNStsEgyknOfy9xr1dQuAQIMSYJx+/TXOQYFsYaMXSGjJyeQdQuOJJCiogHMemgFVm5JEyTnnGoXH/ABGiSpOMRMT7aYRiYjmwUYVUVzbbY8NO5Z7DP7Gl7lbUvw2vyt8lKTE/UfpoRx5TaS0CAFHmQfb9NfB9AaBSnIwZ/wA/j/vTSru5MSZdn5ZN4LY3Egmcxkj01Cp9TYVGATMevz1Ol2nXTJKVLD24lYJBTH9IHr3n6e+gXnklKglIJ9ZmNGZAvAi6uzdz1TviEeZKQoxuJgD1J0N46EE+dSj6gR8+Z/LUbiyoArWITiAIJ9uNDkblgbsn6RrNfMKACeIxYWla0pCYST94rOe3Hb1+urh1xYKS00loWipYqVuUaVqSwuQknOcep1TaRLLD6kVbqkoTPmaAUSY8sZEiYEg/LXldUvPoaCniragIkk45xzrBRmcFTxDAqilWHMW1BUkbkogHERzoElQBUQoA+o5/f99HsqbTVJL6i23OSEzA+Xy1Bc6lFS+44yylppaoCU5iBECfb9dPokSeqsRhaqxKQEpQkHkqJyfYZ+umqlurRKNwQD5iOAfQ/wDeqrR1HgvJcMGDuKVDCs8Z/XVgoatNRWuPLcbYbUSvaEkJ54SP940Rutpk7IlXkBjbwmVNpSyHEr/rKiFbjPaAIwffSqrt5SVOMrlxCuDEp9/y00qK2mDbqnFpQSAW0tpEH/ERx/nQNuqDVVbjDiCpKRJUO3zM8Hj56yBXEmbnNuPEq1S24XFFQKiTyR+Olr4UEKSklSU5zIA+mrRe0NrdBaISlOJH7+eqw41KyZO2JlI5/ftxopUiUNNl9Ree4GTBE8fPTW2XBihrS/UUbdYjaQW3SQDIIBwRxIPPYaWqCQ4kbwcTPAHqO0+muTJJMQP0z20RG2ncIXJjGQFGnFQkOOq2qBB+mglDbzBxjRwV5CnYndM7jJI/1odWwEBckcTPH7B1y+YVRQqcMhRAURImIkCdNacBZK0IhMzEzGOJ76T064SJwonEjnHrpwjd5VEEAiTuAGeTx89TcnU+x04BMsFvUEqS4Y8NBghX7ydEVlYhKx4hCUk7oGY7Y+n6aSsvFtopCsk89jqOqcWpgOqWlW5XAOR8/wAvnqcce57Mtersx8T66V6n6dukYG1BMGMKV6SPfGB6aloKFbluW4pCpTgk8Ae/77aGaYbQ+kOwoCDuSZ5z+n4caLcqHGGtqFSlYIURmM9xp9UCChIuXIcrFjJ0soQEfeITMnB959tE07qWnkvBMk4k/KOO+lTLrinExkf8CYn941OHkOPFSE7UbpSmZj0Hv89Za5xFBFy209S0WwpxUACAAYj5+vf8ddGsUlZBVunOqwirVA2qI+Z/P8BogVC1ICyYE8AwTpU440APEcuXJUgHIiBnUSLkQoE8gwOMeg0ncUAjcQCD+/x768S9t2qnI/44j0M99eCTJq4+TcN5PnAExHGiBV8EGI4k8ar8pmQqAe5zGJzGuy/sSdpkDEnn8PprhSdBj9NX5/GSCoxHp+Wi/GC48MknkkHHz/XSFVxacabbDCG1ITtKkkyrJO4yfQx9NSN1RKUkQAnEg5OhPjo8QqNuHMYuvOJlI4HeOdcJqFqO5Jx79/lof4plbqUeIooI8xH9v964XUpbUrwlyIieZHrOh0SJvbXUaeOUnb4kj1HHz/fpr1VYpxCAkSQc+/7E6TCuGxxBQlRXA3Gdye+IPcGMzqFVStKglIkjgDnWdlzV1Gz1SoIKAowruQRHroMqAByQSZI99B+OpZKlKJV65zrtDjapLzkDtAmTHuRye/bWwlQTNZqFN
rBJQCAJmTj6anQ635gCSR3HfSorLhSlEkk8DP5fLUgcbQEw6sqKsgDgf3Pf8tbIgQOZYV1DSqNptDCUOIBlYVJV7/loZLgWoJecKQBMhMxzoeiqVOLDCSPDWoCTOc94408NEy2h/att0JUAFyQPY5/7/PQ2fmjDDTnZuEVFYU8HFNAthQ3AGJ/cc6HeKkkp2qEnyo7nPf6R89eVb53qQ0opCSdwwBzHb8NB1NUtEJwFEQVDn5Se+iIOYpmFivM9qHVJShRcBIwEjsJmPzProMPrKyCZM5P9/wA9RIdG4JUsAnvzHfUagtJUZkAzPp76cBrgSecPFmMTUIbaCChYeCpJJwRGMRz7k99QpfSogzB9dL961ErUYA5k6jL5SCCDM+nH/ejA33EmxhZe+r7LYLTZ7ZV2W/fxF6oBFQlW0bTEkgJykDjMzPOqXulBUDAJgn11xsUtQ2mCRyeNcplBKFgx6g68qkCibgywBsCFByPLO5IzJ7+mpG1L3JKCdyTKSOx9fodCpUmS1uASrzAEfONSNqKVlIJAA5J9/T562hAMHkBIueVrLxB3lJVJkzM+onuZMznS4+IohG5QTu3bScDgEx69uNEPB3xgN0gHMY/DTJm1qcS04SpIUkqyInt+xptWN0IHYNtsYsQ2pLgaWZQDME5HE8Z7acoo6cWRx+ncccqkuwWUtE7Uf8yucZIHaZ51A8021CCIUDCjHv2PfHb9dMmPiUWt2kQ3DbuCZAxIMe84P/WMOS1DzNqqC7FipXl16ktKQ55ifKo8RmcaiVcXGSnwoTuiQCcntI9tQ10B4sIzBzHMz6+nA0A5KFcncODx7Z12uYmcSeOjGjtcspUndM5IA47x/vvoMraW0qU7lzhQJ8onuO/9vroMvbeCrOCCef8AWNRFwYKTieBowJPcwMSqeBJHQgjyrBSDicHjvqMOqKPDiRJMnn0/DXqNpWorcCAEkpBBMnsO8fM/XUGCsmQATHm12E/SdFRR93ntjULjvkCIAgyTGR7fL21IVDsrn2jvritaZYqVtsVCalECHEggK78H0/tr3i50EAgT/9k=" />
These are GGUF quantized versions of [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2).
The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
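For reference, an importance matrix of this kind can be produced with llama.cpp's `imatrix` tool and then supplied to `quantize`. A minimal sketch follows; the file names are illustrative and the exact flags may differ between llama.cpp versions:
```shell
# Build the importance matrix from the calibration text (wiki.train.raw),
# processing 2,000 chunks of 512 tokens each (~1M tokens).
./imatrix -m bagel-dpo-34b-v0.2.f16.gguf -f wiki.train.raw -o imatrix.dat -c 512 --chunks 2000
# Reuse the matrix when producing an imatrix-aware quant, e.g. IQ3_XS.
./quantize --imatrix imatrix.dat bagel-dpo-34b-v0.2.f16.gguf bagel-dpo-34b-v0.2.IQ3_XS.gguf IQ3_XS
```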
|
ChrisWilson011016/5CSvMgMLK2oSA67SRsycYWGZbWbN7BSciDiDfLTkyKH2sgC8_vgg | ChrisWilson011016 | 2024-03-04T19:02:24Z | 398 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-24T15:07:10Z | Entry not found |
MaziyarPanahi/Experiment26-7B-GGUF | MaziyarPanahi | 2024-03-02T18:18:34Z | 398 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:yam-peleg/Experiment26-7B"
]
| text-generation | 2024-03-02T17:53:25Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- chat
- en
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Experiment26-7B-GGUF
base_model: yam-peleg/Experiment26-7B
inference: false
model_creator: yam-peleg
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Experiment26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26-7B-GGUF)
- Model creator: [yam-peleg](https://huggingface.co/yam-peleg)
- Original model: [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## Description
[MaziyarPanahi/Experiment26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26-7B-GGUF) contains GGUF format model files for [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
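As a rough illustration of where these bits-per-weight figures come from, the following sketch gives one plausible bit accounting for a Q2_K super-block; the exact on-disk layout in llama.cpp may differ slightly, but this breakdown reproduces the 2.5625 bpw figure quoted above:
```python
# Illustrative bit accounting for one Q2_K super-block (not the exact llama.cpp struct).
weights_per_superblock = 16 * 16            # 16 blocks of 16 weights each
weight_bits = weights_per_superblock * 2    # 2-bit quantized weights
block_meta_bits = 16 * (4 + 4)              # 4-bit scale + 4-bit min per block
superblock_meta_bits = 16                   # one fp16 scale for the whole super-block
total_bits = weight_bits + block_meta_bits + superblock_meta_bits
print(total_bits / weights_per_superblock)  # -> 2.5625 bits per weight
```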
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Experiment26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26-7B-GGUF) and below it, a specific filename to download, such as: Experiment26-7B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Experiment26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26-7B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Experiment26-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Experiment26-7B.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
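# The call returns an OpenAI-style completion dict; the generated text is
# usually available at output["choices"][0]["text"].
print(output["choices"][0]["text"])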
# Chat Completion API
llm = Llama(model_path="./Experiment26-7B.Q4_K_M.gguf", chat_format="chatml") # Set chat_format according to the model you are using; this model follows the ChatML template shown above
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
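As a minimal sketch of the llama-cpp-python route (class and parameter names should be checked against the linked guide; the file name and settings are the ones used earlier in this README):
```python
from langchain_community.llms import LlamaCpp

# Assumes the Q4_K_M file has already been downloaded as shown above.
llm = LlamaCpp(
    model_path="./Experiment26-7B.Q4_K_M.gguf",
    n_gpu_layers=35,   # layers to offload to GPU; use 0 for CPU-only
    n_ctx=32768,       # max sequence length, matching the example above
    temperature=0.7,
)

print(llm.invoke("Write a story about llamas."))
```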
mradermacher/BondBurger-8x7B-GGUF | mradermacher | 2024-05-06T06:08:27Z | 398 | 0 | transformers | ["transformers", "gguf", "not-for-all-audiences", "en", "base_model:Envoid/BondBurger-8x7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | null | 2024-03-19T10:23:36Z |
---
base_model: Envoid/BondBurger-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
static quants of https://huggingface.co/Envoid/BondBurger-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BondBurger-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q2_K.gguf) | Q2_K | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.IQ3_M.gguf) | IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q6_K.gguf) | Q6_K | 38.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BondBurger-8x7B-GGUF/resolve/main/BondBurger-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality |
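For the split Q8_0 files, reassembly after downloading both parts is a plain concatenation (a minimal sketch; file names taken from the table above):
```shell
cat BondBurger-8x7B.Q8_0.gguf.part1of2 BondBurger-8x7B.Q8_0.gguf.part2of2 > BondBurger-8x7B.Q8_0.gguf
rm BondBurger-8x7B.Q8_0.gguf.part1of2 BondBurger-8x7B.Q8_0.gguf.part2of2
```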
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->