---
license: apache-2.0
language:
- en
tags:
- HuggingFace
- defog/sqlcoder.gguf
- sqlcoder-7b-2.gguf
pipeline_tag: text-generation
---
## Model Details
I do not claim ownership of this model. <br>
It is an 8-bit GGUF conversion of the original repository [huggingface.co/defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2).
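
A conversion like this can be reproduced with llama.cpp's conversion script. The commands below are a sketch, not the exact steps used for this upload; script names and paths depend on your llama.cpp checkout:

```shell
# Fetch the original FP16 weights and llama.cpp (paths are illustrative)
git clone https://huggingface.co/defog/sqlcoder-7b-2
git clone https://github.com/ggerganov/llama.cpp

# Convert the HF checkpoint straight to an 8-bit GGUF file
python llama.cpp/convert_hf_to_gguf.py sqlcoder-7b-2 \
    --outtype q8_0 \
    --outfile sqlcoder-7b-2.q8_0.gguf
```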
### Model Description
**Developed by:** [Defog AI](https://defog.ai)
### Model Sources
**Repository:** [https://huggingface.co/defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)
### Example usage
**With LlamaCpp:**
```python
from huggingface_hub import hf_hub_download
from langchain_community.llms.llamacpp import LlamaCpp

# Fill these in for your setup
YOUR_MODEL_DIRECTORY = None  # local cache directory for the downloaded file
CONTEXT_LENGTH = None        # context window size (n_ctx)
MAX_TOKENS = None            # maximum tokens to generate
BATCH_SIZE = None            # prompt-processing batch size (n_batch)
TEMPERATURE = None           # sampling temperature
GPU_OFFLOAD = None           # number of layers to offload to the GPU

def load_model(model_id, model_basename):
    # Download the GGUF file from the Hub (resumes partial downloads)
    model_path = hf_hub_download(
        repo_id=model_id,
        filename=model_basename,
        resume_download=True,
        cache_dir=YOUR_MODEL_DIRECTORY,
    )
    kwargs = {
        'model_path': model_path,
        'n_ctx': CONTEXT_LENGTH,
        'max_tokens': MAX_TOKENS,
        'n_batch': BATCH_SIZE,
        'n_gpu_layers': GPU_OFFLOAD,
        'temperature': TEMPERATURE,
        'verbose': True,
    }
    return LlamaCpp(**kwargs)

llm = load_model(
    model_id="whoami02/defog-sqlcoder-2-GGUF",
    model_basename="sqlcoder-7b-2.q8_0.gguf",
)
```
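
Once loaded, the model works best with the text-to-SQL prompt layout described on the upstream defog/sqlcoder-7b-2 card. A minimal sketch, assuming that `[QUESTION]`/`[SQL]` template; `build_prompt` is a hypothetical helper, not part of any library:

```python
def build_prompt(question: str, schema: str) -> str:
    """Assemble a text-to-SQL prompt in the sqlcoder template."""
    return (
        "### Task\n"
        f"Generate a SQL query to answer [QUESTION]{question}[/QUESTION]\n\n"
        "### Database Schema\n"
        f"{schema}\n\n"
        "### Answer\n"
        "Given the database schema, here is the SQL query that answers "
        f"[QUESTION]{question}[/QUESTION]\n"
        "[SQL]\n"
    )

# Example: describe the schema as plain DDL and ask a question
schema = "CREATE TABLE users (id INT, name TEXT, created_at DATE);"
prompt = build_prompt("How many users signed up in 2023?", schema)

# llm = load_model(...)      # as above
# sql = llm.invoke(prompt)   # returns the generated SQL as a string
```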