---
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
license: llama3
language:
- pt
tags:
- code
- sql
- finetuned
- portugues-BR
---

**Lloro SQL**

Lloro SQL, developed by Semantix Research Labs, is a language model trained to translate questions written in Portuguese into SQL code. It is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct that was trained on the BIRD and Spider public datasets. Fine-tuning was performed using the QLoRA methodology on an NVIDIA A100 GPU with 40 GB of VRAM.

**Model description**

- Model type: An 8B-parameter model fine-tuned on GretelAI public datasets.
- Language(s) (NLP): Primarily Portuguese, but the model is also capable of understanding English.
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct

**What is Lloro's intended use(s)?**

Lloro is built for Text2SQL in Portuguese contexts.

- Input: Text
- Output: Text (Code)

**Usage**

Using an OpenAI-compatible inference server (such as [vLLM](https://docs.vllm.ai/en/latest/index.html)):

```python
from openai import OpenAI

client = OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000/v1",
)

def generate_responses(instruction, client=client):
    chat_response = client.chat.completions.create(
        model="<served-model-name>",  # the model ID registered with your inference server
        messages=[
            # System prompt (in Portuguese): "You write the SQL statement that answers the
            # questions asked. You DO NOT PROVIDE ANY COMMENT OR EXPLANATION about what the
            # code does, only the SQL statement ending in a semicolon. You use all commands
            # available in the SQL specification, such as: [SELECT, WHERE, ORDER, LIMIT,
            # CAST, AS, JOIN]."
            {"role": "system", "content": "Você escreve a instrução SQL que responde às perguntas feitas. Você NÃO FORNECE NENHUM COMENTÁRIO OU EXPLICAÇÃO sobre o que o código faz, apenas a instrução SQL terminando em ponto e vírgula. Você utiliza todos os comandos disponíveis na especificação SQL, como: [SELECT, WHERE, ORDER, LIMIT, CAST, AS, JOIN]."},
            {"role": "user", "content": instruction},
        ],
    )
    return chat_response.choices[0].message.content

user_prompt = "Liste o nome e o total de vendas de cada cliente."  # example question in Portuguese
output = generate_responses(user_prompt)
```

**Params**

Training parameters:

| Params | Training Data            | Examples | Tokens     | LR   |
|--------|--------------------------|----------|------------|------|
| 8B     | GretelAI public datasets | 65,000   | 18,000,000 | 9e-5 |

**Model Sources**

GretelAI: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql

**Performance**

| Model          | LLM as Judge | CodeBLEU | ROUGE-L | CodeBERT Precision | CodeBERT Recall | CodeBERT F1 | CodeBERT F3 |
|----------------|--------------|----------|---------|--------------------|-----------------|-------------|-------------|
| Llama 3 - Base | 65.48%       | 0.4583   | 0.6361  | 0.8815             | 0.8871          | 0.8835      | 0.8862      |
| Llama 3 - FT   | 62.57%       | 0.6512   | 0.7965  | 0.9458             | 0.9469          | 0.9459      | 0.9466      |

**Training Details**

The following hyperparameters were used during training:

| Parameter         | Value                    |
|-------------------|--------------------------|
| learning_rate     | 1e-4                     |
| weight_decay      | 0.001                    |
| train_batch_size  | 16                       |
| eval_batch_size   | 8                        |
| seed              | 42                       |
| optimizer         | adamw_8bit (8-bit AdamW) |
| lr_scheduler_type | cosine                   |
| num_epochs        | 3.0                      |

**QLoRA hyperparameters**

The following Quantized Low-Rank Adaptation (QLoRA) and quantization parameters were used during training:

| Parameter    | Value |
|--------------|-------|
| lora_r       | 16    |
| lora_alpha   | 64    |
| lora_dropout | 0     |

**Framework versions**

| Library      | Version |
|--------------|---------|
| accelerate   | 0.21.0  |
| bitsandbytes | 0.42.0  |
| Datasets     | 2.14.3  |
| peft         | 0.4.0   |
| PyTorch      | 2.0.1   |
| safetensors  | 0.4.1   |
| scikit-image | 0.22.0  |
| scikit-learn | 1.3.2   |
| Tokenizers   | 0.14.1  |
| Transformers | 4.37.2  |
| trl          | 0.4.7   |
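
**Reproducing the configuration**

For reference, the sketch below shows one way the training and QLoRA hyperparameters listed above could be assembled into `peft` and `transformers` configuration objects. The `target_modules` list, the 4-bit quantization settings, and the output directory are assumptions not specified in this card; treat this as a minimal illustration, not the exact training script.

```python
# Minimal sketch of the fine-tuning configuration implied by the tables above.
# ASSUMPTIONS: target_modules, 4-bit NF4 quantization, and output_dir are not
# documented in this card; they are common QLoRA choices shown for illustration.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit quantization of the base model (standard QLoRA recipe; assumed here)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings, taken from the "QLoRA hyperparameters" table
lora_config = LoraConfig(
    r=16,
    lora_alpha=64,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not documented
)

# Trainer settings, taken from the "Training Details" table
training_args = TrainingArguments(
    output_dir="lloro-sql-ft",   # hypothetical path
    learning_rate=1e-4,
    weight_decay=0.001,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_bnb_8bit",      # the card's "adamw_8bit" (bitsandbytes 8-bit AdamW)
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
)
```

These objects would typically be handed to a supervised fine-tuning trainer such as `trl`'s `SFTTrainer` together with the training data; the exact training loop is not documented in this card.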
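
**Scoring generated SQL**

As a minimal sketch of the evaluation, one of the reported metrics (ROUGE-L) can be computed with the Hugging Face `evaluate` library, as shown below. The prediction and reference strings are hypothetical, and the CodeBLEU, CodeBERTScore, and LLM-as-judge metrics require separate tooling not shown here.

```python
# Minimal sketch: scoring generated SQL against gold SQL with ROUGE-L.
# The example strings are hypothetical; this card does not publish its
# evaluation set or pipeline.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["SELECT nome, SUM(valor) FROM vendas GROUP BY nome;"]           # model output
references = ["SELECT nome, SUM(valor) AS total FROM vendas GROUP BY nome;"]   # gold SQL

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeL"])  # longest-common-subsequence F-measure
```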