---
language:
  - en
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B
base_model_relation: finetune
---

## Model Information

`Llama-3-8B_SFT_Finetune_Pandas_Code` is a fine-tuned version of `meta-llama/Meta-Llama-3-8B`, trained with supervised fine-tuning (SFT) to generate Pandas code for analyzing tabular data.
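
For illustration, given a question such as "What is the average salary per department?" and a table with `department` and `salary` columns, the model is expected to respond with a small Pandas function along these lines (hypothetical output; actual generations will vary):

```python
# Hypothetical output for illustration only.
def average_salary_per_department(df):
    return df.groupby("department")["salary"].mean()
```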

## How to use

With `transformers` version 4.40.0 or later, you can run text-generation inference using the Transformers `pipeline` API.

Make sure your installation is up to date via `pip install --upgrade transformers`.
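
To verify the installed version at runtime, a quick check (using `packaging`, which is already a transformers dependency; the 4.40.0 minimum is an assumption based on when Llama 3 support landed) might look like:

```python
import transformers
from packaging import version

# Fail fast if the installed transformers is too old to load Llama 3 checkpoints.
assert version.parse(transformers.__version__) >= version.parse("4.40.0")
```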

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline


def get_pipeline():
    model_name = "nirusanan/Llama-3-8B_SFT_Finetune_Pandas_Code"

    # Llama tokenizers ship without a pad token, so reuse the EOS token for padding.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model in half precision on the first GPU.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=850)

    return pipe


pipe = get_pipeline()
```
```python
def generate_prompt(task, header_columns):
    prompt = f"""Below is an instruction that describes a task. Write a Python function using Pandas to accomplish the task described below.

### Instruction:
{task}

header columns with sample data:
{header_columns}

### Response:
"""
    return prompt
```
```python
prompt = generate_prompt("Your question based on tabular data", "Necessary column names with sample data")
result = pipe(prompt)
generated_text = result[0]['generated_text']

# The model marks the end of its answer with "### End"; keep only the code before it.
print(generated_text.split("### End")[0])
```
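
For a more concrete, end-to-end sketch: the task, column description, and sample DataFrame below are hypothetical, and the code the model actually generates will vary. Always review generated code before running it against real data.

```python
import pandas as pd

# Hypothetical sample data for illustration.
df = pd.DataFrame({
    "department": ["Sales", "Engineering", "Sales"],
    "salary": [52000, 98000, 61000],
})

# Describe the relevant columns with a few sample values, as the prompt template expects.
header_columns = "department: ['Sales', 'Engineering'], salary: [52000, 98000]"

prompt = generate_prompt("Calculate the average salary for each department", header_columns)
result = pipe(prompt)

# Strip the "### End" marker and inspect the generated function before executing it.
generated_code = result[0]['generated_text'].split("### End")[0]
print(generated_code)
```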