---
language: en
license: apache-2.0
library_name: peft
tags:
- gemma
- peft
- function-calling
- lora
- thinking
pipeline_tag: text-generation
model-index:
- name: gemma-2-2B-it-thinking-function_calling-V0
results: []
---
# Function Calling Fine-tuned Gemma Model
This is a fine-tuned version of `google/gemma-2-2b-it` optimized for function calling, with an explicit "thinking" step emitted before each function call.
## Model Details
- Base model: `google/gemma-2-2b-it`
- Fine-tuned with LoRA adapters for function-calling capability
- Emits a "thinking" step before each function call
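A response from a model like this typically interleaves a reasoning trace with a structured tool call. As a minimal sketch of how such output could be parsed, the snippet below assumes Hermes-style `<think>...</think>` and `<tool_call>...</tool_call>` tags with a JSON payload; the actual tags depend on the chat template used during fine-tuning, so adjust accordingly.

```python
import json
import re

def parse_response(text):
    """Split a model response into its thinking trace and tool call.

    Assumes (hypothetically) that the fine-tune emits Hermes-style
    <think>...</think> and <tool_call>...</tool_call> tags; adapt the
    patterns to the template actually used in training.
    """
    thinking = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    call = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    return (
        thinking.group(1).strip() if thinking else None,
        json.loads(call.group(1)) if call else None,
    )

# Example response in the assumed format
example = (
    "<think>The user wants weather data, so I should call get_weather.</think>"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
)
thought, tool = parse_response(example)
print(tool["name"])  # get_weather
```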
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

model_name = "sethderrick/gemma-2-2B-it-thinking-function_calling-V0"

# Load the adapter config to find the base model, then apply the LoRA weights
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_name)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example: generate a function-calling response
messages = [{"role": "user", "content": "What is the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```