---
title: Lab2
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
---

# Fine-Tuned Medical Language Model

## Overview

This project fine-tunes the LLaMA 3.2 3B model using the FineTome-100k instruction dataset. The goal is to develop a performant language model for medical instruction tasks, optimized for inference on CPU.

## Key Features

- Base Model: LLaMA 3.2 3B (fine-tuned with Hugging Face Transformers and Unsloth).
- Dataset: FineTome-100k, a high-quality instruction dataset.
- Inference Optimization: quantized to the GGUF format for faster CPU inference using methods such as Q4_K_M (see the export sketch below).
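
As a rough illustration of the quantization step, the snippet below uses Unsloth's `save_pretrained_gguf` helper; the checkpoint path and output directory are placeholder assumptions rather than values from the actual run.

```python
from unsloth import FastLanguageModel

# Placeholder path to the fine-tuned checkpoint produced by training.
model, tokenizer = FastLanguageModel.from_pretrained("outputs/final_model")

# Export a Q4_K_M-quantized GGUF file for CPU inference with llama.cpp-based runtimes.
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```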

## Improvements

### Model-Centric Approach

1. Hyperparameter Tuning:
   - Learning Rate: Reduced to 1e-4 and tested against 2e-4 for better generalization.
   - Warmup Steps: Increased to 100 to stabilize early training.
   - Batch Size: Adjusted via gradient accumulation to simulate larger effective batch sizes.
2. Fine-Tuning Techniques (sketched in code after this list):
   - Resumed training from a 3,000-step checkpoint to save time.
   - Applied the adamw_8bit optimizer for memory-efficient training.
3. Experimentation with Foundation Models:
   - Tested alternative open-source models, including Falcon-7B and Mistral 3B, for comparison.
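
A minimal sketch of the setup described above, assuming Unsloth's `FastLanguageModel` API; the model identifier, sequence length, and LoRA rank are illustrative assumptions, not recorded training values. Swapping `model_name` is also how alternative foundation models can be slotted in for comparison.

```python
from unsloth import FastLanguageModel

# Assumed base-model identifier and sequence length; replace model_name with
# another foundation model (e.g. Falcon-7B) to reproduce the comparison runs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumed rank
    lora_alpha=16,  # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Training then resumes from the earlier 3,000-step run, e.g.:
# trainer.train(resume_from_checkpoint="outputs/checkpoint-3000")
```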

### Data-Centric Approach

1. Additional Data Sources (a data-mixing sketch follows this list):
   - Plans to augment training with datasets such as PubMedQA or MedQA for domain-specific improvements.
   - Increased instruction diversity to improve robustness across medical queries.
2. Dataset Analysis:
   - Addressed class imbalances and ensured a consistent validation split.
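
A sketch of how an additional medical dataset could be mixed in with the 🤗 `datasets` library; the dataset identifiers, field names, and mapping below are assumptions for illustration, not the exact pipeline used.

```python
from datasets import load_dataset, concatenate_datasets

finetome = load_dataset("mlabonne/FineTome-100k", split="train")           # assumed repo id
pubmedqa = load_dataset("qiaojin/PubMedQA", "pqa_labeled", split="train")  # assumed repo id


def to_conversations(example):
    # Convert a PubMedQA row into the ShareGPT-style format used by FineTome.
    return {
        "conversations": [
            {"from": "human", "value": example["question"]},
            {"from": "gpt", "value": example["long_answer"]},
        ]
    }


medical = pubmedqa.map(to_conversations, remove_columns=pubmedqa.column_names)
mixed = concatenate_datasets(
    [finetome.select_columns(["conversations"]), medical]
).shuffle(seed=42)

# A fixed-seed split keeps the validation set consistent across experiments.
splits = mixed.train_test_split(test_size=0.05, seed=42)
```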

## Hyperparameters

The final training run used the following hyperparameters; a matching configuration sketch follows the list.

- Learning Rate: 1e-4
- Warmup Steps: 100
- Batch Size: Simulated effective batch size of 8 (2 samples per device with 4 gradient accumulation steps).
- Optimizer: AdamW (8-bit quantization).
- Weight Decay: 0.01
- Learning Rate Scheduler: Linear decay.
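
For reference, the settings above map onto a `transformers.TrainingArguments` configuration roughly as follows; `max_steps`, `output_dir`, and the logging cadence are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=2,   # 2 samples per device
    gradient_accumulation_steps=4,   # effective batch size of 8
    learning_rate=1e-4,
    warmup_steps=100,
    optim="adamw_8bit",              # 8-bit AdamW via bitsandbytes
    weight_decay=0.01,
    lr_scheduler_type="linear",
    max_steps=6000,                  # assumed: 3,000 per run, resumed to 6,000 total
    output_dir="outputs",            # assumed
    logging_steps=50,                # assumed
)
```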

## Model Performance

### Training

- Steps: Fine-tuned for 6,000 steps total (3,000 initial + 3,000 resumed).
- Validation Loss: Improved from X to Y during fine-tuning.

### Inference

- Quantized Format: Q4_K_M and F16 formats evaluated for inference speed (see the benchmark sketch below).
- CPU Latency: Achieved X ms per query on a single-core CPU.
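
A sketch of how the quantized model might be benchmarked on CPU with `llama-cpp-python`; the GGUF filename, context size, and prompt are placeholders.

```python
import time

from llama_cpp import Llama

# Placeholder path to the exported Q4_K_M GGUF file.
llm = Llama(model_path="gguf_model/medical_model.Q4_K_M.gguf", n_ctx=2048, n_threads=1)

start = time.perf_counter()
out = llm("What are the symptoms of diabetes?", max_tokens=64)
latency_ms = (time.perf_counter() - start) * 1000

print(out["choices"][0]["text"])
print(f"latency: {latency_ms:.0f} ms")
```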

## Next Steps

1. Continue fine-tuning with additional data sources (e.g., MedQA).
2. Explore LoRA and other parameter-efficient tuning methods for larger models.
3. Deploy and evaluate the model in real-world scenarios.

## Usage

To load and use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "forestav/medical_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a prediction for a sample medical question
inputs = tokenizer("What are the symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
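
A minimal sketch of such a chatbot, assuming the fine-tuned model is reachable through the Inference API; the generation parameters are illustrative.

```python
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient("forestav/medical_model")


def respond(message, history):
    # Stream the answer back token by token from the Inference API.
    messages = [{"role": "user", "content": message}]
    partial = ""
    for chunk in client.chat_completion(messages, max_tokens=256, stream=True):
        partial += chunk.choices[0].delta.content or ""
        yield partial


demo = gr.ChatInterface(respond)

if __name__ == "__main__":
    demo.launch()
```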