Model Card: LoRA-LLaMA3-8B-GitHub-Summarizer

This repository provides LoRA adapter weights fine-tuned on top of Meta’s LLaMA-3-8B model for the task of summarizing GitHub issues and discussions. The model was trained on a curated dataset of open-source GitHub issues to produce concise, readable, and technically accurate summaries.

Model Details

Model Description

  • Developed by: Saramsh Gautam (Louisiana State University)
  • Model type: LoRA adapter weights
  • Language(s): English
  • License: Meta Llama 3 Community License (use must comply with Meta's license terms)
  • Fine-tuned from model: meta-llama/Meta-Llama-3-8B
  • Library used: PEFT (LoRA) with Hugging Face Transformers

Model Sources

  • Repository: https://huggingface.co/saramshgautam/lora-llama-8b-github
  • Base model: https://huggingface.co/meta-llama/Meta-Llama-3-8B

Uses

Direct Use

These adapter weights must be loaded on top of the base LLaMA-3-8B model using the PEFT library's PeftModel wrapper (or merged into the base weights; see "How to Get Started with the Model" below).

Example use case:

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model (gated; requires accepting Meta's license on the Hub)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "saramshgautam/lora-llama-8b-github")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
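
A minimal generation sketch follows. The prompt format here is an assumption (the template used during fine-tuning is not documented in this card); adjust it to match your data.

issue_text = "Title: Crash on startup\nBody: The app exits with a segfault when the config file is missing."
prompt = f"Summarize the following GitHub issue:\n\n{issue_text}\n\nSummary:"  # assumed prompt format

# Tokenize, generate greedily, and decode only the newly generated tokens
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))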

Intended Use

  • Research in summarization of technical conversations

  • Augmenting code review and issue tracking pipelines

  • Studying model adaptation via parameter-efficient fine-tuning

Out-of-Scope Use

  • Commercial applications (restricted by Meta’s LLaMA license)

  • General-purpose conversation or chatbot use (model optimized for summarization)

Bias, Risks, and Limitations

  • The model inherits biases from both the base LLaMA-3 model and the GitHub dataset. It may underperform on non-technical content or multilingual issues.

Recommendations

Use only for academic or non-commercial research. Evaluate responsibly before using in production or public-facing tools.

How to Get Started with the Model

See the example in “Direct Use” above. You must separately download the base model from Meta and load the LoRA adapters from this repo.
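
If you prefer a standalone checkpoint, the adapters can be folded into the base weights with PEFT's merge_and_unload. A sketch (the merged model remains subject to Meta's license):

from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base_model, "saramshgautam/lora-llama-8b-github")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()
merged.save_pretrained("llama3-8b-github-summarizer-merged")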

Training Details

Training Data

  • Source: Hugging Face lewtun/github-issues
  • Description: Contains 3,000+ GitHub issues and comments from popular open-source repositories.
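
The source dataset can be inspected directly from the Hub. A quick look (note that the curated filtering applied before training is not specified in this card):

from datasets import load_dataset

# Pull the public source dataset from the Hugging Face Hub
ds = load_dataset("lewtun/github-issues", split="train")
print(ds.column_names)

# Peek at one record
print(ds[0])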

Training Procedure

  • LoRA with PEFT
  • 4-bit quantized training using bitsandbytes
  • Mixed precision: bf16
  • Batch size: 8
  • Epochs: 3
  • Optimizer: AdamW
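
A configuration sketch consistent with the settings listed above. The LoRA hyperparameters (r, alpha, target modules) are assumptions — they are not reported in this card:

import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantized loading via bitsandbytes; passed as
# quantization_config=bnb_config when loading the base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA settings (r/lora_alpha/target_modules are illustrative, not from the card)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Settings reported above: bf16 mixed precision, batch size 8, 3 epochs, AdamW
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    bf16=True,
    optim="adamw_torch",
)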

Evaluation

Metrics

ROUGE-1, ROUGE-2, ROUGE-L, ROUGE-Lsum on a 500-issue test set
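
Scores of this form can be computed with the evaluate library (a sketch with toy strings; the 500-issue test set itself is not distributed with this repository):

import evaluate

rouge = evaluate.load("rouge")

# predictions: model-generated summaries; references: gold summaries
predictions = ["App crashes on startup when the config file is missing."]
references = ["The issue reports a startup crash caused by a missing config file."]

print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}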

Results

Metric        Score
ROUGE-1       0.706
ROUGE-2       0.490
ROUGE-L       0.570
ROUGE-Lsum    0.582

Environmental Impact

  • Hardware Type: 4×A100 GPUs (university HPC cluster)
  • Training Hours: ~4 hours
  • Carbon Estimate: ~10.2 kg CO₂eq (estimated with the ML CO2 Impact calculator)

Citation

APA:

Gautam, S. (2025). LoRA-LLaMA3-8B-GitHub-Summarizer: Adapter weights for summarizing GitHub issues using LLaMA 3. Hugging Face. https://huggingface.co/saramshgautam/lora-llama-8b-github

BibTeX:

@misc{gautam2025lora,
  title={LoRA-LLaMA3-8B-GitHub-Summarizer},
  author={Gautam, Saramsh},
  year={2025},
  howpublished={\url{https://huggingface.co/saramshgautam/lora-llama-8b-github}},
  note={Fine-tuned adapter weights using LoRA on Meta-LLaMA-3-8B}
}

Contact

Questions and feedback can be directed to Saramsh Gautam (Louisiana State University) via this repository's Community tab.

Framework Versions

  • PEFT: 0.15.2
  • Transformers: 4.40.0
  • Bitsandbytes: 0.41.3
  • Datasets: 2.18.0