Model Card for ORANSight Gemma-2B

This model belongs to the first release of the ORANSight family of models.

  • Developed by: NextG Lab @ NC State
  • License: gemma
  • Context Window: 8192 tokens
  • Fine Tuning Framework: Unsloth
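The 8192-token window bounds the prompt plus the generated reply, so long chats eventually need trimming. Below is a minimal sketch of one way to do that: drop the oldest non-system turns until the conversation fits. The `approx_tokens` heuristic (roughly 4 characters per token) and the 512-token reply reserve are illustrative assumptions, not part of the model's API; for exact counts, use the model's tokenizer.

```python
def approx_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    # Use the model's tokenizer for exact counts.
    return max(1, len(text) // 4)

def trim_history(messages, budget=8192, reserve=512):
    """Drop the oldest non-system turns until the estimated prompt size
    fits the context window, leaving `reserve` tokens for the reply."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    while turns and sum(approx_tokens(m["content"])
                        for m in system + turns) > budget - reserve:
        turns.pop(0)  # discard the oldest turn first
    return system + turns

history = [{"role": "system", "content": "You are an O-RAN expert assistant."},
           {"role": "user", "content": "x" * 40000},  # an oversized old turn
           {"role": "user", "content": "Explain the E2 interface."}]
trimmed = trim_history(history)
```

Here the oversized old turn is dropped while the system message and the latest query survive.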

Generate with Transformers

Below is a quick example of how to use the model with Hugging Face Transformers:

from transformers import pipeline

# Load the model
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Gemma_2_2B_Instruct")

# Example query
messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

result = chatbot(messages, max_new_tokens=256)
# For chat-style input, generated_text holds the full conversation;
# the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
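Under the hood, the pipeline applies the model's chat template before generation. For illustration only, here is a sketch of the base Gemma-2 instruct format; the fine-tuned model may ship a modified template, and `render_gemma_chat` is a hypothetical helper, not part of the Transformers API. Note that base Gemma templates define only user and model turns, so this sketch folds the system message into the first user turn.

```python
def render_gemma_chat(messages):
    """Render a chat into the base Gemma-2 prompt format (sketch only).

    Gemma chat templates have no system role, so any system message
    is prepended to the first user message.
    """
    system = ""
    prompt = "<bos>"
    for msg in messages:
        if msg["role"] == "system":
            system = msg["content"] + "\n\n"
            continue
        role = "model" if msg["role"] == "assistant" else "user"
        content = system + msg["content"] if role == "user" else msg["content"]
        system = ""  # fold the system message into the first user turn only
        prompt += f"<start_of_turn>{role}\n{content}<end_of_turn>\n"
    # End with an open model turn so generation continues as the assistant.
    return prompt + "<start_of_turn>model\n"

messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]
prompt = render_gemma_chat(messages)
```

In practice, prefer `tokenizer.apply_chat_template(messages)` so the template shipped with the checkpoint is used.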

Coming Soon

A detailed paper documenting the experiments and results achieved with this model will be available soon. In the meantime, if you use this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.

@article{gajjar2024oran,
  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
  author={Gajjar, Pranshav and Shah, Vijay K},
  journal={arXiv preprint arXiv:2407.06245},
  year={2024}
}

  • Model size: 2.61B params (Safetensors)
  • Tensor type: BF16
