Lux-Llama

This repository contains a fine-tuned version of the Llama-3.1-8B-Instruct model, adapted specifically for Luxembourgish. The fine-tuning was performed with LoRA (Low-Rank Adaptation) on a dataset crafted for reasoning in Luxembourgish, using the computational resources of Meluxina, a high-performance computing (HPC) platform operated by LuxProvide.

Model Overview

  • Base Model: Llama-3.1-8B-Instruct
  • Fine-Tuning Method: LoRA (Low-Rank Adaptation)
  • Compute Platform: Meluxina by LuxProvide
  • Fine-Tuning Framework: Unsloth
  • Status: Early release. The model and dataset are still being improved, and feedback is welcome.

About Meluxina

Meluxina is Luxembourg's national supercomputer, launched in June 2021 by LuxProvide. It is built on the EVIDEN BullSequana XH2000 platform and provides:

  • 18 PetaFlops of computing power.
  • 20 PetaBytes of storage capacity.
  • A scalable architecture integrating simulation, modeling, data analytics, and AI.

In the Top500 ranking, Meluxina placed 36th globally and was recognized as the greenest supercomputer in the EU. Named after Melusina, the mermaid of Luxembourgish legend, it symbolizes digital innovation and uses water-cooling technology for energy efficiency.

Features

  • Language: Luxembourgish
  • Specialization: Reasoning for complex problem-solving and step-by-step explanations.
  • Efficiency: LoRA fine-tuning ensures minimal computational overhead while maintaining high performance.

Installation

To use the fine-tuned model, ensure you have the following dependencies installed. The commands below are written for a Jupyter notebook; in a regular shell, drop the %%capture magic and the leading !:

%%capture
!pip install unsloth
# Also get the latest nightly Unsloth
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
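
To confirm the package is visible to your environment before loading the model, here is a quick check using only the Python standard library:

from importlib.metadata import version

# Print the installed Unsloth version to verify the installation succeeded
print(version("unsloth"))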

You can then load the model as follows:

from unsloth import FastLanguageModel
import torch
from transformers import TextStreamer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "aiplanet/Lux-Llama",
    max_seq_length = 8192,  # context length used for inference
    dtype = None,           # auto-detect (e.g. bfloat16 on recent GPUs)
    load_in_4bit = True,    # 4-bit quantization to reduce VRAM usage
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
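
The snippets below assume a CUDA-capable GPU (4-bit loading via bitsandbytes and the .to("cuda") call both require one). A quick check before proceeding:

import torch

# Fail early with a clear message if no GPU is available
assert torch.cuda.is_available(), "A CUDA GPU is required for this example"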

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Output:
{}"""

inputs = tokenizer(
[
    alpaca_prompt.format(
        "Proposéiert mir en neit Rezept mat Eeër a Brout", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048)

Output (the model proposes a bread-and-egg frittata recipe in Luxembourgish, with ingredients and step-by-step instructions):

# Sécher! Hei ass e leckert Rezept dat Eeër a Brout kombinéiert: Brout Eeër Frittata Mat dësem Rezept kënnt Dir e leckere Brout Eeër Frittata maachen, perfekt fir e früh Moien Frühstück oder e leckeren Snack. 

# Zutaten: 
# - 4 grouss Eeër
# - 1/2 Coupe geschnidden Brout
# - 1/2 Coupe gerappte Cheddar Kéis
# - 1/2 Coupe gerappte Parmesan Kéis
# - 1/4 Coupe gerappte Mozzarella Kéis
# - 1/4 Coupe gehackte frësche Petersilie
# - Salz a Peffer fir ze schmaachen
# - 2 Esslöffel Olivenueleg

# Instruktioune: 
# 1. Den Ofen op 375 ° F (190 ° C) virhëtzen.
# 2. An enger grousser Schossel, d'Eeër, d'Brout, d'Cheddar Kéis, d'Parmesan Kéis, d'Mozzarella Kéis, d'Petersilie, Salz a Peffer mëschen.
# 3. Huelt eng 9-Zoll (23 cm) Liewensmëttel Schossel a fëllt se mat der Eeër Mëschung. 
# 4. Dréckt d'Schossel mat Olivenueleg.
# 5. Bake fir ongeféier 35-40 Minutten, oder bis d'Eeër voll gekacht sinn a d'Brout e liicht brong ass. 
# 6. Huelt de Frittata aus dem Ofen a léisst et e puer Minutten ofkillen ier Dir et servéiert. 
# Genéisst Är lecker Brout Eeër Frittata!
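
For repeated queries, the steps above can be wrapped in a small convenience function. The helper below is a hypothetical sketch (ask_lux is not part of this repository); it reuses alpaca_prompt, model, and tokenizer from the snippets above and captures the generated text instead of streaming it:

def ask_lux(instruction, context = ""):
    # Build the Alpaca-style prompt, leaving the output field blank
    prompt = alpaca_prompt.format(instruction, context, "")
    inputs = tokenizer([prompt], return_tensors = "pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens = 2048)
    # Decode the full sequence, dropping special tokens such as <|eot_id|>
    text = tokenizer.batch_decode(outputs, skip_special_tokens = True)[0]
    # Keep only the generated answer after the "### Output:" marker
    return text.split("### Output:")[-1].strip()

print(ask_lux("Proposéiert mir en neit Rezept mat Eeër a Brout"))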

Fine-Tuning Process

  1. Framework: The fine-tuning was conducted using Unsloth, an efficient LoRA fine-tuning library.
  2. Steps:
    • Initialized the Llama-3.1-8B-Instruct model.
    • Applied LoRA adapters for efficient training via Unsloth (a representative configuration is sketched below).
    • Evaluated the best checkpoints on preliminary benchmarks.
  3. Hardware: High-performance NVIDIA A100 GPUs provided by Meluxina ensured rapid convergence.
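
The exact training hyperparameters have not been published yet. The sketch below illustrates a typical Unsloth LoRA setup of the kind described above; the base-model name, rank, alpha, target modules, and other values are assumptions for illustration, not the settings used for Lux-Llama:

from unsloth import FastLanguageModel

# Load the base model (name and settings are illustrative assumptions)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length = 8192,
    load_in_4bit = True,
)

# Attach LoRA adapters: only these low-rank matrices are trained,
# which keeps memory use and compute far below full fine-tuning
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                    # LoRA rank (illustrative)
    lora_alpha = 16,           # scaling factor (illustrative)
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)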

Dataset Description

  • In progress

Benchmarking

  • In progress

Acknowledgments

This work leverages computational resources and support from Meluxina by LuxProvide.
