Model that I fine-tuned for the Kaggle competition LLM Prompt Recovery.
My Kaggle implementation: Notebook
Usage:
```python
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the 4-bit quantized base model via unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = 512,
    dtype = None,            # auto-detect (float16 / bfloat16)
    load_in_4bit = True,
)

# Attach the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "billa-man/llm-prompt-recovery")
```
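Unsloth models are typically switched into inference mode before generation. If generation seems slow, this one line of the standard unsloth API should help (assuming it works the same on the PeftModel wrapper, as it does on unsloth's own PEFT models):

```python
FastLanguageModel.for_inference(model)  # enable unsloth's faster inference path
```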
Input to the LLM is in the following format:

```python
{"role": "user", "content": "Return the prompt that was used to transform the original text into the rewritten text. Original Text: " + original_text + ", Rewritten Text: " + rewritten_text}
```
An example:

```python
original_text = "Recent breakthroughs have demonstrated several ways to induce magnetism in materials using light, with significant implications for future computing and data storage technologies."
rewritten_text = "Light-induced magnetic phase transitions in non-magnetic materials have been experimentally demonstrated through ultrafast optical excitation, offering promising pathways for photomagnetic control in next-generation spintronic devices and quantum computing architectures."

inference(original_text, rewritten_text)  # see the notebook for the full code; a sketch follows below
```

```
> 'Rewrite the following sentence while maintaining its original meaning but using different wording and terminology: "Recent breakthroughs have demonstrated several ways to induce magnetism in materials using light, with significant implications for future computing and data storage technologies."'
```
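The actual `inference` helper lives in the notebook; below is a minimal sketch of what it could look like, assuming the `model` and `tokenizer` loaded above, the chat template bundled with the tokenizer, and that `FastLanguageModel.for_inference(model)` has already been called. The generation settings are illustrative, not the notebook's exact values.

```python
def inference(original_text: str, rewritten_text: str) -> str:
    # Build the user message in the format described above
    messages = [{
        "role": "user",
        "content": "Return the prompt that was used to transform the original text "
                   "into the rewritten text. Original Text: " + original_text
                   + ", Rewritten Text: " + rewritten_text,
    }]
    # Tokenize with the model's chat template and append the assistant header
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(input_ids, max_new_tokens=128, use_cache=True)
    # Decode only the newly generated tokens (the recovered prompt)
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
```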
Model tree for billa-man/llm-prompt-recovery
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned from: unsloth/Llama-3.2-3B-Instruct