# Uploaded model
- **Developed by:** Ishika08
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This Phi model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## How to Use the Model for Inference
You can run inference on the model via Hugging Face's Inference API by following the steps below.
### 1. Install Required Libraries
Ensure that you have the `requests` library installed:

```bash
pip install requests
```
### 2. Query the Model via the Hugging Face Inference API
```python
import requests

# API URL for the model hosted on Hugging Face
API_URL = "https://api-inference.huggingface.co/models/Ishika08/phi-4_fine-tuned_mdl"

# Set up your Hugging Face API token (replace <HF_TOKEN> with your own token)
HEADERS = {"Authorization": "Bearer <HF_TOKEN>"}

# The input you want to pass to the model
payload = {"inputs": "What is the capital of France? Tell me some of the tourist places in bullet points."}

# Make the request to the API
response = requests.post(API_URL, headers=HEADERS, json=payload)

# Print the response from the model
print(response.json())  # Get the response output
```
**Output:**

```json
{"generated_text": "Paris is the capital of France. Some of the famous tourist places include:\n- Eiffel Tower\n- Louvre Museum\n- Notre-Dame Cathedral\n- Sacré-Cœur Basilica"}
```
### 3. Use the InferenceClient from huggingface_hub
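This approach requires the `huggingface_hub` library:

```bash
pip install huggingface_hub
```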
```python
from huggingface_hub import InferenceClient

# Initialize the client with the model name and your Hugging Face token
# (replace <HF_TOKEN> with your own token)
client = InferenceClient(model="Ishika08/phi-4_fine-tuned_mdl", token="<HF_TOKEN>")

# Perform inference (text generation in this case)
response = client.text_generation("What is the capital of France? Tell me about Eiffel Tower history in bullet points.")

# Print the response from the model
print(response)
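```

The `text_generation` method also accepts optional decoding parameters from the `huggingface_hub` API. A short sketch with illustrative values (the specific numbers below are assumptions, not recommendations):

```python
response = client.text_generation(
    "What is the capital of France? Tell me about Eiffel Tower history in bullet points.",
    max_new_tokens=200,  # cap the length of the generated answer
    temperature=0.7,     # higher values give more varied output
    do_sample=True,      # enable sampling instead of greedy decoding
)
print(response)
```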