If an SLM is trained or fine-tuned on random datasets from a specific field, is it good practice to end each query with the prompt below to ensure that the response is not arbitrary? Does an SLM like 'Phi-4-mini-instruct' fully comply with such a prompt?
Prompt: "Ensure that the response is fact-based, provable, and does not make assumptions. Avoid potential sources of error or uncertainty in the response. Additionally, include a dedicated section titled 'Known Issues and Sources of Error' to identify any possible limitations, ambiguities, or uncertainties related to the explanation."
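If the suffix is applied, it should be applied consistently rather than typed by hand each time. A minimal sketch of wrapping every user query with the grounding instruction before it reaches the model (the helper name `build_messages` is mine, not an existing API; the suffix text is the prompt quoted above):

```python
# The grounding instruction from the question, appended to every query.
GROUNDING_SUFFIX = (
    "Ensure that the response is fact-based, provable, and does not make "
    "assumptions. Avoid potential sources of error or uncertainty in the "
    "response. Additionally, include a dedicated section titled 'Known "
    "Issues and Sources of Error' to identify any possible limitations, "
    "ambiguities, or uncertainties related to the explanation."
)

def build_messages(user_query: str) -> list[dict]:
    """Return a chat-format message list with the suffix appended
    to the user turn (hypothetical helper, for illustration)."""
    return [
        {"role": "user", "content": f"{user_query}\n\n{GROUNDING_SUFFIX}"},
    ]

# With Hugging Face transformers, this list could then be passed to
# tokenizer.apply_chat_template(...) and on to model.generate(...).
msgs = build_messages("Explain spontaneous symmetry breaking.")
```

Whether Phi-4-mini-instruct actually honors the instruction (in particular, reliably emitting the 'Known Issues and Sources of Error' section) is a separate empirical question; small instruct models often follow such formatting instructions only partially.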
Haris Naeem (harisnaeem)
AI & ML interests
Fine-tuning open-source LMs for advanced physics research (NLP).
Recent Activity
- Opened "Phi-4 model loads successfully on text-generation-webui, but Phi-4-mini-instruct does not" (microsoft/Phi-4-mini-instruct, #21) about 6 hours ago
- Commented on the article "After 500+ LoRAs made, here is the secret" 2 days ago
- Opened "Can I Fine-tune Phi-4-mini-instruct locally without a GPU?" (microsoft/Phi-4-mini-instruct, #19) 3 days ago
- Opened "A basic query" (#1) 7 days ago
- Upvoted a collection 7 days ago
- Opened "Is QLoRA used to quantize phi-4 to a 4-bit GGUF or another tool?" (#4) 8 days ago