LLaMA 2 7B fine-tuned on the Suchinthana/databricks-dolly-15k-sinhala dataset. 3,000 datapoints were used for fine-tuning, run for 200 steps (~1.01 epochs).
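As a minimal sketch of how a LLaMA 2 instruction fine-tune like this is typically queried, the snippet below builds a Dolly/Alpaca-style prompt and shows a generation helper using Hugging Face Transformers. The prompt template and the `generate_answer` helper are assumptions for illustration (the exact template depends on how the training data was formatted), and the model id is a placeholder to substitute with this repository's id.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a Dolly/Alpaca-style instruction prompt.

    This template is an assumption; check the fine-tuning script for the
    exact format used during training.
    """
    if context:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Context:\n{context}\n\n"
            f"### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate_answer(instruction: str, model_id: str) -> str:
    """Load the model and generate a response (hypothetical helper).

    Requires `pip install transformers torch` and enough memory for a
    7B-parameter model; `model_id` should be this repository's id.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Prompt construction alone is cheap to inspect:
print(build_prompt("ශ්‍රී ලංකාවේ අගනුවර කුමක්ද?"))  # "What is the capital of Sri Lanka?"
```

`generate_answer` is defined but not called above, so the sketch can be read without downloading the 7B weights.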


Dataset used to train this model: Suchinthana/databricks-dolly-15k-sinhala