prithivMLmods committed on
Commit 067ad06 · verified · 1 Parent(s): aa40e0c

Update README.md

Files changed (1)
  1. README.md +32 -8
README.md CHANGED
@@ -21,15 +21,8 @@ pipeline_tag: text-generation
 
 SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. Fine-tuning a language model like SmolLM involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed step-by-step guide based on the provided notebook file.
 
- | **Notebook** | **Link** |
- |--------------|----------|
- | SmolLM-FT-360M | [SmolLM-FT-360M.ipynb](https://huggingface.co/datasets/prithivMLmods/FinetuneRT-Colab/blob/main/SmolLM-FT/SmolLM-FT-360M.ipynb) |
 
- ---
-
- ### How to use
-
- ### Transformers
+ # How to use with `Transformers`
 ```bash
 pip install transformers
 ```
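
With `transformers` installed, a checkpoint can be loaded and queried directly. As a minimal, non-authoritative sketch (the model id below is an assumption; point it at your own fine-tuned output from the notebook instead if you have one):

```python
# Minimal inference sketch for a SmolLM2-class checkpoint.
# "HuggingFaceTB/SmolLM2-360M-Instruct" is an assumed model id; swap in your
# own fine-tuned directory or Hub repo as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "What is gravity?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```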
@@ -247,6 +240,37 @@ After training, save the fine-tuned model and tokenizer to a local directory.
 ```
 
  ---
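
The hunk above picks up where the notebook saves its training outputs. A rough sketch of that save step, assuming a Transformers-style `model`/`tokenizer` pair (the checkpoint id and output path below are illustrative stand-ins, not the notebook's actual values):

```python
# Sketch: persist a model and tokenizer to a local directory so they can be
# reloaded later with from_pretrained(save_dir). Names here are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M"   # stand-in for the trained model
save_dir = "./smollm2-360m-finetuned"       # assumed output path

model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```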
+
+ ### **Run with Ollama [Ollama Run]**
+
+ Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly; a consolidated command sketch follows the table.
+
+ ## Quick Start: Step-by-Step Guide
+
+ | Step | Description | Command / Instructions |
+ |------|-------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | 1 | **Install Ollama 🦙** | Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your system. |
+ | 2 | **Create Your Model File** | - Create a file named after your model, e.g., `metallama`. |
+ | | | - Add the following line to specify the base model: |
+ | | | ```bash |
+ | | | FROM Llama-3.2-1B.F16.gguf |
+ | | | ``` |
+ | | | - Ensure the base model file is in the same directory. |
+ | 3 | **Create and Verify the Model** | Run the following commands to create and verify your model: |
+ | | | ```bash |
+ | | | ollama create metallama -f ./metallama |
+ | | | ollama list |
+ | | | ``` |
+ | 4 | **Run the Model** | Use the following command to start your model: |
+ | | | ```bash |
+ | | | ollama run metallama |
+ | | | ``` |
+ | 5 | **Interact with the Model** | Once the model is running, interact with it: |
+ | | | ```plaintext |
+ | | | >>> Tell me about Space X. |
+ | | | Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration... |
+ | | | ``` |
+
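
Because fenced commands inside table cells do not always render cleanly, the same steps are consolidated below as a plain shell sketch. The `metallama` model name and `Llama-3.2-1B.F16.gguf` file are the table's own examples; substitute the GGUF you actually want to serve:

```bash
# Consolidated sketch of steps 2-4 from the table above.

# Write a Modelfile named `metallama` next to the GGUF weights.
cat > metallama <<'EOF'
FROM Llama-3.2-1B.F16.gguf
EOF

# Create the model from the Modelfile and confirm it is registered.
ollama create metallama -f ./metallama
ollama list

# Start an interactive session.
ollama run metallama
```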
 ### **Model & Quant**
 
 | **Item** | **Link** |