NikolayKozloff committed 26c22b2 (parent: c2888d8)

Upload README.md with huggingface_hub

---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
language:
- sq
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-lora
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi (lindi
      më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'
base_model: Kushtrim/Phi-3-medium-4k-instruct-sq
---

# NikolayKozloff/Phi-3-medium-4k-instruct-sq-F32-GGUF
This LoRA adapter was converted to GGUF format from [`Kushtrim/Phi-3-medium-4k-instruct-sq`](https://huggingface.co/Kushtrim/Phi-3-medium-4k-instruct-sq) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Kushtrim/Phi-3-medium-4k-instruct-sq) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Phi-3-medium-4k-instruct-sq-f32.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Phi-3-medium-4k-instruct-sq-f32.gguf (...other args)
```

To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
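
If you prefer a single standalone model file instead of passing `--lora` at every launch, llama.cpp also ships an export tool that bakes the adapter into the base weights. A minimal sketch, assuming a `llama-export-lora` binary from a recent llama.cpp build and that the base model has already been downloaded locally as `base_model.gguf` (the output filename is illustrative):

```shell
# Merge the LoRA adapter into the base model, producing one GGUF file
llama-export-lora \
    -m base_model.gguf \
    --lora Phi-3-medium-4k-instruct-sq-f32.gguf \
    -o Phi-3-medium-4k-instruct-sq-merged.gguf

# The merged file can then be used directly, without the --lora flag
llama-cli -m Phi-3-medium-4k-instruct-sq-merged.gguf
```

Merging trades flexibility (you can no longer swap adapters at runtime) for simpler deployment and slightly lower inference overhead.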