Tags: Transformers · GGUF · English · llama
TheBloke committed
Commit 7e28f2d · 1 Parent(s): 4320933

Upload README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -173,7 +173,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/Camel-Platypus2-13B-GGML", model_file="camel-platypus2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/Camel-Platypus2-13B-GGUF", model_file="camel-platypus2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```