Add -DLLAMA_CURL=OFF
Seems to be causing build errors: https://github.com/ggml-org/llama.cpp/issues/12925
```
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- CUDA host compiler is GNU 13.3.0
-- Including CUDA backend
-- Could NOT find CURL (missing: CURL_LIBRARY CURL_INCLUDE_DIR)
CMake Error at common/CMakeLists.txt:90 (message):
Could NOT find CURL. Hint: to disable this feature, set -DLLAMA_CURL=OFF
-- Configuring incomplete, errors occurred!
gmake: Makefile: No such file or directory
gmake: *** No rule to make target 'Makefile'. Stop.
cp: cannot stat './build/bin/llama-*': No such file or directory
* Running on local URL: http://0.0.0.0:7860
```
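The configure failure is the root cause: because CMake never generates a Makefile, the subsequent gmake and cp steps fail in cascade. This commit follows the hint in the error message and disables the feature. Alternatively, the dependency could be satisfied instead of disabled; a minimal sketch, assuming a Debian/Ubuntu base image (the package name is an assumption, not part of this commit):

```bash
# Alternative fix (assumption: Debian/Ubuntu base image): install the
# CURL development headers so CMake's find_package(CURL) can locate
# CURL_LIBRARY and CURL_INCLUDE_DIR.
apt-get update && apt-get install -y libcurl4-openssl-dev
```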
start.sh
CHANGED
```diff
@@ -12,7 +12,7 @@ if [[ -z "${RUN_LOCALLY}" ]]; then
 fi
 
 cd llama.cpp
-cmake -B build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=${GGML_CUDA}
+cmake -B build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=${GGML_CUDA} -DLLAMA_CURL=OFF
 cmake --build build --config Release -j --target llama-quantize llama-gguf-split llama-imatrix
 cp ./build/bin/llama-* .
 rm -rf build
```
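As an aside, the follow-on errors in the log above (gmake finding no Makefile, cp finding no binaries) happen because the script keeps going after the configure step fails. A minimal hardening sketch, not part of this commit, that would make this section of start.sh fail fast; the `${GGML_CUDA:-ON}` default is an assumption added so `set -u` does not trip on an unset variable:

```bash
#!/usr/bin/env bash
# Abort on the first error instead of cascading into
# "No rule to make target 'Makefile'" and a failed cp, as in the log above.
set -euo pipefail

cd llama.cpp
# GGML_CUDA defaulting to ON is an assumption; start.sh sets it earlier.
cmake -B build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA="${GGML_CUDA:-ON}" -DLLAMA_CURL=OFF
cmake --build build --config Release -j --target llama-quantize llama-gguf-split llama-imatrix
cp ./build/bin/llama-* .
rm -rf build
```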