seanpedrickcase committed
Commit 14d074b · 1 Parent(s): 96d818b

Hugging Face spaces implementation seems unable to install latest llama-cpp-python version, so downgrading.
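For reference, the two install strategies this diff toggles between can be run directly with pip. This is a sketch assuming a CPU-only Linux environment such as a Hugging Face Space; note that the `-C` / `--config-settings` shorthand requires a reasonably recent pip.

```shell
# Option kept by this commit: prebuilt CPU wheel of the older release,
# avoiding a source build on the Space's builder image.
pip install llama-cpp-python==0.3.2 \
    --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

# Option commented out in requirements.txt: build 0.3.8 from source with
# OpenBLAS enabled; needs cmake and OpenBLAS dev headers in the environment.
pip install llama-cpp-python==0.3.8 \
    -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
```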

Files changed (2):
  1. requirements.txt +2 -2
  2. requirements_aws.txt +1 -1
requirements.txt CHANGED
@@ -6,8 +6,8 @@ google-generativeai==0.8.5
 pandas==2.2.3
 transformers==4.51.3
 # For Windows https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.2/llama_cpp_python-0.3.2-cp311-#cp311-win_amd64.whl -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
-#llama-cpp-python==0.3.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # Older version based on wheel if the below line doesn't work
-llama-cpp-python==0.3.8 -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
+llama-cpp-python==0.3.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # For linux if dependencies for below build command are not available in the environment
+#llama-cpp-python==0.3.8 -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
 torch==2.5.1 --extra-index-url https://download.pytorch.org/whl/cpu
 sentence_transformers==4.1.0
 faiss-cpu==1.10.0
requirements_aws.txt CHANGED
@@ -8,7 +8,7 @@ google-generativeai==0.8.5
 pandas==2.2.3
 transformers==4.51.3
 # For Windows https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.2/llama_cpp_python-0.3.2-cp311-#cp311-win_amd64.whl -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
-#llama-cpp-python==0.3.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # For linux
+#llama-cpp-python==0.3.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # For linux if dependencies for below build command are not available in the environment
 llama-cpp-python==0.3.8 -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
 #torch==2.5.1 --extra-index-url https://download.pytorch.org/whl/cpu # Loaded in Dockerfile
 #sentence_transformers==4.1.0 # Loaded in Dockerfile