# text_summariser / requirements_gpu.txt
# Dockerfile now loads models to a local folder; a custom output folder can be used.
# Requirements for GPU-enabled summarisation are kept in this separate file to avoid HF Space build issues.
gradio==4.36.0
transformers
pyarrow
openpyxl
# Extra wheel indexes must be given on their own lines in a requirements file,
# not appended to a requirement specifier.
--extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
--extra-index-url https://download.pytorch.org/whl/cu121
llama-cpp-python==0.2.77
torch==2.3.1
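Since these pins target CUDA 12.1 wheels, a quick sanity check after `pip install -r requirements_gpu.txt` can confirm the GPU stack resolved correctly. The snippet below is a sketch, not part of the repo; it only probes `torch.cuda.is_available()` when torch is actually installed:

```python
import importlib.util


def check_gpu_stack():
    """Report which GPU-relevant packages from requirements_gpu.txt
    are importable, plus CUDA availability when torch is present."""
    status = {
        name: importlib.util.find_spec(name) is not None
        for name in ("torch", "llama_cpp", "transformers", "gradio")
    }
    if status["torch"]:
        import torch
        # True only if the CUDA-enabled wheel was installed and a GPU is visible.
        status["cuda"] = torch.cuda.is_available()
    return status


print(check_gpu_stack())
```

If `cuda` comes back `False` on a GPU host, the CPU-only torch wheel was likely picked up instead of the `cu121` build from the extra index.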