# llm_topic_modelling / requirements_gpu.txt
# Commit b9301bd (seanpedrickcase): Upgraded Gradio. More resilient to cases where
# LLM calls do not return valid markdown tables (will reattempt with a different
# temperature). Minor fixes.
pandas==2.2.3
gradio==5.20.1
spaces==0.31.0
boto3==1.35.71
pyarrow==18.1.0
openpyxl==3.1.3
markdown==3.7
tabulate==0.9.0
lxml==5.3.0
google-generativeai==0.8.3
html5lib==1.1
beautifulsoup4==4.12.3
rapidfuzz==3.10.1
torch==2.4.1 --extra-index-url https://download.pytorch.org/whl/cu121
#llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
# Pin the exact llama-cpp-python CUDA 12.1 wheel (Python 3.10, linux_x86_64) for Hugging Face Spaces compatibility
https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.90-cu121/llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl
transformers==4.49.0
numpy==1.26.4
typing_extensions==4.12.2