Spaces: seanpedrickcase / llm_topic_modelling
3 contributors · History: 15 commits

Latest commit: a10d388 by seanpedrickcase, 5 months ago
"Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change."
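The latest commit message describes passing the model explicitly into call_llama_cpp_model in chatfuncs.py. As a rough, hypothetical sketch of what that kind of signature change can look like with the llama-cpp-python Llama API (the function name comes from the commit message; the parameter names, defaults, and body below are assumptions, not the Space's actual code):

```python
# Hypothetical sketch only: illustrates passing the loaded model as a parameter,
# as described in the commit message, rather than relying on a module-level global.
from llama_cpp import Llama


def call_llama_cpp_model(prompt: str, model: Llama,
                         max_tokens: int = 512, temperature: float = 0.1) -> str:
    """Run a single completion against an already-loaded llama.cpp model."""
    output = model.create_completion(
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return output["choices"][0]["text"]


# Usage (model path is a placeholder):
# llm = Llama(model_path="path/to/model.gguf", n_ctx=4096)
# print(call_llama_cpp_model("Summarise the responses into topics...", llm))
```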
File | Size | Last commit | Updated
.github/ | (dir) | First commit | 5 months ago
tools/ | (dir) | Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change. | 5 months ago
.dockerignore | 137 Bytes | First commit | 5 months ago
.gitignore | 137 Bytes | First commit | 5 months ago
Dockerfile | 1.97 kB | Corrected references to extra-index-url in requirements/Dockerfile | 5 months ago
README.md | 2.24 kB | Changed readme sdk back to gradio, updated intro text | 5 months ago
app.py | 21.9 kB | Moved model load to chatfuncs submodule to hopefully avoid gpu run issues | 5 months ago
requirements.txt | 632 Bytes | Corrected references to extra-index-url in requirements/Dockerfile | 5 months ago
requirements_aws.txt | 353 Bytes | Corrected prompt. Now runs Haiku correctly | 5 months ago
requirements_cpu.txt | 421 Bytes | Corrected references to extra-index-url in requirements/Dockerfile | 5 months ago
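Several of the file-level commits mention correcting extra-index-url references in requirements.txt and the Dockerfile. For context, pip's --extra-index-url directive lets a requirements file pull wheels (for example, hardware-specific llama-cpp-python builds) from an index in addition to PyPI; the fragment below is purely illustrative, with a placeholder index URL and package list rather than this Space's actual requirements:

```text
# Illustrative requirements.txt fragment (placeholder index URL, not the Space's real one)
--extra-index-url https://example.com/custom-wheel-index/
llama-cpp-python
gradio
```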