llm_topic_modelling / tools / chatfuncs.py

Commit History

Topic deduplication/merging now separated from summarisation. Gradio upgrade
854a758

seanpedrickcase committed

Moved spaces GPU calls back to the main functions, as otherwise they don't seem to work correctly
d4f58e6

seanpedrickcase committed

Trying to move @spaces.GPU calls onto the specific Gemma call functions to use the local model more efficiently
d9427a2

seanpedrickcase committed
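The commit above scopes the ZeroGPU decorator to the local-model calls only. A minimal sketch of that pattern follows; the `call_gemma` name is illustrative (not the repo's actual function), and a no-op fallback keeps the module importable outside a Space:

```python
# Sketch: apply @spaces.GPU only to the function that actually runs the
# local model, so GPU time is requested just for those calls.
try:
    import spaces  # available inside Hugging Face ZeroGPU Spaces
    gpu_decorator = spaces.GPU
except ImportError:
    # Outside a Space, fall back to a no-op decorator.
    def gpu_decorator(func):
        return func

@gpu_decorator
def call_gemma(model, prompt: str) -> str:
    """Run one generation on the local Gemma model.

    Only this function requests the GPU; API-based LLM calls elsewhere
    in the app never trigger a ZeroGPU allocation.
    """
    return model(prompt)
```

Decorating the narrowest possible function is the point of the commit: the allocation lasts only as long as the Gemma call itself.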

Enhanced local model support by adding model loading functionality in chatfuncs.py and updating llm_api_call.py to utilize local models for topic extraction and summarization. Improved model path handling and ensured compatibility with GPU configurations.
f5a842c

seanpedrickcase committed
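The model-path handling and GPU configuration mentioned above could look roughly like this. The directory layout, function name, and keyword arguments are assumptions for illustration, not the repo's actual values; `n_gpu_layers` and `n_ctx` are standard llama-cpp-python load options:

```python
import os

def resolve_model_config(model_dir: str, filename: str,
                         gpu_available: bool) -> dict:
    """Build keyword arguments for loading a local GGUF model.

    Offloads all layers to the GPU when one is available
    (n_gpu_layers=-1), otherwise runs fully on CPU (n_gpu_layers=0).
    """
    model_path = os.path.join(model_dir, filename)
    return {
        "model_path": model_path,
        "n_gpu_layers": -1 if gpu_available else 0,
        "n_ctx": 4096,  # context window; illustrative default
    }
```

The resulting dict would be passed to `llama_cpp.Llama(**config)` at load time, keeping the GPU/CPU decision in one place.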

Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change.
a10d388

seanpedrickcase committed
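The refactor above moves the model from implicit module state into an explicit parameter, so llm_api_call.py can pass in whichever model chatfuncs loaded. A hedged sketch, with an illustrative signature and generation kwargs (`create_completion` is the llama-cpp-python completion method):

```python
def call_llama_cpp_model(prompt: str, gen_config: dict, model) -> dict:
    """Run one completion against an already-loaded llama.cpp model.

    The model is now an argument rather than a module-level global,
    so callers control which loaded model handles the request.
    """
    return model.create_completion(prompt, **gen_config)
```

A caller would do something like `call_llama_cpp_model(formatted_prompt, {"max_tokens": 512}, model)`, with the model object created once in chatfuncs and threaded through.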

Moved model loading to the chatfuncs submodule to hopefully avoid GPU run issues
1f0d087

seanpedrickcase committed

Adding some compatibility with Zero GPU spaces
63067b7

seanpedrickcase committed

Added support for using local models (specifically Gemma 2b) for topic extraction and summarisation. Generally improved output format safeguards.
b7f4700

seanpedrickcase committed