Spaces: seanpedrickcase / llm_topic_modelling (Running)

3 contributors · History: 25 commits
Latest commit b0e08c8 (3 months ago) by seanpedrickcase: Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files.
  • .github · First commit · 6 months ago
  • tools · Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. · 3 months ago
  • .dockerignore · 137 Bytes · First commit · 6 months ago
  • .gitignore · 137 Bytes · First commit · 6 months ago
  • Dockerfile · 1.91 kB · Topic deduplication/merging now separated from summarisation. Gradio upgrade · 4 months ago
  • README.md · 2.65 kB · Updated intro and readme to link to datasets · 6 months ago
  • app.py · 25.8 kB · Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. · 3 months ago
  • requirements.txt · 437 Bytes · Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. · 3 months ago
  • requirements_aws.txt · 369 Bytes · Topic deduplication/merging now separated from summarisation. Gradio upgrade · 4 months ago
  • requirements_gpu.txt · 633 Bytes · Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. · 3 months ago