barunsaha committed
Commit 102af4e · unverified · 2 parents: 9f63a8c 00e2c74

Merge pull request #85 from barun-saha/ollama


Fix bugs in offline mode and PPTX template access

Files changed (3)
  1. README.md +7 -0
  2. app.py +5 -2
  3. requirements.txt +2 -4
README.md CHANGED

@@ -88,13 +88,20 @@ Offline LLMs are made available via Ollama. Therefore, a pre-requisite here is t
  In addition, the `RUN_IN_OFFLINE_MODE` environment variable needs to be set to `True` to enable the offline mode. This, for example, can be done using a `.env` file or from the terminal. The typical steps to use SlideDeck AI in offline mode (in a `bash` shell) are as follows:

  ```bash
+ # Install Git Large File Storage (LFS)
+ sudo apt install git-lfs
+ git lfs install
+
  ollama list  # View locally available LLMs
  export RUN_IN_OFFLINE_MODE=True  # Enable the offline mode to use Ollama
  git clone https://github.com/barun-saha/slide-deck-ai.git
  cd slide-deck-ai
+ git lfs pull  # Pull the PPTX template files
+
  python -m venv venv  # Create a virtual environment
  source venv/bin/activate  # On a Linux system
  pip install -r requirements.txt
+
  streamlit run ./app.py  # Run the application
  ```

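The added `git lfs pull` step matters because, until LFS content is fetched, the checked-out `.pptx` templates are small text pointer stubs rather than real presentations. As a minimal sketch (the template path is illustrative, not necessarily the repository's actual layout; the pointer header is Git LFS's documented stub format), one could detect an un-pulled template like this:

```python
from pathlib import Path

# Git LFS stores large files as small text "pointer" stubs until
# `git lfs pull` replaces them with the real content. Pointer files
# begin with this fixed header line.
LFS_POINTER_PREFIX = b'version https://git-lfs.github.com/spec/v1'


def is_lfs_pointer(path: Path) -> bool:
    """Return True if the file looks like an un-pulled Git LFS pointer stub."""
    with path.open('rb') as f:
        return f.read(len(LFS_POINTER_PREFIX)) == LFS_POINTER_PREFIX


if __name__ == '__main__':
    # Hypothetical template location, for illustration only
    template = Path('pptx_templates/Blank.pptx')
    if template.exists() and is_lfs_pointer(template):
        print('Run `git lfs pull` to fetch the real PPTX templates.')
```

A real PPTX file is a ZIP archive (it starts with the `PK` magic bytes), so this check cannot misfire on a properly pulled template.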
app.py CHANGED

@@ -159,7 +159,7 @@ with st.sidebar:

  if RUN_IN_OFFLINE_MODE:
      llm_provider_to_use = st.text_input(
-         label='2: Enter Ollama model name to use:',
+         label='2: Enter Ollama model name to use (e.g., mistral:v0.2):',
          help=(
              'Specify a correct, locally available LLM, found by running `ollama list`, for'
              ' example `mistral:v0.2` and `mistral-nemo:latest`. Having an Ollama-compatible'
@@ -167,8 +167,11 @@ with st.sidebar:
          )
      )
      api_key_token: str = ''
+     azure_endpoint: str = ''
+     azure_deployment: str = ''
+     api_version: str = ''
  else:
-     # The LLMs
+     # The online LLMs
      llm_provider_to_use = st.sidebar.selectbox(
          label='2: Select a suitable LLM to use:\n\n(Gemini and Mistral-Nemo are recommended)',
          options=[f'{k} ({v["description"]})' for k, v in GlobalConfig.VALID_MODELS.items()],
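The branch above hinges on the `RUN_IN_OFFLINE_MODE` flag, which the README sets via the environment. A minimal sketch of how such a flag could be parsed (the helper name and the accepted truthy values are assumptions, not the app's actual parsing logic):

```python
import os


def run_in_offline_mode(env_var: str = 'RUN_IN_OFFLINE_MODE') -> bool:
    """Interpret an environment variable as a boolean flag.

    Values such as 'True', 'true', or '1' enable offline mode; anything
    else (including an unset variable) leaves it disabled.
    """
    return os.getenv(env_var, 'False').strip().lower() in ('true', '1', 'yes')


if __name__ == '__main__':
    os.environ['RUN_IN_OFFLINE_MODE'] = 'True'
    print(run_in_offline_mode())  # True
```

Parsing explicitly, rather than relying on `bool(os.getenv(...))`, avoids the classic pitfall where any non-empty string, including `'False'`, is truthy.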
requirements.txt CHANGED

@@ -17,15 +17,13 @@ langchain-ollama==0.2.1
  langchain-openai==0.3.3
  streamlit~=1.38.0

- python-pptx~=0.6.21
- # metaphor-python
+ python-pptx~=1.0.2
  json5~=0.9.14
  requests~=2.32.3

  transformers>=4.48.0
  torch==2.4.0

- urllib3~=2.2.1
  lxml~=4.9.3
  tqdm~=4.66.5
  numpy
@@ -38,4 +36,4 @@ anyio==4.4.0

  httpx~=0.27.2
  huggingface-hub~=0.24.5
- ollama~=0.4.4
+ ollama~=0.4.7
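These pins use pip's compatible-release operator: per PEP 440, `~=0.4.7` means `>=0.4.7, ==0.4.*`, which is why bumping `python-pptx` past 0.6.x required changing the spec rather than just reinstalling. A toy sketch of that rule for simple three-part versions (a deliberate simplification; real resolution belongs to `pip` or the `packaging` library):

```python
def satisfies_compatible_release(version: str, spec: str) -> bool:
    """Check `version` against a `~=X.Y.Z` spec for plain 3-part versions.

    Per PEP 440, `~=X.Y.Z` is equivalent to `>=X.Y.Z, ==X.Y.*`:
    at least the pinned version, and the same release series.
    """
    v = tuple(int(p) for p in version.split('.'))
    s = tuple(int(p) for p in spec.removeprefix('~=').split('.'))
    return v >= s and v[:len(s) - 1] == s[:-1]


if __name__ == '__main__':
    print(satisfies_compatible_release('0.4.7', '~=0.4.4'))    # True
    print(satisfies_compatible_release('1.0.2', '~=0.6.21'))   # False
```

So `ollama` 0.4.7 already satisfied the old `~=0.4.4` pin, whereas `python-pptx` 1.0.2 falls outside `~=0.6.21`'s `0.6.*` series and needed the new `~=1.0.2` spec.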