chansung committed
Commit 40d3c00 · verified · 1 Parent(s): 17f9db0

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +24 -0
  2. app.py +10 -10
  3. requirements.txt +1 -1
README.md CHANGED
@@ -11,3 +11,27 @@ license: apache-2.0
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference# AdaptSum
+ # AdaptSum
+
+ AdaptSum stands for Adaptive Summarization. This project develops an LLM-powered system for dynamic summarization: instead of generating an entirely new summary at each update, the system identifies and modifies only the parts of the existing summary that need to change. The goal is a more efficient and fluid summarization process within a continuous chat interaction with an LLM.
+
+ # Instructions
+
+ 1. Install dependencies
+ ```shell
+ $ pip install -r requirements.txt
+ ```
+
+ 2. Set up the Gemini API key
+ ```shell
+ $ export GEMINI_API_KEY=xxxxx
+ ```
+ > Note that the Gemini API key must be obtained from Google AI Studio. Vertex AI is not supported at the moment, because the Gemini SDK does not yet provide file-upload functionality for Vertex AI.
+
+ 3. Run the Gradio app
+ ```shell
+ $ python main.py # or gradio main.py
+ ```
+
+ # Acknowledgments
+ This project was built during the Vertex sprints held by Google's ML Developer Programs team. We are grateful to have been granted a generous amount of GCP credits for this project.
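
The README's central idea, for illustration: update a summary in place and surface only what changed. The sketch below is hypothetical; `highlight_summary_update` and its inputs are invented names, and `difflib` merely stands in for the Gemini-driven rewrite the app actually performs.

```python
# Hypothetical sketch of the "modify only the necessary parts" idea above.
# difflib stands in for the LLM-driven rewrite; all names are illustrative.
import difflib

def highlight_summary_update(old_summary: str, new_summary: str) -> str:
    """Render the new summary, bolding only the spans that changed."""
    old_words, new_words = old_summary.split(), new_summary.split()
    sm = difflib.SequenceMatcher(None, old_words, new_words)
    pieces = []
    for op, _i1, _i2, j1, j2 in sm.get_opcodes():
        chunk = " ".join(new_words[j1:j2])
        if op == "equal":
            pieces.append(chunk)            # unchanged span, kept as-is
        elif chunk:                         # replaced or inserted span
            pieces.append(f"**{chunk}**")   # highlighted for the UI
    return " ".join(pieces)

old = "The chat covers Gradio basics"
new = "The chat covers Gradio basics and streaming output"
print(highlight_summary_update(old, new))
# -> The chat covers Gradio basics **and streaming output**
```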
app.py CHANGED
@@ -57,12 +57,12 @@ async def echo(message, history, state, persona):

      response_chunks = ""
      model_content_stream = await client.models.generate_content_stream(
-         model=args.model,
-         contents=state['messages'],
-         config=types.GenerateContentConfig(
-             system_instruction=system_instruction, seed=args.seed
-         ),
-     )
+         model=args.model,
+         contents=state['messages'],
+         config=types.GenerateContentConfig(
+             system_instruction=system_instruction, seed=args.seed
+         ),
+     )
      async for chunk in model_content_stream:
          response_chunks += chunk.text
          # when the model generates too fast, Gradio cannot keep up in real time.
@@ -211,10 +211,10 @@ def main(args):
      with gr.Column("chat-window", elem_id="chat-window"):
          gr.ChatInterface(
              multimodal=True,
-             type="messages",
-             fn=None,
-             # additional_inputs=[state, persona],
-             # additional_outputs=[state, summary_diff, summary_md, summary_num],
+             type="messages",
+             fn=echo,
+             additional_inputs=[state, persona],
+             additional_outputs=[state, summary_diff, summary_md, summary_num],
          )

  return demo
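
A minimal, self-contained sketch of the google-genai streaming pattern in the first hunk above, assuming a `GEMINI_API_KEY` environment variable and the pinned `google-genai` package; the model name, system instruction, and seed below are illustrative stand-ins for `args.model`, `system_instruction`, and `args.seed` in app.py.

```python
# Minimal sketch of the streaming call from the first hunk, assuming a
# GEMINI_API_KEY env var; model name, instruction, and seed are stand-ins.
import asyncio
import os

from google import genai
from google.genai import types

async def stream_reply(prompt: str) -> str:
    # `.aio` exposes the async client, matching the awaited
    # generate_content_stream call in app.py.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"]).aio

    response_chunks = ""
    stream = await client.models.generate_content_stream(
        model="gemini-1.5-flash",                  # stand-in for args.model
        contents=prompt,
        config=types.GenerateContentConfig(
            system_instruction="Answer briefly.",  # stand-in persona
            seed=42,                               # stand-in for args.seed
        ),
    )
    async for chunk in stream:
        if chunk.text:                             # some chunks carry no text
            response_chunks += chunk.text
    return response_chunks

if __name__ == "__main__":
    print(asyncio.run(stream_reply("What does AdaptSum do?")))
```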
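
Likewise for the second hunk: a stripped-down sketch of wiring a handler into `gr.ChatInterface` with `additional_inputs`/`additional_outputs`, as this commit does. The `echo` body and the components are simplified stand-ins for the real ones in app.py.

```python
# Stripped-down sketch of the gr.ChatInterface wiring from the second hunk.
# The handler and components are simplified stand-ins for those in app.py.
import gradio as gr

def echo(message, history, state, persona):
    # multimodal=True delivers `message` as {"text": ..., "files": [...]};
    # with additional_outputs, return the reply plus each extra output value.
    state["turns"] = state.get("turns", 0) + 1
    reply = f"({persona}) {message['text']}"
    return reply, state

with gr.Blocks() as demo:
    state = gr.State({})                       # conversation state, as in app.py
    persona = gr.Textbox("helpful assistant", label="Persona")
    gr.ChatInterface(
        multimodal=True,                       # accept text + file uploads
        type="messages",                       # openai-style history entries
        fn=echo,                               # the handler wired in this commit
        additional_inputs=[state, persona],
        additional_outputs=[state],            # extra components updated per turn
    )

demo.launch()
```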
requirements.txt CHANGED
@@ -1,3 +1,3 @@
  google-genai==0.7.0
  toml
- gradio
+ gradio