chansung committed · verified
Commit 22cc657 · 1 Parent(s): 30775d6

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +24 -0
  2. app.py +1 -3
README.md CHANGED
@@ -107,3 +107,27 @@ $ python main.py # or gradio main.py
107
 
108
  # Acknowledgments
109
 This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful to have been granted a generous amount of GCP credits for this project.
110
+ # AdaptSum
111
+
112
+ AdaptSum stands for Adaptive Summarization. This project focuses on developing an LLM-powered system for dynamic summarization. Instead of generating entirely new summaries with each update, the system intelligently identifies and modifies only the necessary parts of the existing summary. This approach aims to create a more efficient and fluid summarization process within a continuous chat interaction with an LLM.
113
+
114
+ # Instructions
115
+
116
+ 1. Install dependencies
117
+ ```shell
118
+ $ pip install -r requirements.txt
119
+ ```
120
+
121
+ 2. Set up the Gemini API key
122
+ ```shell
123
+ $ export GEMINI_API_KEY=xxxxx
124
+ ```
125
+ > Note that the Gemini API key must be obtained from Google AI Studio. Vertex AI is not supported at the moment because the Gemini SDK does not currently provide file-upload functionality for Vertex AI.
126
+
127
+ 3. Run Gradio app
128
+ ```shell
129
+ $ python main.py # or gradio main.py
130
+ ```
131
+
132
+ # Acknowledgments
133
+ This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful to have been granted a generous amount of GCP credits for this project.
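The added README section above describes AdaptSum's core idea: rather than regenerating a summary from scratch on every turn, the model is asked to revise only the parts of the existing summary that new messages affect, using a Gemini API key from Google AI Studio. Below is a minimal sketch of what one such update step could look like with the google-genai SDK; the helper name `update_summary`, the prompt wording, and the model id are illustrative assumptions, not code from this repository.

```python
import os

from google import genai
from google.genai import types

# Assumption: the key comes from Google AI Studio (as the README notes) and is
# exported as GEMINI_API_KEY before the app is launched.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])


def update_summary(existing_summary: str, new_messages: list[str],
                   model: str = "gemini-2.0-flash") -> str:
    """Hypothetical adaptive-summarization step: revise only what changed."""
    prompt = (
        "Current summary of the conversation:\n"
        f"{existing_summary}\n\n"
        "New messages since that summary was written:\n"
        + "\n".join(new_messages)
        + "\n\nRewrite the summary, changing only the parts that the new "
          "messages make outdated or incomplete, and keep the rest as-is."
    )
    response = client.models.generate_content(
        model=model,
        contents=prompt,
        config=types.GenerateContentConfig(temperature=0.2),
    )
    return response.text
```

A call such as `update_summary(summary, ["user: add a deadline of March 3"])` would then return a lightly revised summary rather than a brand-new one, which is the behaviour the README is aiming for.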
app.py CHANGED
@@ -59,9 +59,7 @@ async def echo(message, history, state, persona):
59
  model_content_stream = await client.models.generate_content_stream(
60
  model=args.model,
61
  contents=state['messages'],
62
- config=types.GenerateContentConfig(
63
- system_instruction=system_instruction, seed=args.seed
64
- ),
62
+ config=types.GenerateContentConfig(seed=args.seed),
65
  )
66
  async for chunk in model_content_stream:
67
  response_chunks += chunk.text
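The app.py change above keeps the streaming call but passes only a seed in `GenerateContentConfig`, dropping the `system_instruction` argument. A self-contained sketch of that streaming pattern with the google-genai SDK follows; it assumes an AI Studio key in `GEMINI_API_KEY`, uses the SDK's async surface (`genai.Client(...).aio`), and a placeholder model id, so it illustrates the call shape rather than reproducing the app's exact code.

```python
import asyncio
import os

from google import genai
from google.genai import types


async def stream_reply(messages: list[str],
                       model: str = "gemini-2.0-flash",
                       seed: int = 2025) -> str:
    # Assumption: GEMINI_API_KEY holds a Google AI Studio key; .aio exposes
    # the async client used with `await` / `async for`, as in app.py.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"]).aio

    response_chunks = ""
    # Same shape as the diff: await the stream handle, then iterate chunks,
    # with only the seed set in the generation config.
    stream = await client.models.generate_content_stream(
        model=model,
        contents=messages,
        config=types.GenerateContentConfig(seed=seed),
    )
    async for chunk in stream:
        response_chunks += chunk.text
    return response_chunks


if __name__ == "__main__":
    print(asyncio.run(stream_reply(["Summarize this chat in one sentence."])))
```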