johnstrenio committed on
Commit cdfcf5b · verified · 1 Parent(s): bb85f09

Upload 6 files

Files changed (5)
  1. README.md +8 -6
  2. app.py +310 -0
  3. model.py +57 -0
  4. requirements.txt +9 -0
  5. style.css +16 -0
README.md CHANGED
@@ -1,12 +1,14 @@
 ---
-title: Wedding Assistant
-emoji: 👁
-colorFrom: purple
-colorTo: red
+title: Mistral-7B
+emoji: 😻
+colorFrom: yellow
+colorTo: yellow
 sdk: gradio
-sdk_version: 4.28.3
+sdk_version: 3.50.2
 app_file: app.py
+models: [mistralai/Mistral-7B-v0.1, mistralai/Mistral-7B-Instruct-v0.1]
 pinned: false
+license: mit
 ---

 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,310 @@
+import os
+from typing import Iterator
+
+import gradio as gr
+
+from model import run
+
+# os.environ.get returns a string, so parse it into a real boolean for launch(share=...)
+HF_PUBLIC = os.environ.get("HF_PUBLIC", "0").lower() in ("1", "true", "yes")
+
+DEFAULT_SYSTEM_PROMPT = '''
+You are a digital assistant for Kristen Arndt and John "LJ" Strenio's wedding. You are careful to provide professional responses regarding only the wedding information below or any information you may have about the central Oregon coast, where the wedding is held.
+
+[WEDDING DETAILS]
+Heceta Lighthouse Bed & Breakfast
+Heceta Beach, Oregon
+August 3, 2024
+Welcome!
+We're excited to have everyone come celebrate the wedding of Kristen and LJ! The wedding ceremony will be held Saturday, August 3rd, 2024 at 4:00pm, with guests arriving at 3:00pm at the Heceta Lighthouse Bed & Breakfast at Heceta Beach, Oregon.
+Travel
+For folks traveling from out of state, the best two airport options are Portland Intl Airport and Eugene's Regional Airport. PDX will likely be the move for those coming from the East Coast; however, West Coast-based friends might be able to find comparable flights to Eugene, which allows for a much shorter drive to the coast (~1 hr drive from Eugene, ~3 hrs from Portland).
+Accommodations
+The families will be staying at the Lighthouse Keeper's B&B and in the nearest town of Yachats, 20 minutes to the north. Accommodations in several price ranges exist in Yachats, with our hosts recommending the Overleaf hotel and driftwood inns, though a number of vacation rentals/Airbnb/VRBOs are available as well. Florence, 20 minutes south of the lighthouse, also has a variety of hotels and vacation rentals available, including the Driftwood Shores Resort, Best Western Pier Point Inn, and the River House Inn.
+Local Transportation / Shuttles
+Heceta Beach is a uniquely secluded state park. The day parking lot has a short path that leads to the venue for guests planning on driving themselves to the ceremony. To allow our guests to drink and enjoy themselves, we will be providing a free private shuttle service that will run from several locations in both Yachats and Florence. The shuttle will provide pickups before the ceremony, with both early and late return trips mid-evening and at the end of the night.
+
+Times and Locations:
+Yachats
+Overleaf Resort - 2:30pm
+The Drift Inn - 2:40pm
+
+Florence
+Old Town Inn - 2:30pm
+Driftwood Shores Resort - 2:45pm
+Attire
+Cocktail attire. Come get fancy with us! Your snazziest suit & tie, formal separates, dressy jumpsuits, and cocktail, mid- and floor-length dresses are all welcome. Please no jeans or t-shirts, but it's an outdoor wedding, so you can leave the tux at home too! While Oregon is hot in August, the coast is reliably 20 degrees cooler than inland, often with a cool breeze off the ocean, so keep in mind the evening may get chilly.
+Itinerary
+Friday
+2pm Welcome Beach Party
+Join us for drinks, cornhole, and waves at the beach in Heceta Lighthouse State Park. Directions
+
+7pm Drinks at Ona
+Grab a drink with us following the rehearsal dinner at the Ona Restaurant and Bar in Yachats. Directions
+Saturday
+2:30pm Shuttle service begins from Yachats and Florence
+(see schedule for exact times and locations)
+
+3:00pm Pre-ceremony beer/wine + Oregon cheese
+at the Heceta Head Lighthouse Keeper's House
+
+4:00pm Wedding ceremony, reception directly following.
+
+10:00pm Last shuttles to Florence and Yachats / afterparty
+Registry
+We kindly ask that you consider making a cash contribution in lieu of traditional wedding gifts. (For those with Venmo)
+Dinner
+Dinner will be served family style, with vegetarian options included:
+Appetizers:
+Albacore tuna, mushroom pate bruschetta, green salad
+Main Courses:
+Vegetarian and seafood pasta with pesto, brisket
+
+Note: Dinner will be peanut-free; please contact LJ about additional food allergies.
+Contact
+For questions or additional info, message or email LJ:
+802-734-6892 [email protected]
+
+For Shipping:
+Kristen & John John-Strenio
+11520 SW 98th Ave
+Tigard, OR 97223
+'''
+MAX_MAX_NEW_TOKENS = 4096
+DEFAULT_MAX_NEW_TOKENS = 256
+MAX_INPUT_TOKEN_LENGTH = 4000
+
+DESCRIPTION = """
+# Wedding Assistant
+"""
+
+def clear_and_save_textbox(message: str) -> tuple[str, str]:
+    return '', message
+
+
+def display_input(message: str,
+                  history: list[tuple[str, str]]) -> list[tuple[str, str]]:
+    history.append((message, ''))
+    return history
+
+
+def delete_prev_fn(
+        history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]:
+    try:
+        message, _ = history.pop()
+    except IndexError:
+        message = ''
+    return history, message or ''
+
+
+def generate(
+    message: str,
+    history_with_input: list[tuple[str, str]],
+    system_prompt: str,
+    max_new_tokens: int,
+    temperature: float,
+    top_p: float,
+    top_k: int,
+) -> Iterator[list[tuple[str, str]]]:
+    if max_new_tokens > MAX_MAX_NEW_TOKENS:
+        raise ValueError(f'max_new_tokens must not exceed {MAX_MAX_NEW_TOKENS}')
+
+    history = history_with_input[:-1]
+    generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k)
+    try:
+        first_response = next(generator)
+        yield history + [(message, first_response)]
+    except StopIteration:
+        yield history + [(message, '')]
+    for response in generator:
+        yield history + [(message, response)]
+
+
+def process_example(message: str) -> tuple[str, list[tuple[str, str]]]:
+    generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 1024, 1, 0.95, 50)
+    for x in generator:
+        pass
+    return '', x
+
+
+def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None:
+    # Rough length check: counts message characters plus chat turns rather than true tokens.
+    input_token_length = len(message) + len(chat_history)
+    if input_token_length > MAX_INPUT_TOKEN_LENGTH:
+        raise gr.Error(f'The accumulated input is too long ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH}). Clear your chat history and try again.')
+
+
+with gr.Blocks(css='style.css') as demo:
+    gr.Markdown(DESCRIPTION)
+    # gr.DuplicateButton(value='Duplicate Space for private use',
+    #                    elem_id='duplicate-button')
+
+    with gr.Group():
+        chatbot = gr.Chatbot(label='Discussion')
+        with gr.Row():
+            textbox = gr.Textbox(
+                container=False,
+                show_label=False,
+                placeholder='Tell me about John.',
+                scale=10,
+            )
+            submit_button = gr.Button('Submit',
+                                      variant='primary',
+                                      scale=1,
+                                      min_width=0)
+    with gr.Row():
+        retry_button = gr.Button('🔄 Retry', variant='secondary')
+        undo_button = gr.Button('↩️ Undo', variant='secondary')
+        clear_button = gr.Button('🗑️ Clear', variant='secondary')
+
+    saved_input = gr.State()
+
+    with gr.Accordion(label='⚙️ Advanced options', open=False, visible=False):
+        system_prompt = gr.Textbox(label='System prompt',
+                                   value=DEFAULT_SYSTEM_PROMPT,
+                                   lines=0,
+                                   interactive=False)
+        max_new_tokens = gr.Slider(
+            label='Max new tokens',
+            minimum=1,
+            maximum=MAX_MAX_NEW_TOKENS,
+            step=1,
+            value=DEFAULT_MAX_NEW_TOKENS,
+        )
+        temperature = gr.Slider(
+            label='Temperature',
+            minimum=0.1,
+            maximum=4.0,
+            step=0.1,
+            value=0.1,
+        )
+        top_p = gr.Slider(
+            label='Top-p (nucleus sampling)',
+            minimum=0.05,
+            maximum=1.0,
+            step=0.05,
+            value=0.9,
+        )
+        top_k = gr.Slider(
+            label='Top-k',
+            minimum=1,
+            maximum=1000,
+            step=1,
+            value=10,
+        )
+
+    textbox.submit(
+        fn=clear_and_save_textbox,
+        inputs=textbox,
+        outputs=[textbox, saved_input],
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=display_input,
+        inputs=[saved_input, chatbot],
+        outputs=chatbot,
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=check_input_token_length,
+        inputs=[saved_input, chatbot, system_prompt],
+        api_name=False,
+        queue=False,
+    ).success(
+        fn=generate,
+        inputs=[
+            saved_input,
+            chatbot,
+            system_prompt,
+            max_new_tokens,
+            temperature,
+            top_p,
+            top_k,
+        ],
+        outputs=chatbot,
+        api_name=False,
+    )
+
+    button_event_preprocess = submit_button.click(
+        fn=clear_and_save_textbox,
+        inputs=textbox,
+        outputs=[textbox, saved_input],
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=display_input,
+        inputs=[saved_input, chatbot],
+        outputs=chatbot,
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=check_input_token_length,
+        inputs=[saved_input, chatbot, system_prompt],
+        api_name=False,
+        queue=False,
+    ).success(
+        fn=generate,
+        inputs=[
+            saved_input,
+            chatbot,
+            system_prompt,
+            max_new_tokens,
+            temperature,
+            top_p,
+            top_k,
+        ],
+        outputs=chatbot,
+        api_name=False,
+    )
+
+    retry_button.click(
+        fn=delete_prev_fn,
+        inputs=chatbot,
+        outputs=[chatbot, saved_input],
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=display_input,
+        inputs=[saved_input, chatbot],
+        outputs=chatbot,
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=generate,
+        inputs=[
+            saved_input,
+            chatbot,
+            system_prompt,
+            max_new_tokens,
+            temperature,
+            top_p,
+            top_k,
+        ],
+        outputs=chatbot,
+        api_name=False,
+    )
+
+    undo_button.click(
+        fn=delete_prev_fn,
+        inputs=chatbot,
+        outputs=[chatbot, saved_input],
+        api_name=False,
+        queue=False,
+    ).then(
+        fn=lambda x: x,
+        inputs=[saved_input],
+        outputs=textbox,
+        api_name=False,
+        queue=False,
+    )
+
+    clear_button.click(
+        fn=lambda: ([], ''),
+        outputs=[chatbot, saved_input],
+        queue=False,
+        api_name=False,
+    )
+
+demo.queue(max_size=32).launch(share=HF_PUBLIC, show_api=False)
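Note on the event wiring above: every submit/click handler follows the same Gradio pattern of clearing the textbox into a gr.State, echoing the user turn into the chatbot, and then streaming generate()'s output back as a growing chat history. The following stripped-down sketch is not part of this commit; component names and the canned reply are illustrative only, and it shows the same .then() chaining with a fake streaming handler:

```python
import time
import gradio as gr


def save_and_clear(message):
    # Mirrors clear_and_save_textbox: empty the box, stash the message in State.
    return "", message


def show_user_turn(message, history):
    # Mirrors display_input: append the user turn with an empty bot slot.
    return history + [(message, "")]


def fake_stream(message, history):
    # Stand-in for generate(): stream a canned reply word by word.
    reply = ""
    for word in f"You said: {message}".split():
        reply += word + " "
        time.sleep(0.05)
        yield history[:-1] + [(message, reply.strip())]


with gr.Blocks() as sketch:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox()
    saved = gr.State()
    textbox.submit(save_and_clear, textbox, [textbox, saved], queue=False) \
           .then(show_user_turn, [saved, chatbot], chatbot, queue=False) \
           .then(fake_stream, [saved, chatbot], chatbot)

if __name__ == "__main__":
    sketch.queue().launch()
```

Keeping queue=False on the bookkeeping steps makes them instantaneous, while the streaming step runs through the queue so partial results can be pushed to the client, which matches how app.py wires its own chains.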
model.py ADDED
@@ -0,0 +1,57 @@
+import os
+from typing import Iterator
+
+from text_generation import Client
+
+model_id = 'mistralai/Mistral-7B-Instruct-v0.1'
+
+API_URL = "https://api-inference.huggingface.co/models/" + model_id
+HF_TOKEN = os.environ.get("HF_READ_TOKEN", None)
+
+client = Client(
+    API_URL,
+    headers={"Authorization": f"Bearer {HF_TOKEN}"},
+)
+EOS_STRING = "</s>"
+EOT_STRING = "<EOT>"
+
+
+def get_prompt(message: str, chat_history: list[tuple[str, str]],
+               system_prompt: str) -> str:
+    # Llama-2-style chat template: system prompt inside <<SYS>> tags, turns wrapped in [INST] ... [/INST]
+    texts = [f'<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
+    # The first user input is _not_ stripped
+    do_strip = False
+    for user_input, response in chat_history:
+        user_input = user_input.strip() if do_strip else user_input
+        do_strip = True
+        texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
+    message = message.strip() if do_strip else message
+    texts.append(f'{message} [/INST]')
+    return ''.join(texts)
+
+
+def run(message: str,
+        chat_history: list[tuple[str, str]],
+        system_prompt: str,
+        max_new_tokens: int = 1024,
+        temperature: float = 0.1,
+        top_p: float = 0.9,
+        top_k: int = 50) -> Iterator[str]:
+    prompt = get_prompt(message, chat_history, system_prompt)
+
+    generate_kwargs = dict(
+        max_new_tokens=max_new_tokens,
+        do_sample=True,
+        top_p=top_p,
+        top_k=top_k,
+        temperature=temperature,
+    )
+    stream = client.generate_stream(prompt, **generate_kwargs)
+    output = ""
+    for response in stream:
+        # Stop streaming as soon as an end-of-sequence marker appears in the generated text.
+        if any(end_token in response.token.text for end_token in (EOS_STRING, EOT_STRING)):
+            return output
+        else:
+            output += response.token.text
+            yield output
+    return output
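get_prompt builds a Llama-2-style chat prompt (system prompt inside <<SYS>> tags) even though the target model is Mistral-7B-Instruct, whose own template uses bare [INST] ... [/INST] blocks; the Inference API just receives it as raw text. A quick illustration of the string it produces, with made-up messages (importing model only constructs the API client, so no request is made until run() is called):

```python
from model import get_prompt

history = [("Where is the ceremony?", "At the Heceta Lighthouse Bed & Breakfast.")]
prompt = get_prompt("What time should I arrive?", history, "You are a wedding assistant.")
print(prompt)
# <s>[INST] <<SYS>>
# You are a wedding assistant.
# <</SYS>>
#
# Where is the ceremony? [/INST] At the Heceta Lighthouse Bed & Breakfast. </s><s>[INST] What time should I arrive? [/INST]
```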
requirements.txt ADDED
@@ -0,0 +1,9 @@
+accelerate
+bitsandbytes
+gradio
+protobuf
+scipy
+sentencepiece
+torch
+text_generation
+git+https://github.com/huggingface/transformers@main
style.css ADDED
@@ -0,0 +1,16 @@
+h1 {
+  text-align: center;
+}
+
+#duplicate-button {
+  margin: auto;
+  color: white;
+  background: #1565c0;
+  border-radius: 100vh;
+}
+
+#component-0 {
+  max-width: 900px;
+  margin: auto;
+  padding-top: 1.5rem;
+}