import gradio as gr
from prompt import iface_prompt
from chat import iface_chat

with gr.Blocks() as iface:
    gr.Markdown("""# Petals playground
**Let's play with prompts and inference settings for BLOOM and BLOOMZ 176B models!**
This space uses websocket API of [chat.petals.ml](http://chat.petals.ml). Health status of Petals network [lives here](http://health.petals.ml).
Do NOT talk to BLOOM as an entity, it's not a chatbot but a webpage/blog/article completion model.
For the best results: MIMIC a few sentences of a webpage similar to the content you want to generate.
BLOOMZ performs better in chat mode and understands the instructions better.""")

    gr.TabbedInterface([iface_prompt, iface_chat], ["Prompt mode", "Chat mode"])

# Queues are required to enable generators
iface.queue(concurrency_count=5, max_size=50)
iface.launch(show_error=True)