rmayormartins committed on
Commit 1442014 • 1 Parent(s): 533cdcf

Uploading files
Files changed (3)
  1. README.md +30 -6
  2. app.py +52 -0
  3. requirements.txt +4 -0
README.md CHANGED
@@ -1,13 +1,37 @@
  ---
- title: My Llama3.1 Groq
- emoji: 🐠
- colorFrom: green
  colorTo: green
  sdk: gradio
- sdk_version: 4.41.0
  app_file: app.py
  pinned: false
- license: ecl-2.0
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: My-Llama3.1-Groq
+ emoji: 🦙🤖🆙
+ colorFrom: blue
  colorTo: green
  sdk: gradio
+ sdk_version: "4.12.0"
  app_file: app.py
  pinned: false
  ---

+
+ ## This Project
+
+ This project is a chatbot powered by the Groq Cloud API using the `Llama-3.1-70b-versatile` model.
+
+ The chatbot leverages Groq's cloud API, specifically the `Llama-3.1-70b-versatile` model, to provide efficient and accurate responses. The interface is built with Gradio, allowing for easy interaction and deployment.
+
+ ## More Information
+
+ - **Developer:** Ramon Mayor Martins, Ph.D. (2024)
+ - **Email:** rmayormartins at: gmail.com
+ - **Homepage:** [rmayormartins.github.io](https://rmayormartins.github.io/)
+ - **Twitter:** [@rmayormartins](https://twitter.com/rmayormartins)
+ - **GitHub:** [rmayormartins](https://github.com/rmayormartins)
+
+ ## Special Thanks
+
+ A special thank you to:
+
+ - **Meta** for developing the Llama models. Learn more at [Llama 3](https://llama.meta.com/llama3/).
+ - **Groq** for providing the powerful cloud API that makes this project possible. Visit [Groq](https://groq.com/) for more information.
+ - **Federal Institute of Santa Catarina (IFSC):** [IFSC](https://www.ifsc.edu.br/)
+
+ ---
+
+ Feel free to explore, interact, and contribute!
app.py ADDED
@@ -0,0 +1,52 @@
+ import os
+ from groq import Groq
+ import gradio as gr
+
+ # Read the Groq API key from the environment (Space secret)
+ api_key = os.getenv("GROQ_API_KEY2")
+
+ # Groq client
+ client = Groq(api_key=api_key)
+
+ # System prompt prepended to every conversation
+ system_prompt = {
+     "role": "system",
+     "content": "You are a useful assistant. You reply with efficient answers."
+ }
+
+ # Streaming chat handler for Gradio
+ async def chat_groq(message, history):
+     messages = [system_prompt]
+
+     # Flatten Gradio's (user, assistant) history pairs into the message list
+     for msg in history:
+         messages.append({"role": "user", "content": str(msg[0])})
+         messages.append({"role": "assistant", "content": str(msg[1])})
+
+     messages.append({"role": "user", "content": str(message)})
+
+     response_content = ''
+
+     # `llama-3.1-70b-versatile` model, streamed
+     stream = client.chat.completions.create(
+         model="llama-3.1-70b-versatile",
+         messages=messages,
+         max_tokens=1024,
+         temperature=1.3,
+         stream=True
+     )
+
+     # Yield the accumulated response as each chunk arrives
+     for chunk in stream:
+         content = chunk.choices[0].delta.content
+         if content:
+             response_content += content
+             yield response_content
+
+ # Gradio interface
+ with gr.Blocks(theme=gr.themes.Monochrome()) as demo:
+     gr.ChatInterface(chat_groq,
+                      clear_btn=None,
+                      undo_btn=None,
+                      retry_btn=None)
+
+ demo.queue()
+ demo.launch()
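The history flattening in `chat_groq` above is pure Python and can be sketched in isolation, without any Groq call. This is a minimal sketch with a hypothetical helper name (`build_messages`), not part of the committed app:

```python
# Sketch of the message-list construction used in chat_groq,
# pulled out as a standalone, testable helper (hypothetical name).
system_prompt = {
    "role": "system",
    "content": "You are a useful assistant. You reply with efficient answers."
}

def build_messages(message, history):
    """Flatten Gradio's (user, assistant) history pairs into the
    OpenAI-style message list the Groq chat completions API expects."""
    messages = [system_prompt]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": str(user_msg)})
        messages.append({"role": "assistant", "content": str(assistant_msg)})
    messages.append({"role": "user", "content": str(message)})
    return messages
```

The resulting list always starts with the system prompt and ends with the newest user turn, which is the ordering the streaming call relies on.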
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ groq
+ gradio==4.29.0
+ transformers
+ torch
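Since app.py reads its key from the `GROQ_API_KEY2` environment variable (a Space secret on Hugging Face), running the app locally would look roughly like this — a sketch, assuming a valid Groq key; the key value shown is a placeholder:

```shell
# Placeholder key -- substitute a real Groq API key
export GROQ_API_KEY2="gsk_..."
pip install -r requirements.txt
python app.py
```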