Runtime error
Commit: this and that

Files changed:
- README.md +22 -0
- metaprompts.yml +55 -64
- pyproject.toml +2 -2
- requirements-dev.lock +1 -11
- requirements.lock +1 -11
- src/prompt_teacher/app.py +7 -12
- src/prompt_teacher/callbacks.py +20 -22
- src/prompt_teacher/messages.py +1 -0
README.md CHANGED

@@ -1,5 +1,7 @@
 # 🤖 Prompt Teacher 📚✨
 
+The **prompt teacher** is an interactive and educational prompt engineering interface for LLMs that teaches users how to craft ✍️, refine 🔧, and optimize 🚀 prompts to achieve the most effective and targeted responses from LLMs.
+
 <img src="thumbnail.png" alt="Screenshot of the Prompt Teacher"/>
 
 ## Quickstart 🚀
@@ -11,6 +13,26 @@
 - **GitHub:** [pwenker/prompt_teacher](https://github.com/pwenker/prompt_teacher)
 - **Hugging Face Spaces:** [pwenker/prompt_teacher](https://huggingface.co/spaces/pwenker/prompt_teacher)
 
+## Metaprompts Overview
+
+The following metaprompts are currently part of the prompt teacher.
+
+| 📛 **Name** | 📖 **Explanation** | ✍️ **Example Prompt** | 💡 **Example Prompt Explanation** |
+|---|---|---|---|
+| **Expand with details** | Expands a prompt to include more detailed instructions and context. | Tell me about dogs. | This prompt is very vague and lacks context, making it ideal for expansion to guide the LLM more effectively. |
+| **Apply feedback** | Improves a prompt based on specific feedback provided. | Describe the process of photosynthesis. | Feedback might suggest making the prompt more accessible for younger audiences or more detailed for academic use. |
+| **Simply condense prompt** | Condenses a prompt to make it more succinct while retaining its essential request. | Write a funny joke that makes people laugh about something very funny. It should be hilarious. | This prompt can be condensed by removing redundant information. |
+| **Simply improve prompt** | Improves a prompt to enhance clarity and effectiveness. | Tell me how to cook rice. | This prompt can be improved by specifying the type of cuisine or cooking method. |
+| **Create sequential task list** | Structures a prompt to guide the LLM through a series of sequential tasks. | Plan a birthday party. | This prompt can be structured to outline steps such as choosing a theme, preparing a guest list, and organizing activities. |
+| **Elicit creative response** | Transforms a prompt to inspire creativity and elicit imaginative responses. | Write a story about a lost kitten. | The prompt can be revised to encourage more descriptive or emotional storytelling. |
+| **Include hypothetical scenario** | Tailors a prompt to include a specific hypothetical scenario for detailed exploration. | The danger of Artificial General Intelligence | This prompt can be tailored to explore specific hypothetical scenarios to provide depth and context. |
+| **Focus on ethics** | Reframes a prompt to focus on ethical considerations or moral dilemmas. | Genetic engineering in humans. | This prompt can be reframed to focus on the ethical considerations or moral dilemmas involved. |
+| **Add role prompting** | Adds a role to the prompt to improve the response. | Write a short song. | By adding an expert role, we can potentially improve the quality of the created song. |
+| **Add delimiters for clarity** | Adds clear delimiters to a prompt to separate and organize different sections or instructions, enhancing readability and structure. | Summarize this text {text} with bulletpoints. Be concise | This prompt can benefit from clear delimiters to separate instructions or sections, making it easier for the LLM to follow and respond systematically. |
+| **Incorporate chain of thought reasoning** | Incorporates chain of thought reasoning to guide the LLM through a logical sequence of thoughts for complex problem-solving. | How can we reduce traffic congestion in urban areas? | This prompt can benefit from chain of thought reasoning to break down the problem into manageable parts and explore various solutions systematically. |
+| **Comprehensive prompt refinement** | Integrates various techniques to refine, expand, and adapt prompts for LLMs, ensuring clarity, specificity, and engagement tailored to the intended purpose. | Write a brief history of Artificial Intelligence | This prompt can be improved by specifying aspects such as the depth of detail, areas of focus, and desired structure. |
+
 ## Local Deployment 🚀
 
 ### Prerequisites 📜
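Each metaprompt in the table above pairs a name with a template whose placeholders are filled with the user's prompt. A minimal sketch of that substitution step (the function name and template text here are illustrative, not the app's actual code):

```python
# Minimal sketch: filling a metaprompt template with a user's prompt.
# The template text below is illustrative; the real templates live in metaprompts.yml.
def apply_metaprompt(template: str, **fields: str) -> str:
    """Substitute the named placeholders (e.g. {prompt}) into a metaprompt template."""
    return template.format(**fields)


expand_template = (
    "Expand the following prompt with more detailed instructions and context:\n\n{prompt}"
)

filled = apply_metaprompt(expand_template, prompt="Tell me about dogs.")
print(filled)
```

The same mechanism covers multi-field templates such as "Apply feedback", which additionally substitutes a `{feedback}` placeholder.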
metaprompts.yml CHANGED

@@ -10,60 +10,6 @@ Metaprompts:
     {prompt}
 
     Add necessary details and context to guide the LLM towards a more comprehensive and targeted response.
-- explanation: "Refines a prompt according to best practices in prompt engineering for LLMs."
-  example_prompt: "Write a brief history of the internet."
-  example_prompt_explanation: "This prompt can be improved by specifying aspects such as the depth of detail, areas of focus, and desired structure."
-  name: "Apply prompt engineering best practices"
-  template: |
-    You are an expert Prompt Writer for Large Language Models.
-
-    Your goal is to improve the prompt given below:
-
-    Prompt: {prompt}
-
-    Here are several tips on writing great prompts:
-
-    - Start the prompt by stating that it is an expert in the subject.
-    - Put instructions at the beginning of the prompt and use ### or ``` to separate the instruction and context
-    - Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc
-
-    Here's an example of a great prompt:
-
-    As an expert in theoretical physics, provide a comprehensive explanation of the theory of relativity.
-
-    ### Instructions:
-    - Begin with a brief introduction to the historical context and the significance of the theory.
-    - Clearly differentiate between the Special Theory of Relativity and the General Theory of Relativity.
-    - Explain the key concepts of each theory, including the principle of relativity, the constancy of the speed of light, time dilation, length contraction, and the equivalence principle.
-    - Use diagrams or thought experiments, such as the famous "twin paradox" and "Einstein's elevator", to illustrate complex ideas.
-    - Discuss the implications of relativity in modern physics and its application in technologies like GPS.
-    - Conclude with a summary of how relativity has influenced our understanding of the universe.
-    - Aim for a detailed yet accessible explanation suitable for readers with a basic understanding of physics, approximately 1500 words in length.
-
-    Example:
-
-    """
-    Comprehensive Explanation of the Theory of Relativity
-
-    Introduction:
-    The theory of relativity, developed primarily by Albert Einstein in the early 20th century, revolutionized our understanding of space, time, and gravity. It consists of two parts: the Special Theory of Relativity and the General Theory of Relativity. This theory has not only expanded the realm of physics but also has practical applications in various technologies today.
-
-    Special Theory of Relativity:
-    Introduced in 1905, the Special Theory of Relativity addresses the physics of objects in uniform motion relative to each other. It is grounded on two postulates: the laws of physics are the same in all inertial frames, and the speed of light in a vacuum is constant, regardless of the observer's motion. Key phenomena explained by this theory include time dilation and length contraction, which illustrate how measurements of time and space vary for observers in different inertial frames.
-
-    General Theory of Relativity:
-    Einstein's General Theory of Relativity, published in 1915, extends the principles of the Special Theory to include acceleration and gravitation. It posits that massive objects cause a distortion in space-time, which is felt as gravity. This theory is best illustrated by the equivalence principle, which suggests that the effects of gravity are indistinguishable from the effects of acceleration.
-
-    Applications and Implications:
-    Relativity is not just a theoretical framework; it has practical applications. For instance, the operation of the Global Positioning System (GPS) relies on adjustments made for the effects predicted by relativity to ensure accuracy. Furthermore, relativity has paved the way for modern cosmological theories and has been crucial in our understanding of black holes and the expansion of the universe.
-
-    Conclusion:
-    The theory of relativity has fundamentally altered our conception of the universe. Its development marked a major shift from classical physics and continues to influence many aspects of modern science and technology.
-    """
-
-    Now, improve the prompt.
-
-    IMPROVED PROMPT:
 - explanation: "Improves a prompt based on specific feedback provided."
   example_prompt: "Describe the process of photosynthesis."
   example_prompt_explanation: "Feedback might suggest making the prompt more accessible for younger audiences or more detailed for academic use."
@@ -76,16 +22,6 @@ Metaprompts:
     given this feedback:
 
     {feedback}
-- explanation: "Adapts a prompt to suit different audiences by modifying language, tone, and technical level."
-  example_prompt: "Explain the theory of relativity."
-  example_prompt_explanation: "The prompt can be adapted for different educational levels or non-specialist audiences."
-  name: "Adapt for different audience"
-  template: |
-    Adapt the following prompt for the following audience: {audience}:
-
-    {prompt}
-
-    Modify the language, tone, and technical level to suit the needs and understanding of {audience}.
 - explanation: "Condenses a prompt to make it more succinct while retaining its essential request."
   example_prompt: "Write a funny joke that makes people laugh about something very funny. It should be hilarious."
   example_prompt_explanation: "This prompt can be condensed by removing redundant information"
@@ -152,3 +88,58 @@ Metaprompts:
     {prompt}
 
     Improved prompt:
+- explanation: "Adds clear delimiters to a prompt to separate and organize different sections or instructions, enhancing readability and structure."
+  example_prompt: "Summarize this text {text} with bulletpoints. Be concise"
+  example_prompt_explanation: "This prompt can benefit from clear delimiters to separate instructions or sections, making it easier for the LLM to follow and respond systematically."
+  name: "Add delimiters for clarity"
+  template: |
+    Add delimiters (e.g. section titles, bullet points, or triple quotation marks) to this prompt to clearly separate and organize different sections or instructions and thereby enhance its structure and readability:
+
+    {prompt}
+- explanation: "Incorporates chain of thought reasoning to guide the LLM through a logical sequence of thoughts for complex problem-solving."
+  example_prompt: "How can we reduce traffic congestion in urban areas?"
+  example_prompt_explanation: "This prompt can benefit from chain of thought reasoning to break down the problem into manageable parts and explore various solutions systematically."
+  name: "Incorporate chain of thought reasoning"
+  template: |
+    Incorporate chain of thought reasoning into the following prompt:
+
+    {prompt}
+
+    Break down the problem into a logical sequence of thoughts, exploring each aspect step by step to guide the LLM towards a comprehensive solution.
+- explanation: This metaprompt integrates various techniques to refine, expand, and adapt prompts for LLMs, ensuring clarity, specificity, and engagement tailored to the intended purpose.
+  example_prompt: "Write a brief history of Artificial Intelligence"
+  example_prompt_explanation: "This prompt can be improved by specifying aspects such as the depth of detail, areas of focus, and desired structure."
+  name: Comprehensive prompt refinement
+  template: |
+    Your goal is to enhance the prompt given below by applying best practices in prompt engineering, including expansion for detail, adaptation for specific audiences, incorporation of feedback, and structuring for clarity and engagement.
+
+    ### Original Prompt:
+    {prompt}
+
+    ### Improvement Strategies:
+    1. **Specify the Expertise**: Clearly state if the prompt should be answered by an expert in a specific field.
+    2. **Add Detailed Instructions**: Include comprehensive instructions at the beginning of the prompt. Use markdown like `###` or triple backticks ``` to separate instruction and context.
+    3. **Expand for Detail**: Add necessary details and context to guide the LLM towards a more comprehensive and targeted response.
+    4. **Structure Sequentially**: If applicable, organize the prompt to guide the LLM through a series of sequential tasks or thought processes.
+    5. **Elicit Creativity**: If the prompt is creative, encourage imaginative and engaging responses.
+    6. **Include Hypothetical Scenarios**: For topics that benefit from depth, add specific scenarios that the LLM can explore.
+    7. **Focus on Ethics**: If relevant, reframe the prompt to emphasize ethical considerations or moral dilemmas.
+    8. **Add Role Prompting**: Introduce a role (e.g., an expert or a specific character) to potentially enhance the quality and relevance of the response.
+    9. **Add Delimiters for Clarity**: Use clear delimiters to organize different sections or instructions, enhancing readability and structure.
+    10. **Incorporate Chain of Thought Reasoning**: For complex problems, break down the issue into a logical sequence of thoughts to guide the LLM systematically.
+
+    ### Example of an Improved Prompt:
+    ```
+    As an expert in environmental science, provide a detailed analysis of the impact of climate change on global agriculture.
+
+    ### Instructions:
+    - Begin with an overview of the current scientific consensus on climate change.
+    - Discuss the specific ways in which changing climates are affecting agricultural practices worldwide.
+    - Include case studies or examples from different continents.
+    - Analyze potential future trends and suggest sustainable agricultural practices.
+    - Conclude with recommendations for policymakers.
+    - Target this explanation to an audience with a basic understanding of environmental science, aiming for about 2000 words.
+    ```
+
+    ### Improved Prompt:
+    ```
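Elsewhere in this commit, `callbacks.py` looks these entries up by name via `metaprompts_dict[metaprompt].template`. A plausible sketch of how a list of records shaped like the YAML entries above could be turned into that lookup table (the `Metaprompt` dataclass and abbreviated field values are assumptions; the real loader may differ):

```python
from dataclasses import dataclass


@dataclass
class Metaprompt:
    # Field names follow the YAML keys used in metaprompts.yml.
    name: str
    explanation: str
    example_prompt: str
    example_prompt_explanation: str
    template: str


# One abbreviated entry mirroring the "Add delimiters for clarity" record above.
metaprompts = [
    Metaprompt(
        name="Add delimiters for clarity",
        explanation="Adds clear delimiters to a prompt to separate sections.",
        example_prompt="Summarize this text {text} with bulletpoints. Be concise",
        example_prompt_explanation="Benefits from clear delimiters between sections.",
        template="Add delimiters to this prompt to enhance its structure:\n\n{prompt}",
    ),
]

# Name -> record lookup, analogous to the app's metaprompts_dict.
metaprompts_dict = {mp.name: mp for mp in metaprompts}
print(metaprompts_dict["Add delimiters for clarity"].template)
```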
pyproject.toml CHANGED

@@ -7,10 +7,10 @@ authors = [
 ]
 dependencies = [
     "gradio>=4.31.4",
-    "langchain>=0.2.0",
-    "langchain-openai>=0.1.7",
     "langchain-anthropic>=0.1.13",
     "pydantic-settings>=2.2.1",
+    "langchain>=0.2.1",
+    "langchain-openai>=0.1.7",
 ]
 readme = "README.md"
 requires-python = ">= 3.11"
requirements-dev.lock CHANGED

@@ -43,8 +43,6 @@ contourpy==1.2.1
     # via matplotlib
 cycler==0.12.1
     # via matplotlib
-dataclasses-json==0.6.6
-    # via langchain
 defusedxml==0.7.1
     # via langchain-anthropic
 distro==1.9.0
@@ -118,7 +116,7 @@ jsonschema-specifications==2023.12.1
     # via jsonschema
 kiwisolver==1.4.5
     # via matplotlib
-langchain==0.2.
+langchain==0.2.1
     # via prompt-teacher
 langchain-anthropic==0.1.13
     # via prompt-teacher
@@ -139,8 +137,6 @@ markdown-it-py==3.0.0
 markupsafe==2.1.5
     # via gradio
     # via jinja2
-marshmallow==3.21.2
-    # via dataclasses-json
 matplotlib==3.9.0
     # via gradio
 mdurl==0.1.2
@@ -148,8 +144,6 @@ mdurl==0.1.2
 multidict==6.0.5
     # via aiohttp
     # via yarl
-mypy-extensions==1.0.0
-    # via typing-inspect
 numpy==1.26.4
     # via altair
     # via contourpy
@@ -169,7 +163,6 @@ packaging==23.2
     # via gradio-client
     # via huggingface-hub
     # via langchain-core
-    # via marshmallow
     # via matplotlib
 pandas==2.2.2
     # via altair
@@ -273,9 +266,6 @@ typing-extensions==4.11.0
     # via pydantic-core
     # via sqlalchemy
     # via typer
-    # via typing-inspect
-typing-inspect==0.9.0
-    # via dataclasses-json
 tzdata==2024.1
     # via pandas
 ujson==5.10.0
requirements.lock CHANGED

@@ -43,8 +43,6 @@ contourpy==1.2.1
     # via matplotlib
 cycler==0.12.1
     # via matplotlib
-dataclasses-json==0.6.6
-    # via langchain
 defusedxml==0.7.1
     # via langchain-anthropic
 distro==1.9.0
@@ -118,7 +116,7 @@ jsonschema-specifications==2023.12.1
     # via jsonschema
 kiwisolver==1.4.5
     # via matplotlib
-langchain==0.2.
+langchain==0.2.1
     # via prompt-teacher
 langchain-anthropic==0.1.13
     # via prompt-teacher
@@ -139,8 +137,6 @@ markdown-it-py==3.0.0
 markupsafe==2.1.5
     # via gradio
     # via jinja2
-marshmallow==3.21.2
-    # via dataclasses-json
 matplotlib==3.9.0
     # via gradio
 mdurl==0.1.2
@@ -148,8 +144,6 @@ mdurl==0.1.2
 multidict==6.0.5
     # via aiohttp
     # via yarl
-mypy-extensions==1.0.0
-    # via typing-inspect
 numpy==1.26.4
     # via altair
     # via contourpy
@@ -169,7 +163,6 @@ packaging==23.2
     # via gradio-client
     # via huggingface-hub
     # via langchain-core
-    # via marshmallow
     # via matplotlib
 pandas==2.2.2
     # via altair
@@ -273,9 +266,6 @@ typing-extensions==4.11.0
     # via pydantic-core
     # via sqlalchemy
     # via typer
-    # via typing-inspect
-typing-inspect==0.9.0
-    # via dataclasses-json
 tzdata==2024.1
     # via pandas
 ujson==5.10.0
src/prompt_teacher/app.py CHANGED

@@ -7,7 +7,7 @@ from prompt_teacher.metaprompts import metaprompts
 
 with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
     gr.Markdown("### 🤖 Prompt Teacher 📚✨")
-    with gr.Accordion("ℹ️ Info: Code 🐍 and Documentation 📚", open=
+    with gr.Accordion("ℹ️ Info: Code 🐍 and Documentation 📚", open=True):
         gr.Markdown(
             "Can be found at: [Github: pwenker/prompt_teacher](https://github.com/pwenker/prompt_teacher) 🚀✨"
         )
@@ -45,10 +45,11 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
         label="Large Language Model",
         info="Select Large Language Model",
         choices=[
+            ("gpt-4o", "gpt-4o"),
             ("gpt-4-turbo", "gpt-4-turbo"),
             ("claude-3-opus", "claude-3-opus-20240229"),
         ],
-        value="gpt-
+        value="gpt-4o",
     )
     api_key = gr.Textbox(
         placeholder="Paste in your API key (sk-...)",
@@ -60,7 +61,7 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
     metaprompt = gr.Radio(
         label="Improvements",
         info="Select how the prompt should be improved",
-        value="
+        value="Comprehensive prompt refinement",
         choices=[mp.name.replace("_", " ").capitalize() for mp in metaprompts],
     )
     feedback = gr.Textbox(
@@ -68,11 +69,6 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
         info="Write your own feedback to be used to improve the prompt",
         visible=False,
     )
-    audience = gr.Textbox(
-        label="Audience",
-        info="Select the audience for the prompt",
-        visible=False,
-    )
 
     improved_prompt = gr.Textbox(label="Improved Prompt", visible=False)
     examples = gr.Examples(
@@ -83,8 +79,8 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
 
     metaprompt.change(
         fn=update_widgets,
-        inputs=[metaprompt],
-        outputs=[improve_btn, feedback
+        inputs=[metaprompt, feedback],
+        outputs=[improve_btn, feedback],
     ).success(
         lambda: [gr.Button(visible=False), gr.Button(visible=False)],
         None,
@@ -102,7 +98,6 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
             prompt,
             metaprompt,
             feedback,
-            audience,
             prompt_teacher,
         ],
         outputs=[improved_prompt, prompt_teacher],
@@ -132,4 +127,4 @@ with gr.Blocks(title="Prompt Teacher", theme=gr.themes.Soft()) as gradio_app:
     )
 
 if __name__ == "__main__":
-    gradio_app.launch(favicon_path="robot.svg")
+    gradio_app.queue(default_concurrency_limit=10).launch(favicon_path="robot.svg")
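The rewired `metaprompt.change` handler now passes the feedback text through `update_widgets`, which only shows the feedback box for the "Apply feedback" metaprompt. The visibility rule itself is plain Python and can be sketched without Gradio (a simplified stand-in for the `update_widgets` callback changed in this commit; the dict return shape is illustrative — the real function returns Gradio component updates):

```python
def widget_state(metaprompt: str, feedback: str) -> dict:
    """Simplified stand-in for update_widgets: compute the improve-button
    style and feedback-box visibility from the selected metaprompt."""
    feedback_visible = metaprompt == "Apply feedback"
    return {
        "button_variant": "primary" if metaprompt else "secondary",
        "feedback_visible": feedback_visible,
        # Preserve the user's feedback text only while the box is shown.
        "feedback_value": feedback if feedback_visible else "",
    }


print(widget_state("Apply feedback", "make it shorter"))
print(widget_state("Simply improve prompt", "make it shorter"))
```

Passing `feedback` in as an input (rather than only writing to it) is what lets the commit clear the box when a metaprompt that doesn't use feedback is selected.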
src/prompt_teacher/callbacks.py CHANGED

@@ -9,6 +9,7 @@ from langchain_core.pydantic_v1 import ValidationError
 from langchain_openai import ChatOpenAI
 from pydantic_settings import BaseSettings, SettingsConfigDict
 
+from prompt_teacher.messages import system_message
 from prompt_teacher.metaprompts import metaprompts_dict
 
 
@@ -26,7 +27,7 @@ def get_llm(
     api_key: str = settings.openai_api_key,
     structured_output=None,
 ):
-    if model_name in ["gpt-4-turbo"]:
+    if model_name in ["gpt-4-turbo", "gpt-4o"]:
         llm = ChatOpenAI(
             model=model_name,
             api_key=settings.openai_api_key if not api_key else api_key,
@@ -55,16 +56,14 @@ def explain_metaprompt(explanation_history, metaprompt):
     yield explanation_history
 
 
-def update_widgets(metaprompt):
+def update_widgets(metaprompt, feedback):
     button_variant = "primary" if metaprompt else "secondary"
     feedback_visibility = True if metaprompt == "Apply feedback" else False
-    audience_visibility = (
-        True if metaprompt == "Adapt for different audience" else False
-    )
     return [
         gr.Button(variant=button_variant),
-        gr.Textbox(
-
+        gr.Textbox(
+            visible=feedback_visibility, value=feedback if feedback_visibility else ""
+        ),
     ]
 
 
@@ -84,10 +83,14 @@ def explain_improvement(
     ---
     {improved_prompt}
     ---
-    Concisely explain the improvement
+    Concisely explain the improvement.
     """
-
-
+
+    prompt_template = ChatPromptTemplate.from_messages(
+        [
+            ("system", system_message),
+            ("human", prompt_template),
+        ]
     )
     llm = get_llm(model_name, api_key)
     parser = StrOutputParser()
@@ -112,13 +115,15 @@ def improve_prompt(
     prompt: str,
     metaprompt: str,
     feedback: str | None,
-    audience: str | None,
     explanation_history,
 ) -> Generator[Tuple[str, str], Any, Any]:
     metaprompt_template = metaprompts_dict[metaprompt].template
 
-    prompt_template = ChatPromptTemplate.
-
+    prompt_template = ChatPromptTemplate.from_messages(
+        [
+            ("system", system_message),
+            ("human", metaprompt_template),
+        ]
     )
     parser = StrOutputParser()
     llm = get_llm(model_name, api_key)
@@ -138,14 +143,7 @@ def improve_prompt(
 
     improved_prompt = ""
 
-
-    input = (
-        {"prompt": prompt, "feedback": feedback}
-        if feedback
-        else {"prompt": prompt, "audience": audience}
-        if audience
-        else {"prompt": prompt}
-    )
+    input = {"prompt": prompt, "feedback": feedback} if feedback else {"prompt": prompt}
 
     for response in llm_chain.stream(input):
         explanation_history[-1][1] += response
@@ -154,7 +152,7 @@ def improve_prompt(
 
 
 def robustly_improve_prompt(*args, **kwargs):
-    history = args[
+    history = args[5]
     user_txt = "Oh no, there is an error!💥 What should I do?"
     try:
         yield from improve_prompt(*args, **kwargs)
src/prompt_teacher/messages.py CHANGED

@@ -1,3 +1,4 @@
+system_message = """You are the "Prompt Teacher", an Advanced Prompt Engineering Interface for Large Language Models (LLMs). You are designed to assist users in crafting, refining, and optimizing prompts to achieve the most effective and targeted responses from LLMs. Whether they're looking to expand, refine, adapt, or structure their prompts, please provide clear guidelines and examples to help them formulate their queries with precision. If you are asked to improve a prompt, please respond only with the improved prompt and nothing else. If you are tasked with explaining an improved prompt, please be concise and use bullet points if helpful."""
 inital_usr_text = "I would like to **learn how to prompt**! If only someone could... 🤔💭"
 initial_bot_text = """**Hello** 👋, look no further: **I'm your prompt teacher!** 🙋‍♂️📚