Updated prompts based on Mert's feedback
app.py
CHANGED
@@ -293,19 +293,17 @@ agent_prompt = """
 You are given a peer review of a machine learning paper submitted to a top-tier ML conference on OpenReview. Your task is to provide constructive feedback to the reviewer so that it becomes a high-quality review. You will do this by evaluating the review against a checklist and providing specific feedback about where the review fails.

 Here are step-by-step instructions:
-1. Read the review and the paper
-- Carefully read through the text of the review,
-- Then, read the paper about which the review was written to understand its content and context.
+1. Read the text of the review and the paper about which the review was written.

 2. Evaluate every comment in the review:
 - Focus on comments made in the "weaknesses" or "questions" sections of the review. Ignore the "summary" and "strengths" sections.
-- For each comment, evaluate it against the following checklist.
+- For each comment, evaluate it against the following checklist. Follow the examples for how to respond.

 Checklist:
-1. Check if the reviewer requests something
+1. Check if the reviewer requests something obviously present in the paper. Only respond if certain of the reviewer's error. If so, quote the relevant paper section verbatim using <quote> </quote> tags and explain how it addresses the reviewer's point. Use only exact quotes and don't comment if uncertain.
 - Example 1:
-- **Reviewer Comment:** *..In Figure 4, the efficiency experiments have no results for Transformer models, which is a key limitation of the paper
-- **Feedback to reviewer:** You may want to check Section 3, Figure 5 of the paper which has the Transformer results. See:
+- **Reviewer Comment:** *..In Figure 4, the efficiency experiments have no results for Transformer models, which is a key limitation of the paper.*
+- **Feedback to reviewer:** You may want to check Section 3, Figure 5 of the paper which has the Transformer results. See: <quote> In Transformers, the proposed technique provides 25% relative improvement in wall-clock time (Figure 5) </quote>.
 - Example 2:
 - **Reviewer Comment:** *Figure 2. Are people forced to select a choice or could they select 'I don't know'? Did you monitor response times to see if the manipulated images required longer times for individuals to pass decisions? In Appendix A, you mention “the response times will also be released upon publication”, however, I do not see further information about this in the paper.*
 - As the reviewer already refers to the key part of the paper with quotes and asks a question that is not answered in the paper, we do not need to give feedback to this comment.
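The <quote> tags introduced in the new checklist item make the agent's citations machine-checkable downstream. As a minimal sketch of how the tagged spans could be pulled out of a model response (the helper name and regex are assumptions, not part of this commit):

import re

def extract_quotes(feedback: str) -> list[str]:
    # Collect the text inside every <quote> ... </quote> pair; DOTALL lets
    # a quote span multiple lines of the model's output.
    return [q.strip() for q in re.findall(r"<quote>(.*?)</quote>", feedback, re.DOTALL)]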
@@ -344,17 +342,14 @@ Remember:
 - Do not summarize your feedback at the end or include a preamble at the beginning.
 - Do not repeat anything the reviewer already included in their review.
 - Do not mention that you are using a checklist or guidelines.
--
+- Do not address the authors at all or provide suggestions to the authors. You are only giving feedback to the reviewer.
 """

 critic_prompt = f"""
 You are given a list of feedback about a peer review of a machine learning paper submitted to a top-tier ML conference on OpenReview. The aim of the feedback is to guide a reviewer to make the review high-quality. Your task is to edit the feedback for correctness and clarity.

 Here are step-by-step instructions:
-1. Read the review, the feedback list, and the paper
-- Carefully read through the text of the review
-- Then, read the feedback list provided for that review
-- Finally, read the paper about which the review was written to understand its content and context.
+1. Read the text of the review, the feedback list provided for that review, and the paper about which the review was written.

 2. Evaluate every piece of feedback in the feedback list:
 - For each feedback item, it is imperative that you evaluate the correctness of the feedback. If there is a quote in the feedback, ensure that the quote appears VERBATIM in the paper. You need to check every quote and factual claim in the feedback and edit for correctness, as it is imperative all the feedback is correct.
@@ -368,12 +363,14 @@ Here are step-by-step instructions:
 - Feedback: {{your short feedback}}
 - If you do not identify any issues with a comment-feedback pair, do not edit it.

+4. Remove any comment-feedback pairs where the feedback is that there is no feedback or the comment is good. The feedback should only be about edits that need to be made.
+
 Remember:
 - Be concise, limiting your feedback for each comment to 1-2 sentences.
 - Do not summarize your feedback at the end or include a preamble at the beginning.
 - Do not repeat anything the reviewer already included in their review.
 - Do not mention that you are using a checklist or guidelines.
--
+- Do not address the authors at all or provide suggestions to the authors. You are only giving feedback to the reviewer.

 Here are the guidelines that were followed to generate the feedback list originally; you should adhere to these guidelines: {agent_prompt}.
 """
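critic_prompt requires that every quote appear VERBATIM in the paper; the same property can be checked mechanically before the critic ever runs. A sketch under the assumption that whitespace runs in extracted PDF text should be ignored (the function name is hypothetical):

def quote_is_verbatim(quote: str, pdf_text: str) -> bool:
    # Collapse whitespace on both sides so PDF line breaks and double spaces
    # do not cause false negatives on otherwise exact quotes.
    normalize = lambda s: " ".join(s.split())
    return normalize(quote) in normalize(pdf_text)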
@@ -382,10 +379,7 @@ aggregator_prompt = f"""
 You are given multiple lists of feedback about a peer review of a machine learning paper submitted to a top-tier ML conference on OpenReview. The aim of the feedback is to guide a reviewer to make the review high-quality. Your task is to aggregate the lists of feedback into one list.

 Here are step-by-step instructions:
-1. Read the review, the multiple feedback lists, and the paper
-- Carefully read through the text of the review
-- Then, read each of the feedback lists provided for that review
-- Finally, read the paper about which the review was written to understand its content and context.
+1. Read the text of the review, the multiple feedback lists provided for that review, and the paper about which the review was written.

 2. For all feedback lists, aggregate them into one list with the best comment-feedback pairs from each list.
 - For each comment-feedback pair in the multiple lists that are similar, determine which provides the best feedback and keep only that pair.
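The aggregator_prompt implies a best-of-n fan-out: several independently sampled feedback lists are merged into one. A hypothetical sketch of that wiring (create_feedback and agent_prompt are visible in this diff; run_llm stands in for whatever completion helper app.py actually uses):

def aggregate_feedback(review, pdf_text, model, run_llm, n=3):
    # Sample n candidate feedback lists, then ask the model to keep only the
    # best comment-feedback pairs across them.
    candidates = [create_feedback(review, pdf_text, agent_prompt, model) for _ in range(n)]
    feedback_lists = "\n\n---\n\n".join(candidates)
    return run_llm(f"{aggregator_prompt}\n\nFeedback lists:\n{feedback_lists}", model)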
@@ -457,6 +451,7 @@ if user_input:
     best_feedback = create_feedback(review, pdf_text, agent_prompt, model)

     revised_feedback = critic(review, best_feedback, pdf_text, critic_prompt, model)
+    revised_feedback = revised_feedback.replace("<quote>", "'").replace("</quote>", "'")

     st.title(f'Review feedback')

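The added replace() call strips the <quote> markup before display, so the Streamlit page shows plain quotation marks rather than raw tags. A quick illustration of its effect:

s = "See: <quote> 25% relative improvement </quote>."
assert s.replace("<quote>", "'").replace("</quote>", "'") == "See: ' 25% relative improvement '."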