diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..c76eb90f38fd9810b84df60eb6895ca17697e949
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+.DS_Store
+Icon?
diff --git a/CODEOWNERS b/CODEOWNERS
new file mode 100644
index 0000000000000000000000000000000000000000..ce7a494556d131b4461378648e8979e65d551140
--- /dev/null
+++ b/CODEOWNERS
@@ -0,0 +1,2 @@
+# Comment line immediately above ownership line is reserved for other related information. Please be careful while editing.
+#ECCN:Open Source
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000000000000000000000000000000000000..b4612a7bc59f0b1770675cc2857d866fe41a9a31
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,105 @@
+# Salesforce Open Source Community Code of Conduct
+
+## About the Code of Conduct
+
+Equality is a core value at Salesforce. We believe a diverse and inclusive
+community fosters innovation and creativity, and are committed to building a
+culture where everyone feels included.
+
+Salesforce open-source projects are committed to providing a friendly, safe, and
+welcoming environment for all, regardless of gender identity and expression,
+sexual orientation, disability, physical appearance, body size, ethnicity, nationality,
+race, age, religion, level of experience, education, socioeconomic status, or
+other similar personal characteristics.
+
+The goal of this code of conduct is to specify a baseline standard of behavior so
+that people with different social values and communication styles can work
+together effectively, productively, and respectfully in our open source community.
+It also establishes a mechanism for reporting issues and resolving conflicts.
+
+All questions and reports of abusive, harassing, or otherwise unacceptable behavior
+in a Salesforce open-source project may be reported by contacting the Salesforce
+Open Source Conduct Committee at ossconduct@salesforce.com.
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of gender
+identity and expression, sexual orientation, disability, physical appearance,
+body size, ethnicity, nationality, race, age, religion, level of experience, education,
+socioeconomic status, or other similar personal characteristics.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy toward other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+advances
+* Personal attacks, insulting/derogatory comments, or trolling
+* Public or private harassment
+* Publishing, or threatening to publish, others' private information—such as
+a physical or electronic address—without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+professional setting
+* Advocating for or encouraging any of the above behaviors
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned with this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project email
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the Salesforce Open Source Conduct Committee
+at ossconduct@salesforce.com. All complaints will be reviewed and investigated
+and will result in a response that is deemed necessary and appropriate to the
+circumstances. The committee is obligated to maintain confidentiality with
+regard to the reporter of an incident. Further details of specific enforcement
+policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership and the Salesforce Open Source Conduct
+Committee.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][contributor-covenant-home],
+version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html.
+It includes adaptations and additions from [Go Community Code of Conduct][golang-coc],
+[CNCF Code of Conduct][cncf-coc], and [Microsoft Open Source Code of Conduct][microsoft-coc].
+
+This Code of Conduct is licensed under the [Creative Commons Attribution 3.0 License][cc-by-3-us].
+
+[contributor-covenant-home]: https://www.contributor-covenant.org (https://www.contributor-covenant.org/)
+[golang-coc]: https://golang.org/conduct
+[cncf-coc]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
+[microsoft-coc]: https://opensource.microsoft.com/codeofconduct/
+[cc-by-3-us]: https://creativecommons.org/licenses/by/3.0/us/
\ No newline at end of file
diff --git a/Dataset_Stats.csv b/Dataset_Stats.csv
new file mode 100644
index 0000000000000000000000000000000000000000..d10c3962d80aea5a2d2cc7c801d9b343afc90d08
--- /dev/null
+++ b/Dataset_Stats.csv
@@ -0,0 +1,87 @@
+Category,Data_name,Index,Year,License,Train,Validation,Test,Total
+Natural Language Understanding,ATIS,1,Speech and NLP 1990,"GNU General Public License, version 2",4478,500,893,5871
+Natural Language Understanding,ATIS-NER,2,Speech and NLP 1990,"GNU General Public License, version 2",4478,500,893,5871
+Natural Language Understanding,BANKING77,3,ACL 2020 NLP4ConvAI,CC BY 4.0,8622,1540,3080,13242
+Natural Language Understanding,BANKING77-OOS,4,ACL 2020 NLP4ConvAI,CC BY 4.0,5905,1506,2000,9411
+Natural Language Understanding,CLINC-Single-Domain-OOS-banking,5,EMNLP 2020,CC BY 3.0,500,500,500,1500
+Natural Language Understanding,CLINC-Single-Domain-OOS-credit_cards,6,EMNLP 2021,CC BY 3.0,500,500,500,1500
+Natural Language Understanding,CLINC150,7,EMNLP 2019,CC BY 3.0,15000,3000,4500,22500
+Natural Language Understanding,DSTC8-SGD,8,ACL 2020,CC BY-SA 4.0,1402,0,481,1883
+Natural Language Understanding,HWU64,9,IWSDS 2019,CC BY 4.0,8954,1076,1076,11106
+Natural Language Understanding,MIT-Movie,10,ICASSP 2013,BSD license,17590,0,4395,21985
+Natural Language Understanding,MIT-Restaurant,11,ICASSP 2014,BSD license,7660,0,1521,9181
+Natural Language Understanding,RESTAURANTS8K,12,ACL 2020,CC BY 4.0,4613,0,2379,6992
+Natural Language Understanding,SNIPS,13,ArXiv 2018,Apache License 2.0,13084,700,700,14484
+Natural Language Understanding,SNIPS-NER,14,ArXiv 2018,Apache License 2.0,13084,700,700,14484
+Natural Language Understanding,TOP,15,EMNLP 2018,CC-BY-SA,31279,0,9042,40321
+Natural Language Understanding,TOP-NER,16,EMNLP 2018,CC-BY-SA,31279,0,9042,40321
+Task Oriented Dialogue,ABCD,17,NAACL 2021,MIT License,8034,1004,1004,10042
+Task Oriented Dialogue,AirDialogue,18,EMNLP 2018,Apache License Version 2.0,321459,40363,0,361822
+Task Oriented Dialogue,BiTOD,19,NeurIPS 2021 Workshop,Apache License 2.0,2952,295,442,3689
+Task Oriented Dialogue,CaSiNo,20,NAACL 2021,CC BY 4.0,900,30,100,1030
+Task Oriented Dialogue,CraigslistBargains,21,EMNLP 2018,MIT License,4000,570,803,5373
+Task Oriented Dialogue,Disambiguation,22,NAACL 2022,MIT License,8433,999,1000,10432
+Task Oriented Dialogue,DSTC2-Clean,23,ACL 2018,"GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007",1612,506,1117,3235
+Task Oriented Dialogue,FRAMES,24,SIGDIAL 2017,"GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007",1329,0,40,1369
+Task Oriented Dialogue,GECOR,25,EMNLP 2019,CC BY 4.0,676,0,0,676
+Task Oriented Dialogue,HDSA-Dialog,26,ACL 2019,MIT License,8438,1000,1000,10438
+Task Oriented Dialogue,KETOD,27,NAACL 2022,MIT License,4247,545,532,5324
+Task Oriented Dialogue,KVRET,28,SIGDIAL 2017,No License,2425,302,304,3031
+Task Oriented Dialogue,MetaLWOZ,29,Sigdial 2019,MICROSOFT RESEARCH LICENSE TERMS,37884,0,2319,40203
+Task Oriented Dialogue,MS-DC,30,Arxiv 2018,MICROSOFT RESEARCH LICENSE TERMS,10000,0,0,10000
+Task Oriented Dialogue,MuDoCo,31,LREC 2020,Attribution-NonCommercial 4.0 International,6058,691,749,7498
+Task Oriented Dialogue,MulDoGO,32,EMNLP 2019,Community Data License Agreement – Permissive – Version 1.0,59939,1150,2319,63408
+Task Oriented Dialogue,MultiWOZ_2.1,33,LREC 2020,MIT License,8434,999,1000,10433
+Task Oriented Dialogue,MULTIWOZ2_2,34,ACL NLP4CONV AI,MIT License,8437,1000,1000,10437
+Task Oriented Dialogue,SGD,35,AAAI 2020,CC BY-SA 4.0,16142,2482,4201,22825
+Task Oriented Dialogue,SimJointGEN,36,NAACL 2020,No license,100000,10000,10000,120000
+Task Oriented Dialogue,SimJointMovie,37,NAACL 2018,No license,384,120,264,768
+Task Oriented Dialogue,SimJointRestaurant,38,NAACL 2019,No license,1116,349,775,2240
+Task Oriented Dialogue,STAR,39,CL 2020,MIT License,6652,0,0,6652
+Task Oriented Dialogue,Taskmaster1,40,EMNLP 2019,Attribution 4.0 International (CC BY 4.0),6170,769,769,7708
+Task Oriented Dialogue,Taskmaster2,41,Github 2020,Creative Commons Attribution 4.0 License (CC BY 4.0),17304,0,0,17304
+Task Oriented Dialogue,Taskmaster3,42,Github 2020,Creative Commons Attribution 4.0 License (CC BY 4.0),22724,17019,17903,57646
+Task Oriented Dialogue,WOZ2_0,43,ACL 2017,Apache License 2.0,600,200,400,1200
+Dialogue Summarization,AMI,44,ICASSP 2005,Creative Commons Attribution 4.0 International Licence (CC BY 4.0),117054,0,0,117054
+Dialogue Summarization,CRD3,45,ACL2021,CC BY-SA 4.0,108,16,16,140
+Dialogue Summarization,DialogSum,46,ACL 2020,CC BY-SA 4.0,12460,500,500,13460
+Dialogue Summarization,ECTSum,47,Findings of ACL21,MIT License,1681,249,495,2425
+Dialogue Summarization,ICSI,48,EMNLP 2022,No License,110254,0,0,110254
+Dialogue Summarization,MediaSum,49,ICASSP 2005,CC BY 4.0,443596,10000,10000,463596
+Dialogue Summarization,QMSum,50,NAACL 2021,No license,162,35,35,232
+Dialogue Summarization,SAMSum,51,NAACL 2021,non-commercial licence: CC BY-NC-ND 4.0,14732,818,819,16369
+Dialogue Summarization,TweetSumm,52,EMNLP19,Creative Commons Zero v1.0 Universal,879,110,110,1099
+Dialogue Summarization,ConvoSumm,53,ACL 2022,No License,821,200,1000,2021
+Dialogue Summarization,SummScreen_ForeverDreaming,54,ACL 2022,No License,3673,338,337,4348
+Dialogue Summarization,SummScreen_TVMegaSite,55,COLING20,Creative Commons Legal Code CC0 1.0 Universal,18915,1795,1793,22503
+Conversational Recommendation,Redial,56,EMNLP 2021,Apache License 2.0,10006,0,1342,11348
+Conversational Recommendation,INSPIRED,57,EMNLP 2020,No License,800,100,100,1000
+Conversational Recommendation,DuRecDial-2.0,58,ACL 2019,CC-BY-NC-4.0,5678,811,1752,8241
+Conversational Recommendation,OpenDialKG,59,NeurIPS 2018,CC BY 4.0,12302,0,0,12302
+Conversational Recommendation,SalesBot,60,ACL 2022,No License,10277,0,0,10277
+Open Domain Dialogue,AntiScam,61,AAAI 2020,MIT License,220,0,0,220
+Open Domain Dialogue,chitchat-dataset,62,ICAART 2020,MIT License,4018,0,0,4018
+Open Domain Dialogue,ConvAI2,63,DSTC 2019,Apache 2.0,2423,0,0,2423
+Open Domain Dialogue,Empathetic,64,ACL 2019,cc-by-nc-4.0,17802,2761,2541,23104
+Open Domain Dialogue,HH-RLHF,65,Arxiv 2022,MIT License,160800,0,8552,169352
+Open Domain Dialogue,PLACES3.5,66,EACL 2023,CC BY-NC 4.0,5591,0,0,5591
+Open Domain Dialogue,Prosocial,67,EMNLP 2022,MIT License,42304,7132,8701,58137
+Open Domain Dialogue,SODA,68,Arxiv 2023,MIT License,1191582,146346,148968,1486896
+Knowledge Grounded Dialogue,CompWebQ,69,NAACL 2018,"GNU General Public License, version 2",27639,703,2816,31158
+Knowledge Grounded Dialogue,CoQA,70,TACL 2019,MIT License,7199,500,0,7699
+Knowledge Grounded Dialogue,CoSQL,71,EMNLP 2019,CC BY-SA 4.0,4318,586,0,4904
+Knowledge Grounded Dialogue,DART,72,NAACL 2021,MIT License,62659,2768,5097,70524
+Knowledge Grounded Dialogue,FeTaQA,73,TACL 2022,CC BY-SA 4.0,7326,1001,2003,10330
+Knowledge Grounded Dialogue,GrailQA,74,WWW 2021,Apache License 2.0,44337,300,6463,51100
+Knowledge Grounded Dialogue,HybridQA,75,EMNLP 2020,MIT License,62682,3466,0,66148
+Knowledge Grounded Dialogue,MTOP,76,EACL 2021,CC BY-SA 4.0,15667,2235,4386,22288
+Knowledge Grounded Dialogue,MultiModalQA,77,ICLR 2021,No License,15688,1501,0,17189
+Knowledge Grounded Dialogue,SParC,78,ACL 2019,CC BY-SA 4.0,6064,844,0,6908
+Knowledge Grounded Dialogue,Spider,79,EMNLP 2018,CC BY-SA 4.0,7000,1034,0,8034
+Knowledge Grounded Dialogue,SQA,80,ACL 2017,CC BY-SA 4.0,4257,784,1025,6066
+Knowledge Grounded Dialogue,ToTTo,81,EMNLP 2020,CC BY-SA 3.0,120761,7700,0,128461
+Knowledge Grounded Dialogue,WebQSP,82,ACL 2016,No License,2673,309,1639,4621
+Knowledge Grounded Dialogue,WikiSQL,83,ArXiv 2017,BSD 3-Clause License,56355,8421,15878,80654
+Knowledge Grounded Dialogue,WikiTQ,84,ACL 2015,CC BY-SA 4.0,11321,2831,4344,18496
+Knowledge Grounded Dialogue,wizard_of_internet,85,ACL 2022,CC BY 4.0,8614,0,503,9117
+Knowledge Grounded Dialogue,wizard_of_wikipedia,86,ICLR 2019,CC BY 4.0,18430,967,968,20365
\ No newline at end of file
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b2949c4234a7073c094f52afe08e60f87e6c9a0e
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,207 @@
+Apache License Version 2.0
+
+Copyright (c) 2023 Salesforce, Inc.
+All rights reserved.
+
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..c517db4e68a2fd8a6f6be6bf83c7b78e79aaa888
--- /dev/null
+++ b/README.md
@@ -0,0 +1,152 @@
+
+
+
+
+
+[Paper](https://arxiv.org/abs/2307.10172),
+[Huggingface](https://huggingface.co/datasets/Salesforce/dialogstudio),
+Model,
+Twitter
+
+
+
+# DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI
+
+## News!
+
+* [Initial Release] July 2023: we're thrilled to announce the initial release of the largest unified dialog dataset collection. The full list of all available datasets is [here](./Dataset_Stats.csv).
+
+
+## Contents
+
+- [Introduction](#introduction)
+- [Loading Data](#loading-data)
+- [Datasets](#datasets)
+- [Model](#model)
+- [License](#license)
+- [Citation](#citation)
+
+## Introduction
+
+
+DialogStudio is a large collection of unified dialog datasets.
+The figure below provides a summary of the general statistics associated with DialogStudio. DialogStudio unifies each dataset while preserving its original information, which supports research on both individual datasets and Large Language Model (LLM) training. The full list of all available datasets is [here](./Dataset_Stats.csv).
+
+The data are downloadable through Huggingface as introduced in [Loading Data](#loading-data). We also provide examples for each dataset in this repo. For more granular and category-specific details, please refer to the individual folders corresponding to each category within the DialogStudio collection, e.g. [MULTIWOZ2_2](./task-oriented-dialogues/MULTIWOZ2_2/) dataset under the [task-oriented-dialogues](./task-oriented-dialogues/) category.
+
+
+
+
+
+
+DialogStudio evaluates dialogue quality based on six critical criteria, namely Understanding, Relevance, Correctness, Coherence, Completeness, and Overall Quality. Each criterion is scored on a scale of 1 to 5, with the highest scores reserved for exceptional dialogues.
+
+Given the vast number of datasets incorporated into DialogStudio, we utilized 'gpt-3.5-turbo' to assess 33 distinct datasets. The script used for this evaluation is available at this [link](https://github.com/salesforce/DialogStudio/blob/main/code/openai_dialog_quality_evaluation.py).
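+
+For a quick sense of how that script works, below is a minimal sketch that mirrors its LangChain setup. The prompt here is heavily abbreviated and the example dialogue string is invented purely for illustration; the full rubric and exact settings live in the linked script.
+
+```python
+from langchain.chains.llm import LLMChain
+from langchain.prompts import PromptTemplate
+from langchain.chat_models import ChatOpenAI
+
+# Abbreviated version of the evaluation prompt used in code/openai_dialog_quality_evaluation.py.
+# Requires OPENAI_API_KEY to be set in the environment.
+quality_prompt = PromptTemplate(
+    input_variables=["dialog"],
+    template=(
+        "Evaluate the dialogue below on Understanding, Relevance, Completeness, "
+        "Correctness, Coherence, and Overall quality, each on a scale of 1 to 5, "
+        "and reply in JSON.\n\n{dialog}"
+    ),
+)
+
+quality_chain = LLMChain(llm=ChatOpenAI(temperature=0.2, model_name="gpt-3.5-turbo"), prompt=quality_prompt)
+scores = quality_chain.run(dialog="user: I need a cheap hotel in the centre.\nsystem: Sure, any star rating in mind?")
+print(scores)  # JSON-formatted scores returned by gpt-3.5-turbo
+```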
+
+The results of our dialogue quality assessment are presented below. We plan to release evaluation scores for individual dialogues soon.
+
+
+
+
+
+
+
+## Loading Data
+
+You can load any dataset in DialogStudio from the [HuggingFace hub](https://huggingface.co/datasets/Salesforce/dialogstudio) by specifying `{dataset_name}`, which is exactly the dataset folder name. All available datasets are described in [Dataset_Stats.csv](./Dataset_Stats.csv).
+
+Below is one example to load the [MULTIWOZ2_2](./task-oriented-dialogues/MULTIWOZ2_2/) dataset under the [task-oriented-dialogues](./task-oriented-dialogues/) category:
+
+
+
+Load the dataset
+```python
+from datasets import load_dataset
+
+dataset = load_dataset('Salesforce/dialogstudio', 'MULTIWOZ2_2')
+```
+Here is the output structure of MultiWOZ 2.2
+```python
+DatasetDict({
+ train: Dataset({
+ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'],
+ num_rows: 8437
+ })
+ validation: Dataset({
+ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'],
+ num_rows: 1000
+ })
+ test: Dataset({
+ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'],
+ num_rows: 1000
+ })
+})
+```
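+
+Each split is a standard Hugging Face `Dataset`, so individual dialogues can be inspected by index. Below is a minimal sketch: the top-level field names come from the structure above, while the exact contents of fields such as `log` and `original dialog info` vary by dataset, so treat the inline comments as assumptions rather than a fixed schema.
+
+```python
+example = dataset['train'][0]
+
+print(example['new dialog id'])  # unified dialogue id assigned by DialogStudio
+print(list(example.keys()))      # the features listed above
+print(len(example['log']))       # 'log' is assumed to hold the list of unified turns
+```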
+
+
+## Datasets
+
+The datasets are split into several categories in this GitHub repository and on the [HuggingFace hub](https://huggingface.co/datasets/Salesforce/dialogstudio). You can check the [dataset table](./Dataset_Stats.csv) for more information, and you can click into each folder to browse a few examples:
+
+- [Knowledge-Grounded-Dialogues](./knowledge-grounded-dialogues/)
+- [Natural-Language-Understanding](./natural-language-understanding/)
+- [Open-Domain-Dialogues](./open-domain-dialogues/)
+- [Task-Oriented-Dialogues](./task-oriented-dialogues/)
+- [Dialogue-Summarization](./dialogue-summarization/)
+- [Conversational-Recommendation-Dialogs](./conversational-recommendation-dialogues/)
+
+
+
+
+## Model
+(Will update soon)
+
+We've rolled out version 1.0 of models trained on a few selected DialogStudio datasets. Built on small-scale pre-trained models, this version does not incorporate datasets used to train large-scale models (>=7B), such as Alpaca, ShareGPT, GPT4ALL, and UltraChat derived from OpenAI's GPT-3.5/4, or other datasets such as OASST1 and WizardCoder (note that DialogStudio has unified such datasets). As a result, it has certain limitations in writing and creative capabilities. Our initial focus is on updating the model versions to enhance existing abilities. Further improvements, including the expansion of other capabilities, are part of our roadmap and will be responsive to community requests.
+
+
+## License
+
+Our project uses the following licensing structure:
+
+1. For all the modified datasets in DialogStudio:
+ - A portion of these datasets is under the [Apache License 2.0](LICENSE.txt).
+ - Some retain their original licenses even after modification.
+ - For a few datasets that lacked a license, we have cited the relevant papers.
+2. Original dataset licenses: For reference, we also put the originally available licenses for each dataset into their respective dataset folders.
+3. Code: Our codebase is under the [Apache License 2.0](LICENSE.txt).
+
+For detailed licensing information, please refer to the specific licenses accompanying the datasets. It is important to familiarize yourself with these terms as we do not assume responsibility for licensing issues.
+
+## Acknowledgement
+We sincerely thank all dataset authors who have contributed to the Conversational AI field. Despite careful efforts, inaccuracies in our citations or references may occur. If you spot any errors or omissions, please raise an issue or submit a pull request to help us improve. Thank you!
+
+## Citation
+
+The data and code in this repository are mostly developed for or derived from the paper below. If you utilize datasets from DialogStudio, we kindly request that you cite both the original work and ours.
+
+```
+@misc{zhang2023dialogstudio,
+ title={DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI},
+  author={Jianguo Zhang and Kun Qian and Zhiwei Liu and Shelby Heinecke and Rui Meng and Ye Liu and Zhou Yu and Huan Wang and Silvio Savarese and Caiming Xiong},
+ year={2023},
+ eprint={2307.10172},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+}
+```
+
+## Contribution
+
+We enthusiastically invite contributions from the community! Join us in our shared mission to propel the field of conversational AI forward!
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000000000000000000000000000000000000..e31774df287d3b91b508341475a7cf26e146aa2d
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,7 @@
+## Security
+
+Please report any security issue to [security@salesforce.com](mailto:security@salesforce.com)
+as soon as it is discovered. This library limits its runtime dependencies in
+order to reduce the total cost of ownership as much as possible, but all consumers
+should remain vigilant and have their security stakeholders review all third-party
+products (3PP) like this one and their dependencies.
\ No newline at end of file
diff --git a/code/openai_dialog_quality_evaluation.py b/code/openai_dialog_quality_evaluation.py
new file mode 100644
index 0000000000000000000000000000000000000000..8d6e6ecf087b076cb51c5d34711144a90979e79e
--- /dev/null
+++ b/code/openai_dialog_quality_evaluation.py
@@ -0,0 +1,101 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+
+import os
+os.environ["OPENAI_API_KEY"] = ""  # set your OpenAI API key here before running
+
+
+from langchain.chains.llm import LLMChain
+from langchain.prompts import PromptTemplate
+from langchain.chat_models import ChatOpenAI
+import json
+from utils import open_json, save_json, open_jsonl
+from collections import defaultdict
+
+class EvaluateDialogs(object):
+ """ Evaluate Dialogs based on OpenAI. To run this:
+ pip install openai
+ pip install langchain
+ """
+ def __init__(self):
+ self.data_dir = "/Users/jianguozhang/TOD-Family/TOD-Studio/open-source/"
+ self.excluded_datasets = ['MetaLWOZ', "MuDoCo", "SalesBot", "HDSA-Dialog", "MULTIWOZ2_2"] # "SGD"
+
+ self.quality_agent_prompt = PromptTemplate(
+ input_variables=["dialog"],
+ template="""
+ Hi AI, I plan to train a language model for response generation. Please analyze the following dialogue and evaluate it based on the criteria provided. Assign a score from 1 (poor) to 5 (excellent) for each category. We're looking for a critical assessment, and higher scores should only be given to truly exceptional examples. The criteria for evaluation are: Understanding, Relevance, Completeness, Correctness, and Coherence.
+
+ After your assessment, provide an overall score for the dialogue along with a concise summary of your evaluation. The overall score should also be on a scale of 1 (poor) to 5 (excellent) and should represent a holistic assessment of the dialogue.
+
+            Please present your evaluation and comment in the following format:
+
+ {{
+ "Understanding": _,
+ "Relevance": _,
+ "Completeness": _,
+ "Correctness": _,
+ "Coherence": _,
+ "Overall": {{"score": _, "comment": _}}
+ }}
+
+            Please replace each underscore (_) with the appropriate score. For the 'Overall' field, provide the score and a concise comment. Regarding the comment, it should not only summarize the dialogue's quality but also highlight any issues or shortcomings you identified in the dialogue.
+
+ Below is the dialog:
+
+ {dialog}
+
+ Evaluate the dialog now.
+ """
+ )
+
+ self.quality_chain = LLMChain(llm=ChatOpenAI(temperature=0.2, model_name="gpt-3.5-turbo"), prompt=self.quality_agent_prompt)
+
+ def run_openai_evaluation(self, dialog):
+ res = self.quality_chain.run(dialog=dialog)
+        try:
+            res = json.loads(res)
+        except json.JSONDecodeError:
+            # Fall back to the raw string if the model did not return valid JSON
+            res = str(res)
+        return res
+
+ def tod(self):
+ """
+ Evaluate TOD dialogues
+ :return:
+ """
+ folder_name = "Task-Oriented-Dialogues--OpenAI"
+ folder_path = os.path.join(self.data_dir, folder_name)
+ dataset_names = os.listdir(folder_path)
+ print(dataset_names)
+ print()
+ for dataset_name in dataset_names:
+ if not os.path.isdir(os.path.join(folder_path, dataset_name)):
+ continue
+
+ data = open_json(os.path.join(folder_path, dataset_name, "train.json"))
+ f_writer = open(os.path.join(folder_path, dataset_name, "train_quality_scores.json"), "w")
+ print("Start processing: {} #total dialogs: {}".format(dataset_name, len(data)))
+
+ for index, item in enumerate(data):
+
+ output = defaultdict(dict)
+ output["source"] = item["source"]
+ output["quality score"] = self.run_openai_evaluation(item["dialog"])
+
+ json.dump(output, f_writer)
+ f_writer.write("\n") # Add a new line for readability
+                if index % 10 == 0 or index + 1 == len(data):
+                    f_writer.flush() # Flush the buffer to update the file immediately
+            f_writer.close()  # close this dataset's score file before moving on
+
+ def run(self):
+ self.tod()
+
+if __name__ == "__main__":
+    process = EvaluateDialogs()
+    # Run evaluations for dialogs
+    process.run()
diff --git a/code/preprocess_data_DialSum.py b/code/preprocess_data_DialSum.py
new file mode 100644
index 0000000000000000000000000000000000000000..508401405350a0219e2e12fe15b8b6375088005b
--- /dev/null
+++ b/code/preprocess_data_DialSum.py
@@ -0,0 +1,614 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+#!/usr/bin/env python3
+#
+import sys, os, pdb
+import json
+import shutil, errno
+from tqdm import tqdm
+import pandas as pd
+from utils.constant import *
+
+
+class PreProcessData(object):
+ """docstring for PreProcessData"""
+ def __init__(self):
+ super(PreProcessData, self).__init__()
+ self.data_dir = "/path/to/where/the/raw/dataset/is"
+ self.save_dir = "/path/to/store/the/processed/dataset/" # e.g. ./data/processed/Dialogue-Summarization
+
+
+ def _load_json(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ # return None
+ with open(path) as df:
+ data = json.loads(df.read())
+ return data
+
+
+ def _load_txt(self, path=None, split_tok="\n"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = df.read().strip().split(split_tok)
+ return data
+
+
+ def _load_csv(self, path=None, sep="\t"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = pd.read_csv(df, sep=sep)
+ return data
+
+
+ def _load_jsonl(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ data = []
+ with open(path) as df:
+ for line in df.readlines():
+ data.append(json.loads(line))
+ return data
+
+
+
+ def _load_dir_json(self, dir_path=None):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if filename in ["schema.json"]: continue
+ if not filename.endswith(".json"): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_json(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_txt(self, dir_path=None, file_type="txt"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if not filename.endswith(file_type): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_txt(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_tsv(self, dir_path=None, sep="\t"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = None
+ for filename in sorted(os.listdir(dir_path)):
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_csv(path=file_path, sep=sep)
+ total_data = pd.concat([total_data, data], ignore_index=True)
+ return total_data
+
+
+ def _save_json(self, data, path):
+ with open(path, "w") as tf:
+ json.dump(data, tf, indent=4)
+
+
+ def init_dial(self, dial_idx=0, ori_dial_id=""):
+ dial = {
+ ORI_DIAL_ID: ori_dial_id,
+ DIAL_IDX: dial_idx,
+ ORI_DIAL_INFO: {},
+ LOG: [],
+ PROMPT: [],
+ }
+ return dial
+
+
+ def init_turn(self, turn_id=0, dial_hist=[]):
+ turn = {
+ TURN_ID: turn_id,
+ USR_UTT: "",
+ SYS_UTT: "",
+ DIAL_HIST: " ".join(dial_hist),
+ ORI_USR_ANN: {},
+ ORI_SYS_ANN: {},
+ }
+ return turn
+
+
+ def save_dial(self, data, data_name="", file_idx=0, mode="train"):
+ save_name = f"dialogues_{file_idx}.json"
+ folder_path = os.path.join(self.save_dir, data_name, mode)
+ if not os.path.exists(folder_path): os.makedirs(folder_path)
+ path = os.path.join(folder_path, save_name)
+ self._save_json(data, path)
+
+
+ def copy_general(self, src, dst):
+ try:
+ shutil.copytree(src, dst, dirs_exist_ok=True)
+ except OSError as exc: # python >2.5
+ if exc.errno in (errno.ENOTDIR, errno.EINVAL):
+ shutil.copy(src, dst)
+ else: raise
+
+
+ def copy_related_files(self, data_name, exp_list=[], extra_dir=""):
+ source_dir = os.path.join(self.data_dir, data_name, extra_dir)
+ target_dir = os.path.join(self.save_dir, data_name)
+ for filename in os.listdir(source_dir):
+ if filename.startswith("."): continue # ignore hidden files
+ if filename.startswith("__"): continue # ignore hidden files
+ if filename in exp_list: continue
+ if filename.endswith(".py"): continue
+ source_path = os.path.join(source_dir, filename)
+ target_path = os.path.join(target_dir, filename)
+ self.copy_general(source_path, target_path)
+
+
+ def save_original_examples(self, examples, data_name):
+ """
+        save 5 original data points just for reference and sanity checks
+        `examples` is a list of length 5; each entry is a dialog
+        in the form of a dictionary
+ """
+ path = os.path.join(self.save_dir, data_name, "original_examples.json")
+ self._save_json(examples, path)
+ print("original examples saved")
+
+
+ def save_converted_examples(self, data_name):
+ """
+ extract the first 5 examples from the train set of the
+ already processed data, just for reference and check
+ """
+ data = self._load_json(os.path.join(self.save_dir, data_name, "train/dialogues_1.json"))
+ examples = {key: data[key] for key in list(data.keys())[:5]}
+ self._save_json(examples, os.path.join(self.save_dir, data_name, "converted_examples.json"))
+ print("converted examples saved")
+
+
+ def _import_system_file(self, filename="", module_name=""):
+ import importlib, sys
+ spec = importlib.util.spec_from_file_location(module_name, filename)
+ module = importlib.util.module_from_spec(spec)
+ sys.modules[module_name] = module
+ spec.loader.exec_module(module)
+ return module
+
+
+ def tweetsum(self):
+ """
+        the raw data is hosted on Kaggle; download and preprocess it first
+ """
+ data_name = "TweetSumm"
+ # prepare data
+ Modules = self._import_system_file(os.path.join(self.data_dir, data_name, "tweet_sum_processor.py"), "TweetSumProcessor")
+ processor = Modules.TweetSumProcessor(os.path.join(self.data_dir, data_name, "archive/twcs/twcs.csv"))
+ exp_list = ["tweet_sum_data_files", "archive", "tweet_sum_processor.py"]
+ for mode in ["train", "val", "test"]:
+ real_name = f"final_{mode}_tweetsum.jsonl" if mode != "val" else "final_valid_tweetsum.jsonl"
+ path = os.path.join(self.data_dir, data_name, "tweet_sum_data_files", real_name)
+
+ # split = self._load_jsonl(path)
+ new_data = {}
+ file_idx = 1
+ original_data_sample = []
+
+ with open(path) as f:
+ dialog_with_summaries = processor.get_dialog_with_summaries(f.readlines())
+ for dial_idx, dialog_with_summary in tqdm(enumerate(dialog_with_summaries)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+
+ json_format = dialog_with_summary.get_json()
+ dial = json.loads(json_format)
+ if mode == "train" and dial_idx < 5:
+ original_data_sample.append(dial)
+
+ new_dial = self.init_dial(dial_idx=dial_idx+1, ori_dial_id=dial["dialog"]["dialog_id"]) # idx starts from 1
+ new_dial[ORI_DIAL_INFO] = {
+ "summaries" : dial["summaries"]
+ }
+ turn_id, dial_hist = 1, []
+ new_turn = self.init_turn(turn_id=turn_id)
+ for idx, turn in enumerate(dial["dialog"]["turns"]):
+ utt = " ".join(turn["sentences"])
+ if turn["is_agent"]:
+ new_turn[SYS_UTT] += f" {utt}"
+ new_turn[SYS_UTT] = new_turn[SYS_UTT].strip()
+ if idx == len(dial["dialog"]["turns"]) - 1 or \
+ not dial["dialog"]["turns"][idx+1]["is_agent"]:
+
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+ if new_turn[USR_UTT]:
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ else:
+ new_turn[USR_UTT] += f" {utt}"
+ new_turn[USR_UTT] = new_turn[USR_UTT].strip()
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(dialog_with_summaries):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(original_data_sample, data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def samsum(self):
+ """
+        1. obtained from the HF dataset "samsum"
+        2. no sys/user roles, just two human speakers; assume the first utterance comes from the user and ignore any residual turn
+ """
+ data_name = "SAMSum"
+ # prepare data
+ from datasets import load_dataset
+ data = load_dataset("samsum")
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "validation"
+ new_data, file_idx = {}, 1
+
+ for dial_idx, dial in tqdm(enumerate(data[real_name])):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1, ori_dial_id=dial["id"]) # idx starts from 1
+ new_dial[ORI_DIAL_INFO] = {
+ "summary" : dial["summary"]
+ }
+ dial_hist = []
+ sep = "\r\n" if "\r\n" in dial["dialogue"] else "\n"
+ for turn_idx, turn in enumerate(dial["dialogue"].split(sep)):
+ speaker, utt = turn.split(": ")[0], ": ".join(turn.split(": ")[1:])
+ if turn_idx % 2 == 0:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt.strip().replace(" ", " ")
+ new_turn[ORI_USR_ANN]['speaker'] = speaker
+ else:
+ new_turn[SYS_UTT] = utt.strip().replace(" ", " ")
+ new_turn[ORI_SYS_ANN]['speaker'] = speaker
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data[real_name]):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ self.save_original_examples(data["train"][:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def dialogsum(self):
+ """
+        1. we use the data from GitHub: https://github.com/cylnlp/dialogsum/tree/main/DialogSum_Data
+        (it is also available from the HF dataset "knkarthick/dialogsum")
+        2. no sys/user roles, just two human speakers; assume the first utterance comes from the user and ignore any residual turn
+ """
+ data_name = "DialogSum"
+
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "dev"
+ path = os.path.join(self.data_dir, data_name, f"DialogSum_Data/dialogsum.{real_name}.jsonl")
+ data = self._load_jsonl(path)
+ new_data, file_idx = {}, 1
+
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1, ori_dial_id=dial["fname"]) # idx starts from 1
+ for key in dial:
+ if key in ["fname", "dialogue"]: continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+
+ dial_hist = []
+ turns = dial["dialogue"].replace("PErson","Person").split("#Person")[1:]
+ for turn_idx, turn in enumerate(turns):
+ speaker, utt = turn.split("#:")
+ speaker = "Person" + speaker
+ utt = utt.replace("\n","").strip()
+
+ if turn_idx % 2 == 0:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt.strip()
+ new_turn[ORI_USR_ANN]['speaker'] = speaker.replace("#","")
+ else:
+ new_turn[SYS_UTT] = utt.strip()
+ new_turn[ORI_SYS_ANN]['speaker'] = speaker.replace("#","")
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, ['Baseline'])
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def ami(self):
+ """
+        download the processed data from https://drive.google.com/drive/folders/1BbmaZnzG9WrqOO-D3h211NOJePotqwQJ
+        the data is separated into 6 files based on annotation type
+        here we extract the dialog context from the "dialogueActs" file
+        there is no train/val/test split, so everything is treated as train
+        no readme file needs to be copied
+        we keep the speaker labels (A, B, C, D) instead of USR_UTT/SYS_UTT
+
+        1. can a dialog contain more than 2 speakers? yes: A, B, C, D
+        2. can they speak in any order? yes, e.g. A->B->C->D
+ """
+ data_name = "AMI"
+ mode = "train"
+ data_dir = os.path.join(self.data_dir, data_name, "dialogueActs")
+ new_data, dial_idx = {}, 1
+
+ for filename in os.listdir(data_dir):
+ dial = self._load_json(os.path.join(data_dir, filename))
+ new_dial = self.init_dial(dial_idx=dial_idx) # idx starts from 1
+ # # # save dialog log
+ new_dial[ORI_DIAL_INFO]["dialog history"] = []
+ for turn in dial:
+ new_dial[ORI_DIAL_INFO]["dialog history"].append(turn["speaker"] + " : " + turn["text"])
+
+ # # # save abstractive summary
+ if os.path.exists(os.path.join(self.data_dir, data_name, "abstractive", filename)):
+ abs_sum = self._load_json(os.path.join(self.data_dir, data_name, "abstractive", filename))
+ new_dial[ORI_DIAL_INFO]["abstractive summary"] = abs_sum
+ # # # save extractive summary
+ if os.path.exists(os.path.join(self.data_dir, data_name, "extractive", filename)):
+ ext_sum = self._load_json(os.path.join(self.data_dir, data_name, "extractive", filename))
+ new_dial[ORI_DIAL_INFO]["extractive summary"] = []
+ for ext_turn in ext_sum:
+ new_dial[ORI_DIAL_INFO]["extractive summary"].append(ext_turn["speaker"] + " : " + ext_turn["text"])
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial[ORI_DIAL_ID] = filename
+ new_data[new_dial_id] = new_dial
+ dial_idx += 1
+ if dial_idx == 2:
+ self.save_original_examples(dial, data_name)
+
+ self.save_dial(new_data, data_name=data_name, file_idx=1, mode=mode)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def icsi(self):
+ """
+        similar to AMI
+        speakers can range from A to J
+ """
+ data_name = "ICSI"
+ mode = "train"
+ data_dir = os.path.join(self.data_dir, data_name, "dialogueActs")
+ new_data, dial_idx = {}, 1
+
+ for filename in os.listdir(data_dir):
+ dial = self._load_json(os.path.join(data_dir, filename))
+ new_dial = self.init_dial(dial_idx=dial_idx) # idx starts from 1
+ # # # save dialog log
+ new_dial[ORI_DIAL_INFO]["dialog history"] = []
+ for turn in dial:
+ new_dial[ORI_DIAL_INFO]["dialog history"].append(turn["speaker"] + " : " + turn["text"])
+
+ # # # save abstractive summary
+ if os.path.exists(os.path.join(self.data_dir, data_name, "abstractive", filename)):
+ abs_sum = self._load_json(os.path.join(self.data_dir, data_name, "abstractive", filename))
+ new_dial[ORI_DIAL_INFO]["abstractive summary"] = abs_sum
+ # # # save extractive summary
+ if os.path.exists(os.path.join(self.data_dir, data_name, "extractive", filename)):
+ ext_sum = self._load_json(os.path.join(self.data_dir, data_name, "extractive", filename))
+ new_dial[ORI_DIAL_INFO]["extractive summary"] = []
+ for ext_turn in ext_sum:
+ new_dial[ORI_DIAL_INFO]["extractive summary"].append(ext_turn["speaker"] + " : " + ext_turn["text"])
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial[ORI_DIAL_ID] = filename
+ new_data[new_dial_id] = new_dial
+ dial_idx += 1
+ if dial_idx == 2:
+ self.save_original_examples(dial, data_name)
+
+ self.save_dial(new_data, data_name=data_name, file_idx=1, mode=mode)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def qmsum(self):
+ data_name = "QMSum"
+ for mode in ["train", "val", "test"]:
+ path = os.path.join(self.data_dir, data_name, f"data/ALL/{mode}")
+ data = self._load_dir_json(path)
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1)
+ for key_ in dial:
+ if key_ == "meeting_transcripts": continue
+ new_dial[ORI_DIAL_INFO][key_] = dial[key_]
+
+ new_dial[ORI_DIAL_INFO]["dialog history"] = []
+ for turn in dial["meeting_transcripts"]:
+ new_dial[ORI_DIAL_INFO]["dialog history"].append(turn["speaker"] + " : " + turn["content"])
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, ['Baseline'])
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def mediasum(self):
+ data_name = "MediaSum"
+ split_id = self._load_json(os.path.join(self.data_dir, data_name, "data/train_val_test_split.json"))
+ data = self._load_json(os.path.join(self.data_dir, data_name, "data/news_dialogue.json"))
+
+ split_id2mode, new_data, file_idx, dial_idx = {}, {}, {}, {}
+ for mode in ["train", "val", "test"]:
+ for dial_id in split_id[mode]:
+ split_id2mode[dial_id] = mode
+ new_data[mode], file_idx[mode], dial_idx[mode] = {}, 1, 1
+
+ for dial in tqdm(data):
+ new_dial = self.init_dial() # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['id']
+ for key_ in dial:
+ if key_ in ["id", "utt", "speaker"]: continue
+ new_dial[ORI_DIAL_INFO][key_] = dial[key_]
+ dialog_log = []
+ for idx in range(len(dial["utt"])):
+ dialog_log.append(dial["speaker"][idx] + " : " + dial["utt"][idx])
+ new_dial[ORI_DIAL_INFO]["dialog history"] = dialog_log
+
+ mode = split_id2mode.get(dial["id"], "train")
+ new_dial_id = f"{data_name}--{mode}--{dial_idx[mode]}"
+ new_dial[DIAL_IDX] = dial_idx[mode]
+ new_data[mode][new_dial_id] = new_dial
+ dial_idx[mode] += 1
+
+ if len(new_data[mode]) == 1000:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+
+        # if there are some unsaved dialogs left, save them now
+ for mode in ["train", "val", "test"]:
+ if new_data[mode]:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, ["data"])
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def crd3(self):
+ """
+        For this dataset, we choose to present only chunk_size=2, offset=0
+        some files are missing for chunk_size=2
+ """
+ data_name = "CRD3"
+ exp_list = []
+ for filename in os.listdir(os.path.join(self.data_dir, data_name)):
+ if filename == "readme.txt": continue
+ if filename == "LICENSE": continue
+ exp_list.append(filename)
+ for mode in ["train", "val", "test"]:
+ new_data, file_idx, dial_idx = {}, 1, 1
+ for file_name in self._load_txt(os.path.join(self.data_dir, data_name, f"data/aligned data/{mode}_files")):
+ file_path = os.path.join(self.data_dir, data_name, f"data/aligned data/c=2/{file_name}_2_0.json")
+ if not os.path.exists(file_path): continue
+ data = self._load_json(file_path)
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial[ORI_DIAL_ID] = file_name
+ new_dial[ORI_DIAL_INFO] = data
+ new_data[new_dial_id] = new_dial
+ dial_idx += 1
+
+ if (dial_idx) % 1000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ if mode == "train": self.save_original_examples([new_dial[ORI_DIAL_INFO]], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def ectsum(self):
+ data_name = "ECTSum"
+ for mode in ["train", "val", "test"]:
+ new_data, file_idx, dial_idx = {}, 1, 1
+ data_dir = os.path.join(self.data_dir, data_name, "data/final", mode)
+ for file_name in os.listdir(os.path.join(data_dir, "ects")):
+                # every transcript file is expected to be a .txt file; fail loudly otherwise
+                if not file_name.endswith("txt"): raise ValueError(f"unexpected file type: {file_name}")
+ ect_data = self._load_txt(os.path.join(data_dir, "ects", file_name))
+ sum_data = self._load_txt(os.path.join(data_dir, "gt_summaries", file_name))
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial[ORI_DIAL_INFO]["file_name"] = file_name
+ new_dial[ORI_DIAL_INFO]["ect"] = ect_data
+ new_dial[ORI_DIAL_INFO]["summary"] = sum_data
+ new_data[new_dial_id] = new_dial
+ dial_idx += 1
+
+ if (dial_idx) % 1000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ if mode == "train": self.save_original_examples([new_dial[ORI_DIAL_INFO]], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, ['codes', 'data'])
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def run_all(self):
+ # self.todsum()
+ # self.tweetsum()
+ # self.samsum()
+ # self.dialogsum()
+ # self.ami()
+ # self.icsi()
+ # self.qmsum()
+ self.mediasum()
+ # self.crd3()
+ # self.ectsum()
+ pass
+
+
+ def copy_example(self):
+ source_dir = self.save_dir
+ target_dir = "/home/qkun/projs/TOD-Project/Datasets/Dialogue-Summarization_PROCESSED/"
+ file_list = ["converted_examples.json", "original_examples.json", "readme.txt", "LICENSE"]
+ for dir_name in sorted(os.listdir(source_dir)):
+ if os.path.isfile(os.path.join(source_dir, dir_name)): continue
+ if not os.path.exists(os.path.join(target_dir, dir_name)): os.makedirs(os.path.join(target_dir, dir_name))
+ for filename in file_list:
+ source_path = os.path.join(source_dir, dir_name, filename)
+ target_path = os.path.join(target_dir, dir_name, filename)
+ if not os.path.exists(source_path): continue
+ shutil.copy(source_path, target_path)
+
+
+def main():
+ preprocess = PreProcessData()
+ preprocess.run_all()
+ preprocess.copy_example()
+
+if __name__ == '__main__':
+ main()
diff --git a/code/preprocess_data_KG.py b/code/preprocess_data_KG.py
new file mode 100644
index 0000000000000000000000000000000000000000..91f64f7ed4510091b07c8abeb864db89f8decff4
--- /dev/null
+++ b/code/preprocess_data_KG.py
@@ -0,0 +1,368 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+#!/usr/bin/env python3
+#
+import sys, os, pdb
+import json
+import shutil, errno
+from tqdm import tqdm
+import pandas as pd
+from utils.constant import *
+
+
+class PreProcessData(object):
+ """docstring for PreProcessData"""
+ def __init__(self):
+ super(PreProcessData, self).__init__()
+ self.data_dir = "/path/to/where/the/raw/dataset/is"
+ self.save_dir = "/path/to/store/the/processed/dataset/" # e.g. ./data/processed/Knowledge-Grounded
+
+
+ def _load_json(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ # return None
+ with open(path) as df:
+ data = json.loads(df.read())
+ return data
+
+
+ def _load_txt(self, path=None, split_tok="\n"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = df.read().strip().split(split_tok)
+ return data
+
+
+ def _load_csv(self, path=None, sep="\t"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = pd.read_csv(df, sep=sep)
+ return data
+
+
+ def _load_jsonl(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ data = []
+ with open(path) as df:
+ for line in df.readlines():
+ data.append(json.loads(line))
+ return data
+
+
+
+ def _load_dir_json(self, dir_path=None):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if filename in ["schema.json"]: continue
+ if not filename.endswith(".json"): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_json(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_txt(self, dir_path=None, file_type="txt"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if not filename.endswith(file_type): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_txt(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_tsv(self, dir_path=None, sep="\t"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = None
+ for filename in sorted(os.listdir(dir_path)):
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_csv(path=file_path, sep=sep)
+ total_data = pd.concat([total_data, data], ignore_index=True)
+ return total_data
+
+
+ def _save_json(self, data, path):
+ with open(path, "w") as tf:
+ json.dump(data, tf, indent=4)
+
+
+ def init_dial(self, dial_idx=0, ori_dial_id=""):
+ dial = {
+ ORI_DIAL_ID: "",
+ DIAL_IDX: dial_idx,
+ ORI_DIAL_INFO: {},
+ LOG: [],
+ # EK_ORI: {
+ # TOD_EK:{},
+ # },
+ # EK: "",
+ PROMPT: [],
+ }
+ return dial
+
+
+ def init_turn(self, turn_id=0, dial_hist=[]):
+ turn = {
+ TURN_ID: int(turn_id),
+ USR_UTT: "",
+ SYS_UTT: "",
+ DIAL_HIST: " ".join(dial_hist),
+ ORI_USR_ANN: {},
+ ORI_SYS_ANN: {},
+ EK_ORI: {
+ TOD_EK:{},
+ },
+ EK: "",
+ }
+ return turn
+
+
+ def save_dial(self, data, data_name="", file_idx=0, mode="train"):
+ save_name = f"dialogues_{file_idx}.json"
+ folder_path = os.path.join(self.save_dir, data_name, mode)
+ if not os.path.exists(folder_path): os.makedirs(folder_path)
+ path = os.path.join(folder_path, save_name)
+ self._save_json(data, path)
+
+
+ def save_original_examples(self, examples, data_name):
+ """
+        save 5 original data points just for reference and checking
+        the data is a list of length 5, where each entry is a dialog
+        in the form of a dictionary
+ """
+ path = os.path.join(self.save_dir, data_name, "original_examples.json")
+ self._save_json(examples, path)
+ print("original examples saved")
+
+
+ def save_converted_examples(self, data_name):
+ """
+        extract the first 5 examples from the train set of the
+        already processed data, just for reference and checking
+ """
+ data = self._load_json(os.path.join(self.save_dir, data_name, "train/dialogues_1.json"))
+ examples = {key: data[key] for key in list(data.keys())[:5]}
+ self._save_json(examples, os.path.join(self.save_dir, data_name, "converted_examples.json"))
+ print("converted examples saved")
+
+
+ def dict_to_str(self, ek_ori):
+ """
+ turn non-flat external knowledge into string
+ original format:
+ "metadata":{
+ domain: [
+ {
+ attr1: value1,
+ attr2: value2,
+ ...
+ },
+ ...
+ ]
+ }
+ output format:
+ ( metadata : ( domain : ( attr1 : value1 | attr2 : value2 | ... ) | ( ... ) | ... ))
+ """
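+        # Illustrative example (hypothetical input), showing roughly what the
+        # replacements below produce:
+        #   {"hotel": [{"area": "west", "stars": "3"}]}
+        #   -> "( hotel : (( area : west | stars : 3 )))"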
+ ek = str(ek_ori).replace("'"," ").replace(", "," | ")
+ ek = ek.replace("{","(").replace("}",")").replace("[","(").replace("]",")")
+        ek = ek.replace("  ", " ") # collapse double spaces introduced by the replacements above
+ return ek
+
+
+ def wow(self):
+ """
+ Speakers: Apprentice (always starts a turn), Wizard (ends a turn)
+        turn-level EK only: the wizard's checked passage title and checked
+        sentence are stored as the external knowledge for each turn
+ """
+ data_name = "wizard_of_wikipedia"
+ for mode in ["train", "val", "test"]:
+ if mode == "train": filename = "train.json"
+ elif mode == "val": filename = "valid_topic_split.json"
+ else: filename = "test_topic_split.json"
+ data = self._load_json(os.path.join(self.data_dir, data_name, filename))
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial = self.init_dial(dial_idx=dial_idx+1)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial[ORI_DIAL_INFO]["chosen_topic"] = dial["chosen_topic"]
+ new_dial[ORI_DIAL_INFO]["persona"] = dial["persona"]
+ new_dial[ORI_DIAL_INFO]["wizard_eval"] = dial["wizard_eval"]
+ new_dial[ORI_DIAL_INFO]["chosen_topic_passage"] = dial["chosen_topic_passage"]
+ turn_idx, dial_hist = 1, []
+ new_turn = self.init_turn(turn_id=turn_idx)
+ for turn in (dial["dialog"]):
+ if turn["speaker"].split("_")[-1] == "Apprentice":
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ for key_ in turn:
+ if key_ == "text":
+ new_turn[USR_UTT] = turn["text"]
+ else:
+ new_turn[ORI_USR_ANN][key_] = turn[key_]
+
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ elif turn["speaker"].split("_")[-1] == "Wizard":
+ for key_ in turn:
+ if key_ == "text":
+ new_turn[SYS_UTT] = turn["text"]
+ else:
+ new_turn[ORI_SYS_ANN][key_] = turn[key_]
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ if not turn["checked_passage"]:
+ turn["checked_passage"] = {"none": dial["chosen_topic"]}
+ if not turn["checked_sentence"]:
+ turn["checked_sentence"] = {"no_passages_used": "no_passages_used"}
+ if len(turn["checked_passage"]) == 2 and "no_passages_used" in turn["checked_passage"]:
+ # for case turn["checked_passage"] = {'chosen_topic_0_Aquarium': 'Aquarium', 'no_passages_used': 'no_passages_used'}
+ del turn["checked_passage"]["no_passages_used"]
+ # if len(turn["checked_passage"].values()) != 1 or len(turn["checked_sentence"].values()) != 1: pdb.set_trace()
+ title = list(turn["checked_passage"].values())[0]
+ sent = list(turn["checked_sentence"].values())[0]
+ new_turn[EK_ORI][TOD_EK][title] = sent
+ new_turn[EK] = self.dict_to_str(new_turn[EK_ORI][TOD_EK])
+ new_dial[LOG].append(new_turn)
+ turn_idx += 1
+ else:
+ print(turn["speaker"])
+ raise ValueError("Unknown speaker")
+
+ if not new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+
+ new_data[new_dial_id] = new_dial
+ if new_dial[DIAL_IDX] % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {dial_idx+1} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def woi(self):
+ """
+ actions:
+ Apprentice => Wizard
+ Wizard => SearchAgent
+ SearchAgent => Wizard
+ Wizard => Apprentice
+ """
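+        # Turn assembly (as implemented below): an "Apprentice => Wizard" message opens a new
+        # turn; "Wizard => SearchAgent" / "SearchAgent => Wizard" events are attached to that
+        # turn's system annotations as query / query_result pairs; a "Wizard => Apprentice"
+        # message fills the system utterance, stores the selected passages as turn-level EK,
+        # and closes the turn.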
+ data_name = "wizard_of_internet"
+ for mode in ["test", "train"]:
+ data = self._load_jsonl(os.path.join(self.data_dir, data_name, f"{mode}.jsonl"))
+ data = {k:v for dial in data for k,v in dial.items()}
+ new_data, file_idx, dial_idx = {}, 1, 1
+ for dial_id, dial in tqdm(data.items()):
+ # new_dial = dial
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["apprentice_persona"] = dial["apprentice_persona"]
+ new_dial[ORI_DIAL_INFO]["start_timestamp"] = dial["start_timestamp"]
+ turn_idx, dial_hist = 1, []
+ new_turn = self.init_turn(turn_id=turn_idx)
+ for turn in dial["dialog_history"]:
+ if turn["action"] == "Apprentice => Wizard":
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = turn["text"]
+ new_turn[ORI_USR_ANN]["timestamp"] = turn["timestamp"]
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ elif turn["action"] == "Wizard => SearchAgent":
+ if "query" not in new_turn[ORI_SYS_ANN]:
+ new_turn[ORI_SYS_ANN]["query"] = []
+ new_turn[ORI_SYS_ANN]["query"].append({
+ "query": turn["text"],
+ "query_result": "",
+ "timestamp_query": turn["timestamp"],
+ })
+ elif turn["action"] == "SearchAgent => Wizard":
+ # checked, each query corresponds to one query result
+ # if new_turn[ORI_SYS_ANN]["query"][-1]["query_result"]: pdb.set_trace()
+ new_turn[ORI_SYS_ANN]["query"][-1]["query_result"] = turn["context"]
+ elif turn["action"] == "Wizard => Apprentice":
+ new_turn[SYS_UTT] = turn["text"]
+ for doc_id, doc in enumerate(turn["context"]["selected_contents"][1:]):
+ for sent_id, choose in enumerate(doc):
+ if choose:
+ title = turn["context"]["contents"][doc_id]["title"]
+ sent = turn["context"]["contents"][doc_id]["content"][sent_id]
+ if title not in new_turn[EK_ORI][TOD_EK]:
+ new_turn[EK_ORI][TOD_EK][title] = []
+ new_turn[EK_ORI][TOD_EK][title].append(sent)
+ new_turn[EK] = self.dict_to_str(new_turn[EK_ORI][TOD_EK])
+ new_turn[ORI_SYS_ANN]["context"] = turn["context"]
+ new_turn[ORI_SYS_ANN]["timestamp"] = turn["timestamp"]
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+ turn_idx += 1
+ else:
+ # checked, no such turns
+ print(turn["action"])
+ raise ValueError("The fifth case")
+ if not new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+
+ # new_dial[EK_ORI][TOD_EK]["apprentice_persona"] = dial["apprentice_persona"]
+ # new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_data[new_dial_id] = new_dial
+ if dial_idx % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ if mode == "train": self.save_original_examples({k:data[k] for k in list(data.keys())[:5]}, data_name)
+ print(f"finishing processing {dial_idx-1} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+ def run_all(self):
+ self.wow()
+ self.woi()
+
+
+ def copy_example(self):
+ source_dir = self.save_dir
+ for target_dir in [ "/home/qkun/projs/TOD-Project/Datasets/Knowledge-Grounded_PROCESSED/", "/home/qkun/projs/DialogStudio-Release/knowledge-grounded-dialogues/"]:
+ # target_dir = "/home/qkun/projs/TOD-Project/Datasets/Knowledge-Grounded_PROCESSED/"
+ file_list = ["converted_examples.json", "original_examples.json", "readme.txt", "LICENSE"]
+ for dir_name in sorted(os.listdir(source_dir)):
+ if os.path.isfile(os.path.join(source_dir, dir_name)): continue
+ if not os.path.exists(os.path.join(target_dir, dir_name)): os.makedirs(os.path.join(target_dir, dir_name))
+ for filename in file_list:
+ source_path = os.path.join(source_dir, dir_name, filename)
+ target_path = os.path.join(target_dir, dir_name, filename)
+ if not os.path.exists(source_path): continue
+ shutil.copy(source_path, target_path)
+
+
+def main():
+ preprocess = PreProcessData()
+ preprocess.run_all()
+ preprocess.copy_example()
+
+if __name__ == '__main__':
+ main()
diff --git a/code/preprocess_data_OD.py b/code/preprocess_data_OD.py
new file mode 100644
index 0000000000000000000000000000000000000000..316b5900eb00e224067f42affa1a0a55494a7552
--- /dev/null
+++ b/code/preprocess_data_OD.py
@@ -0,0 +1,574 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+#!/usr/bin/env python3
+#
+import sys, os, pdb
+import json
+import shutil, errno
+from tqdm import tqdm
+import pandas as pd
+from constant import *
+
+
+class PreProcessData(object):
+ """docstring for PreProcessData"""
+ def __init__(self):
+ super(PreProcessData, self).__init__()
+ self.data_dir = "/path/to/where/the/raw/dataset/is"
+ self.save_dir = "/path/to/store/the/processed/dataset/" # e.g. ./data/processed/Open-Domain
+
+
+ def _load_json(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ # return None
+ with open(path) as df:
+ data = json.loads(df.read())
+ return data
+
+
+ def _load_txt(self, path=None, split_tok="\n", encoding="utf-8"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path, 'r', encoding=encoding) as df:
+ data = df.read().strip().split(split_tok)
+ return data
+
+
+ def _load_csv(self, path=None, sep="\t"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = pd.read_csv(df, sep=sep)
+ return data
+
+
+ def _load_jsonl(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ data = []
+ with open(path) as df:
+ for line in df.readlines():
+ data.append(json.loads(line))
+ return data
+
+
+
+ def _load_dir_json(self, dir_path=None):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if filename in ["schema.json"]: continue
+ if not filename.endswith(".json"): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_json(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_txt(self, dir_path=None, file_type="txt"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if not filename.endswith(file_type): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_txt(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_tsv(self, dir_path=None, sep="\t"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = None
+ for filename in sorted(os.listdir(dir_path)):
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_csv(path=file_path, sep=sep)
+ total_data = pd.concat([total_data, data], ignore_index=True)
+ return total_data
+
+
+ def _save_json(self, data, path):
+ with open(path, "w") as tf:
+ json.dump(data, tf, indent=4)
+
+
+ def init_dial(self, dial_idx=0, ori_dial_id=""):
+ dial = {
+ ORI_DIAL_ID: ori_dial_id,
+ DIAL_IDX: int(dial_idx),
+ ORI_DIAL_INFO: {},
+ LOG: [],
+ PROMPT: [],
+ }
+ return dial
+
+
+ def init_turn(self, turn_id=0, dial_hist=[]):
+ turn = {
+ TURN_ID: int(turn_id),
+ USR_UTT: "",
+ SYS_UTT: "",
+ DIAL_HIST: " ".join(dial_hist),
+ ORI_USR_ANN: {},
+ ORI_SYS_ANN: {},
+ }
+ return turn
+
+
+ def save_dial(self, data, data_name="", file_idx=0, mode="train"):
+ save_name = f"dialogues_{file_idx}.json"
+ folder_path = os.path.join(self.save_dir, data_name, mode)
+ if not os.path.exists(folder_path): os.makedirs(folder_path)
+ path = os.path.join(folder_path, save_name)
+ self._save_json(data, path)
+
+
+ def save_original_examples(self, examples, data_name):
+ """
+        save 5 original data points just for reference and checking
+        the data is a list of length 5, where each entry is a dialog
+        in the form of a dictionary
+ """
+ path = os.path.join(self.save_dir, data_name, "original_examples.json")
+ self._save_json(examples, path)
+ print("original examples saved")
+
+
+ def save_converted_examples(self, data_name):
+ """
+        extract the first 5 examples from the train set of the
+        already processed data, just for reference and checking
+ """
+ data = self._load_json(os.path.join(self.save_dir, data_name, "train/dialogues_1.json"))
+ examples = {key: data[key] for key in list(data.keys())[:5]}
+ self._save_json(examples, os.path.join(self.save_dir, data_name, "converted_examples.json"))
+ print("converted examples saved")
+
+
+
+ def places(self):
+ """
+ no train/val/test split"""
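+        # Parsing note (as implemented below): utterances prefixed with "Alice:" are treated as
+        # the user side and "Bob:" as the system side; dialogs where a third speaker ("Emilie:")
+        # appears are skipped, and unprefixed lines are appended to the previous utterance.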
+ data_name = "PLACES3.5"
+ mode = "train"
+ data = self._load_jsonl(os.path.join(self.data_dir, data_name, "data.jsonl"))
+ new_data, file_idx, dial_idx = {}, 1, 1
+ for dial in (data):
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ for key in dial:
+ if key == "conversation": continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+ dial_hist, multiparty = [], False
+ for turn_idx, utt in enumerate(dial["conversation"]):
+ if utt.startswith("Alice:"):
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt.split("Alice:")[-1].strip()
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ elif utt.startswith("Bob:"):
+ new_turn[SYS_UTT] = utt.split("Bob:")[-1].strip()
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+ elif utt.startswith("Emilie:"):
+ multiparty = True
+ break
+ else:
+ if len(utt.split(":")[0].split()) == 1:
+ # might have a third speaker
+ raise ValueError("Unknown Speaker ... ")
+ else:
+ if not turn_idx: continue
+ if new_turn[SYS_UTT]:
+ new_turn[SYS_UTT] += " " + utt
+ else:
+ new_turn[USR_UTT] += " " + utt
+ dial_hist[-1] += " " + utt
+ if multiparty: continue
+ new_data[new_dial_id] = new_dial
+ if (dial_idx) % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"finishing processing {new_dial[DIAL_IDX]} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def chitchat(self):
+ """
+ no train/val/test split"""
+ data_name = "chitchat-dataset"
+ mode = "train"
+ data = self._load_json(os.path.join(self.data_dir, data_name, "chitchat_dataset/dataset.json"))
+ new_data, file_idx, dial_idx = {}, 1, 1
+ for dial_id, dial in data.items():
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial[ORI_DIAL_ID] = dial_id
+ for key in dial:
+ if key == "messages": continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+ dial_hist, speakers = [], []
+ for turn in dial["messages"]:
+ if turn[0]["sender"] not in speakers:
+ speakers.append(turn[0]["sender"])
+ if len(speakers) < 2: continue
+ # if len(speakers) != 2:
+ # print("This is a multi-party dialog")
+ # continue
+ for turn_idx, turn in enumerate(dial["messages"]):
+ if turn[0]["sender"] == speakers[0]:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = " ".join([row["text"] for row in turn])
+ new_turn[ORI_USR_ANN]["sender"] = turn[0]["sender"]
+ new_turn[ORI_USR_ANN]["timestamp"] = [row["timestamp"] for row in turn]
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+
+ elif turn[0]["sender"] == speakers[1]:
+ new_turn[SYS_UTT] = " ".join([row["text"] for row in turn])
+ new_turn[ORI_SYS_ANN]["sender"] = turn[0]["sender"]
+ new_turn[ORI_SYS_ANN]["timestamp"] = [row["timestamp"] for row in turn]
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx) % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"finishing processing {new_dial[DIAL_IDX]} dialogs for {mode} set ...")
+ self.save_original_examples({k:data[k] for k in list(data.keys())[:5]}, data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def prosocial(self):
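+        # The HuggingFace split is a flat table of (context, response) rows: response_id == 0
+        # marks the first turn of a dialog and episode_done marks its last turn, so the rows
+        # are grouped back into dialogs on those two flags below.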
+ data_name = "Prosocial"
+ from datasets import load_dataset
+ for mode in ["train", "val", "test"]:
+ new_data, file_idx = {}, 1
+ real_name = "validation" if mode == "val" else mode
+ data = load_dataset("allenai/prosocial-dialog", split=real_name)
+ data_df = data.to_pandas()
+ for row_id in (range(len(data_df))):
+ if data_df["response_id"][row_id] == 0:
+ new_dial = self.init_dial(dial_idx=data_df["dialogue_id"][row_id]+1)
+ dial_hist = []
+
+ new_turn = self.init_turn(turn_id=data_df["response_id"][row_id]+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = data_df["context"][row_id]
+ new_turn[SYS_UTT] = data_df["response"][row_id]
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+
+ for key in data_df.keys():
+ if key in ["context", "response"]: continue
+ # numpy.ndarray cannot be written into json
+ if type(data_df[key][row_id]) == str:
+ new_turn[ORI_USR_ANN][key] = data_df[key][row_id]
+ else:
+ new_turn[ORI_USR_ANN][key] = data_df[key][row_id].tolist()
+
+ new_dial[LOG].append(new_turn)
+ if data_df["episode_done"][row_id]:
+ new_dial_id = f"{data_name}--{mode}--{new_dial[DIAL_IDX]}"
+ new_data[new_dial_id] = new_dial
+ if new_dial[DIAL_IDX] % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"finishing processing {new_dial[DIAL_IDX]} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def hhrlhf(self):
+ """
+        only the "chosen" side of each preference pair is used"""
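+        # Each "chosen" entry is assumed to be a plain-text transcript with alternating
+        # "Human:" / "Assistant:" prefixes; normalizing "Assistant:" to "Human:" below lets a
+        # single split recover the utterances in order (even indices = user, odd = system).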
+ from datasets import load_dataset
+ data_name = "HH-RLHF"
+ for mode in ["train", "test"]:
+ data = load_dataset("Anthropic/hh-rlhf", split=mode)
+ data_df = data.to_pandas()
+ new_data, file_idx = {}, 1
+ for i in (range(len(data_df))):
+ new_dial = self.init_dial(dial_idx=i+1)
+ new_dial_id = f"{data_name}--{mode}--{i+1}"
+ dial_hist = []
+ utts = data_df["chosen"][i].replace("Assistant:", "Human:").split("Human:")
+ for turn_idx, utt in enumerate(utts[1:]):
+ utt = utt.replace("\n\n", " ").strip()
+ if turn_idx % 2 == 0:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ else:
+ new_turn[SYS_UTT] = utt
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+
+ new_data[new_dial_id] = new_dial
+ if new_dial[DIAL_IDX] % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"finishing processing {new_dial[DIAL_IDX]} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def empathetic(self):
+ """
+        consecutive utterances from the same speaker can occur and are merged"""
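+        # Merging rule (as implemented below): when the same speaker produces consecutive
+        # utterances, the extra utterance is appended to the user/system utterance of the
+        # current (or previous) turn instead of opening a new turn.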
+ data_name = "Empathetic"
+ from datasets import load_dataset
+ for mode in ["train", "val", "test"]:
+ real_name = "validation" if mode == "val" else mode
+ data = load_dataset("empathetic_dialogues", split=real_name)
+ data_df = data.to_pandas()
+ new_data, file_idx, dial_idx, speakers = {}, 1, 1, []
+ for row_id in (range(len(data_df))):
+ utt = data_df["utterance"][row_id].replace("_comma_", ",").strip()
+ if data_df["utterance_idx"][row_id] == 1:
+ new_dial = self.init_dial(dial_idx)
+ new_dial[ORI_DIAL_ID] = data_df["conv_id"][row_id]
+ new_dial[ORI_DIAL_INFO]["context"] = data_df["context"][row_id]
+ new_dial[ORI_DIAL_INFO]["selfeval"] = data_df["selfeval"][row_id]
+ dial_hist = []
+
+ # process the first turn
+ new_turn = self.init_turn(turn_id=1)
+ new_turn[USR_UTT] = data_df["prompt"][row_id].strip()
+ new_turn[SYS_UTT] = utt
+ new_turn[ORI_USR_ANN]["tags"] = ""
+ new_turn[ORI_USR_ANN]["speaker_idx"] = int(data_df["speaker_idx"][row_id+1])
+ new_turn[ORI_SYS_ANN]["tags"] = data_df["tags"][row_id]
+ new_turn[ORI_SYS_ANN]["speaker_idx"] = int(data_df["speaker_idx"][row_id])
+
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ # speakers.append(data_df["speaker_idx"][row_id])
+ # in the first turn, the first speaker's utt is in the prompt and
+ # utterance contains the utt from the second speaker
+ second_speaker_id = data_df["speaker_idx"][row_id]
+ new_dial[LOG].append(new_turn)
+ new_turn = self.init_turn(turn_id=(int(data_df["utterance_idx"][row_id])+1)//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+
+ elif data_df["speaker_idx"][row_id] == second_speaker_id:
+ if not new_turn[USR_UTT]: # in this case, consecutive turns from system side happens, we add utt directly to new_dial[LOG][-1]
+ new_dial[LOG][-1][SYS_UTT] += " " + utt
+ dial_hist[-1] += " " + utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ else:
+ new_turn[SYS_UTT] = utt
+ new_turn[ORI_SYS_ANN]["tags"] = data_df["tags"][row_id]
+ new_turn[ORI_SYS_ANN]["speaker_idx"] = int(data_df["speaker_idx"][row_id])
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+ new_turn = self.init_turn(turn_id=(int(data_df["utterance_idx"][row_id])+1)//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+
+ else:
+ if not new_turn[USR_UTT]:
+ new_turn[USR_UTT] = utt
+ new_turn[ORI_USR_ANN]["tags"] = data_df["tags"][row_id]
+ new_turn[ORI_USR_ANN]["speaker_idx"] = int(data_df["speaker_idx"][row_id])
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ else: # in this case, consecutive turns from user side happens, we add utt directly to new_turn
+ new_turn[USR_UTT] += " " + utt
+ dial_hist[-1] += " " + utt
+
+ if row_id == len(data_df)-1 or data_df["utterance_idx"][row_id+1] == 1:
+ # append the rest dialog in case ends with user side
+ if new_turn[USR_UTT]:
+ new_dial[LOG].append(new_turn)
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_data[new_dial_id] = new_dial
+
+ if dial_idx % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"finishing processing {dial_idx-1} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def convai2(self):
+ """
+        incomplete dialogs are included in the raw data; we remove dialogs with one turn or fewer"""
+ from datasets import load_dataset
+ data_name = "ConvAI2"
+ mode = "train"
+ data = load_dataset("conv_ai_2", split=mode)
+ data_df = data.to_pandas()
+ new_data, file_idx, dial_idx = {}, 1, 1
+ for i in (range(len(data_df))):
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial[ORI_DIAL_ID] = data_df["dialog_id"][i]
+ new_dial[ORI_DIAL_INFO]["id"] = data_df["id"][i]
+ new_dial[ORI_DIAL_INFO]["bot_profile"] = ["".join(persona) for persona in data_df["bot_profile"][i]]
+ new_dial[ORI_DIAL_INFO]["user_profile"] = ["".join(persona) for persona in data_df["user_profile"][i]]
+ new_dial[ORI_DIAL_INFO]["eval_score"] = int(data_df["eval_score"][i])
+ new_dial[ORI_DIAL_INFO]["profile_match"] = int(data_df["profile_match"][i])
+ if len(data_df["dialog"][i]) <= 2: continue
+ if "Text is not given." in " ".join([turn["text"] for turn in data_df["dialog"][i]]): continue
+ dial_hist = []
+ for turn_idx, turn in enumerate(data_df["dialog"][i]):
+ if turn_idx % 2 == 0:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = turn["text"]
+ new_turn[ORI_USR_ANN]["id"] = turn["id"]
+ new_turn[ORI_USR_ANN]["sender"] = turn["sender"]
+ new_turn[ORI_USR_ANN]["sender_class"] = turn["sender_class"]
+
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ else:
+ new_turn[SYS_UTT] = turn["text"]
+ new_turn[ORI_SYS_ANN]["id"] = turn["id"]
+ new_turn[ORI_SYS_ANN]["sender"] = turn["sender"]
+ new_turn[ORI_SYS_ANN]["sender_class"] = turn["sender_class"]
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+ if not new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+ new_data[new_dial_id] = new_dial
+ if new_dial[DIAL_IDX] % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ print(f"finishing processing {dial_idx-1} dialogs for {mode} set ...")
+ if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def antiscam(self):
+ """
+ 0: attacker
+ 1: agent
+ 0 always starts conversation
+ 1 always ends conversation
+ """
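+        # Assumed raw format: one "speaker<TAB>utterance" line per message, with rows whose
+        # speaker field is empty acting as separators between dialogs (handled below).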
+ data_name = "AntiScam"
+ data = self._load_txt(os.path.join(self.data_dir, data_name, "data/AntiScam_all.txt"), encoding='latin-1')
+ new_data, file_idx, dial_idx, turn_idx, dial_hist = {}, 1, 1, 1, []
+ mode = "train"
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_turn = self.init_turn(turn_id=turn_idx)
+ for row in (data):
+ speaker, utt = row.split("\t")
+ if speaker == "0":
+ if new_turn[SYS_UTT]: # start a new turn
+ # wrap up the previous turn
+ new_dial[LOG].append(new_turn)
+ turn_idx += 1
+ dial_hist.append(f"<{SPEAKER1.upper()}> " + new_turn[USR_UTT])
+ dial_hist.append(f"<{SPEAKER2.upper()}> " + new_turn[SYS_UTT])
+ # start a new turn
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt.strip('\"')
+ else: # multiple utt from '0'
+ new_turn[USR_UTT] += " " + utt.strip('\"')
+ new_turn[USR_UTT] = new_turn[USR_UTT].strip()
+ elif speaker == "1":
+ new_turn[SYS_UTT] += " " + utt.strip('"')
+ new_turn[SYS_UTT] = new_turn[SYS_UTT].strip()
+ elif not speaker: # finish a dialog
+ if new_turn[SYS_UTT]: # wrap up the previous turn
+ new_dial[LOG].append(new_turn)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_data[new_dial_id] = new_dial
+ if dial_idx % 10000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+ turn_idx = 1
+ dial_hist = []
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_turn = self.init_turn(turn_id=turn_idx)
+ else:
+ raise ValueError("Unknown speaker ... ")
+ if new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_data[new_dial_id] = new_dial
+        print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+        # save any remaining dialogs
+        if new_data: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ self.save_original_examples(data[:150], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+
+
+ def run_all(self):
+ # self.places()
+ # self.chitchat()
+ # self.prosocial()
+ # self.hhrlhf()
+ # self.empathetic()
+ # self.convai2()
+ self.antiscam()
+
+
+ def copy_example(self):
+ source_dir = self.save_dir
+ for target_dir in [ "/home/qkun/projs/TOD-Project/Datasets/Open-Domain_PROCESSED/", "/home/qkun/projs/DialogStudio-Release/open-domain-dialogues/"]:
+ # target_dir = "/home/qkun/projs/TOD-Project/Datasets/Open-Domain_PROCESSED/"
+ # target_dir2 = "/home/qkun/projs/DialogStudio-Release/open-domain-dialogues/"
+ file_list = ["converted_examples.json", "original_examples.json", "readme.txt", "LICENSE"]
+ for dir_name in sorted(os.listdir(source_dir)):
+ if os.path.isfile(os.path.join(source_dir, dir_name)): continue
+ if not os.path.exists(os.path.join(target_dir, dir_name)): os.makedirs(os.path.join(target_dir, dir_name))
+ for filename in file_list:
+ source_path = os.path.join(source_dir, dir_name, filename)
+ target_path = os.path.join(target_dir, dir_name, filename)
+ if not os.path.exists(source_path): continue
+ shutil.copy(source_path, target_path)
+
+
+def main():
+ preprocess = PreProcessData()
+ preprocess.run_all()
+ preprocess.copy_example()
+
+if __name__ == '__main__':
+ main()
diff --git a/code/preprocess_data_TOD.py b/code/preprocess_data_TOD.py
new file mode 100644
index 0000000000000000000000000000000000000000..3cc559736858e39e73c37b053da1959580b33297
--- /dev/null
+++ b/code/preprocess_data_TOD.py
@@ -0,0 +1,3078 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+#!/usr/bin/env python3
+#
+import random
+import sys, os, pdb
+import json, math
+import shutil, errno
+from tqdm import tqdm
+import pandas as pd
+from collections import defaultdict
+from utils.domain_mapping import generate_prompt
+from utils.constant import *
+
+random.seed(42)
+
+class PreProcessData(object):
+ """docstring for PreProcessData"""
+ def __init__(self):
+ super(PreProcessData, self).__init__()
+ self.data_dir = "/path/to/where/the/raw/dataset/is"
+ self.save_dir = "/path/to/store/the/processed/dataset/" # e.g. ./data/processed/Task-Oriented
+
+
+ def _load_json(self, path=None):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = json.loads(df.read())
+ return data
+
+
+ def _load_txt(self, path=None, split_tok="\n"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = df.read().strip().split(split_tok)
+ return data
+
+
+ def _load_csv(self, path=None, sep="\t"):
+ if path is None or not os.path.exists(path):
+ raise IOError('File does not exist: %s' % path)
+ with open(path) as df:
+ data = pd.read_csv(df, sep=sep)
+ return data
+
+
+ def _load_dir_json(self, dir_path=None):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if filename in ["schema.json"]: continue
+ if not filename.endswith(".json"): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_json(path=file_path)
+ if type(data) == list:
+ for item in data:
+ item["filename"] = filename.split(".json")[0]
+ total_data.extend(data)
+ else: # assume is a dict
+ data["filename"] = filename.split(".json")[0]
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_txt(self, dir_path=None, file_type="txt"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = [] # assume data is a list of dialogs
+ for filename in sorted(os.listdir(dir_path)):
+ if not filename.endswith(file_type): continue
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_txt(path=file_path)
+ if type(data) == list:
+ total_data.extend(data)
+ else:
+ total_data.append(data)
+ return total_data
+
+
+ def _load_dir_tsv(self, dir_path=None, sep="\t"):
+ if dir_path is None or not os.path.exists(dir_path): return None
+ total_data = None
+ for filename in sorted(os.listdir(dir_path)):
+ file_path = os.path.join(dir_path, filename)
+ data = self._load_csv(path=file_path, sep=sep)
+ data["filename"] = filename.split(".tsv")[0]
+ total_data = pd.concat([total_data, data], ignore_index=True)
+ return total_data
+
+
+ def _save_json(self, data, path):
+ with open(path, "w") as tf:
+ json.dump(data, tf, indent=4)
+
+
+ def init_dial(self, dial_idx=0):
+ dial = {
+ ORI_DIAL_ID: "",
+ DIAL_IDX: dial_idx,
+ ORI_DIAL_INFO: {},
+ LOG: [],
+ EK_ORI: {
+ TOD_EK:{},
+ DST_EK:{},
+ INTENT_EK:{},
+ },
+ EK: "",
+ EK_DST: "",
+ EK_INTENT: "",
+ PROMPT: [],
+ }
+ return dial
+
+
+ def init_turn(self, turn_id=1, dial_hist=[]):
+ turn = {
+ TURN_ID: turn_id,
+ USR_UTT: "",
+ SYS_UTT: "",
+ DIAL_HIST: " ".join(dial_hist),
+ ORI_USR_ANN: {},
+ ORI_SYS_ANN: {},
+ DST: "",
+ DST_ACC: "",
+ }
+ return turn
+
+
+ def save_dial(self, data, data_name="", file_idx=0, mode="train"):
+ save_name = f"dialogues_{file_idx}.json"
+ folder_path = os.path.join(self.save_dir, data_name, mode)
+ if not os.path.exists(folder_path): os.makedirs(folder_path)
+ path = os.path.join(folder_path, save_name)
+ self._save_json(data, path)
+
+ # pdb.set_trace()
+
+
+ def copy_general(self, src, dst):
+ try:
+ shutil.copytree(src, dst, dirs_exist_ok=True)
+ except OSError as exc: # python >2.5
+ if exc.errno in (errno.ENOTDIR, errno.EINVAL):
+ shutil.copy(src, dst)
+ else: raise
+
+
+ def copy_related_files(self, data_name, exp_list=[], extra_dir=""):
+ source_dir = os.path.join(self.data_dir, data_name, extra_dir)
+ target_dir = os.path.join(self.save_dir, data_name)
+ for filename in os.listdir(source_dir):
+ source_path = os.path.join(source_dir, filename)
+ target_path = os.path.join(target_dir, filename)
+ if filename in exp_list: continue
+ self.copy_general(source_path, target_path)
+
+
+ def save_original_examples(self, examples, data_name):
+ """
+        save 5 original data points just for reference and checking
+        the data is a list of length 5, where each entry is a dialog
+        in the form of a dictionary
+ """
+ path = os.path.join(self.save_dir, data_name, "original_examples.json")
+ self._save_json(examples, path)
+ print("original examples saved")
+
+
+ def save_converted_examples(self, data_name):
+ """
+        extract the first 5 examples from the train set of the
+        already processed data, just for reference and checking
+ """
+ data = self._load_json(os.path.join(self.save_dir, data_name, "train/dialogues_1.json"))
+ examples = {key: data[key] for key in list(data.keys())[:5]}
+ self._save_json(examples, os.path.join(self.save_dir, data_name, "converted_examples.json"))
+ print("converted examples saved")
+
+
+ def filter_cand(self, cand_list, constraints):
+ """
+ pop up cands that satisfy constraints
+ cand_list = [
+ {
+ attribute1: ...,
+ attribute2: ...,
+ ...
+ },
+ ...
+ ]
+ constraints = [
+ {
+ attribute1: ...,
+ attributei: ...,
+ }
+ ]
+ constraints[i].keys() is a subset of cand_list[k].keys()
+ """
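+        # e.g. (hypothetical values) cand_list=[{"area": "west"}, {"area": "east"}] with
+        # constraints=[{"area": "west"}] returns ([{"area": "west"}], [{"area": "east"}])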
+ satisfy_results = []
+ for cand in cand_list:
+ for constraint in constraints:
+ flag = 1 # flag for marking whether constraint is satisfied
+ for key_ in constraint:
+ # if key_ == "category" and (key_ not in cand or key_ not in constraint):
+ # pdb.set_trace()
+ if key_ not in cand: continue
+ if cand[key_] != constraint[key_]:
+ flag = 0
+ break
+ if flag:
+ satisfy_results.append(cand)
+ break
+ for cand in satisfy_results:
+ cand_list.remove(cand)
+ return satisfy_results, cand_list
+
+
+ def kvret(self):
+ """
+ system or user side might have consecutive turns"""
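+        # When one side speaks several times in a row, the driver (user) utterances are
+        # de-duplicated and concatenated, while only the last assistant utterance is kept
+        # for that turn (see the loop below).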
+ data_name, exp_list = "KVRET", []
+        # slot types belonging to each domain
+        dom_slot = {
+            "schedule": {_key:[] for _key in ["event","time","date","party","room","agenda"]},
+ "weather": {_key:[] for _key in ["location","weekly_time","temperature","weather_attribute"]},
+ "navigate": {_key:[] for _key in ["traffic_info","poi_type","poi","distance"]},
+ }
+ schema = self._load_json(os.path.join(self.data_dir, data_name, "kvret_entities.json"))
+ for slot in schema:
+ for domain in dom_slot:
+ if slot in dom_slot[domain]:
+ dom_slot[domain][slot] = schema[slot]
+ for mode in ["train", "val", "test"]:
+ real_name = f"kvret_{mode}_public.json" if mode != "val" else "kvret_dev_public.json"
+ path = os.path.join(self.data_dir, data_name, real_name)
+ exp_list.append(real_name)
+
+ data = self._load_json(path)
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in enumerate(data):
+ domain = dial["scenario"]["task"]["intent"]
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial["scenario"]['uuid']
+ new_dial[ORI_DIAL_INFO] = {
+ "scenario" : dial["scenario"]
+ }
+ dial_hist, result_list, dst_dict = [], [], {}
+ usr_utts, sys_utts, turn_id = [], [], 2
+ new_turn = self.init_turn()
+ for idx, turn in enumerate(dial["dialogue"]):
+ utt = turn["data"]["utterance"]
+ if turn["turn"] == "driver":
+ if idx and dial["dialogue"][idx - 1]["turn"] == "assistant":
+ # wrap previous turn
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = sys_utts[-1] if sys_utts else " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=turn_id)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # # include user utterance into dialog history
+ # dial_hist.append(f" {utt}")
+
+ if utt in usr_utts: continue
+ usr_utts.append(utt)
+ # other annotation for user side
+ for key in turn["data"]:
+ if key == "utterance": continue
+ new_turn[ORI_USR_ANN][key] = turn["data"][key]
+
+ if turn["turn"] == "assistant":
+ # new_turn[SYS_UTT] = utt
+ if utt in sys_utts: continue
+ sys_utts.append(utt)
+ # include system response into dialog history
+ # dial_hist.append(f" {utt}")
+ # other annotation for system side
+ for key in turn["data"]:
+ if key == "utterance": continue
+ new_turn[ORI_SYS_ANN][key] = turn["data"][key]
+ # adding dst output
+ # if "slots" not in turn["data"]: continue # checked
+ new_turn[DST] = ", ".join([f"{domain} {slot} {value}" for slot, value in turn["data"]["slots"].items()])
+ # adding accumulated dst output
+ if domain not in dst_dict: dst_dict[domain] = {}
+ dst_dict[domain].update(turn["data"]["slots"])
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+                        # record entities the assistant actually offered so that they are kept in the EK below
+                        if "slots" in turn["data"] and "poi" in turn["data"]["slots"]:
+                            result_list.append(turn["data"]["slots"]["poi"])
+                        elif "slots" in turn["data"] and "event" in turn["data"]["slots"]:
+                            result_list.append(turn["data"]["slots"]["event"])
+
+ if usr_utts or sys_utts:
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = sys_utts[-1] if sys_utts else " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+
+
+ # adding metadata for TOD task
+ new_dial[EK_ORI][TOD_EK][domain] = []
+ if dial["scenario"]["kb"]["items"] is not None:
+                    cand_list = list(dial["scenario"]["kb"]["items"]) # copy, since offered entries are removed from cand_list while iterating the original list
+ for result in dial["scenario"]["kb"]["items"]:
+ if "poi" in result and result["poi"] in result_list:
+ new_dial[EK_ORI][TOD_EK][domain].append(result)
+ cand_list.remove(result)
+ elif "event" in result and result["event"] in result_list:
+ new_dial[EK_ORI][TOD_EK][domain].append(result)
+ cand_list.remove(result)
+ if len(dial["scenario"]["kb"]["items"]) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][domain].extend(cand_list)
+ else:
+ new_dial[EK_ORI][TOD_EK][domain].extend(random.choices(cand_list, k=(TOD_LENGTH-len(result_list))))
+ # adding ek for DST task
+ new_dial[EK_ORI][DST_EK] = dom_slot[domain]
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [domain]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {len(data)} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def woz(self):
+ # dialog ends on the user side
+ # first system response recorded in the second turn
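+        # Each raw entry pairs a user transcript with the system response to the *previous*
+        # user utterance, so a non-empty system_transcript first closes the preceding turn
+        # before a new turn is started (see the loop below).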
+ data_name, exp_list = "WOZ2_0", []
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+ del otgy["request"]
+
+ for mode in ["train", "val", "test"]:
+ real_name = f"{mode}_en.json" if mode != "val" else "valid_en.json"
+ path = os.path.join(self.data_dir, data_name, real_name)
+ exp_list.append(real_name)
+
+ data = self._load_json(path)
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in enumerate(data):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['dialogue_idx']
+ dial_hist, dst_dict = [], {}
+ new_turn = self.init_turn(turn_id=1)
+ for idx, turn in enumerate(dial["dialogue"]):
+ usr_utt, sys_utt = turn["transcript"], turn["system_transcript"]
+
+ if sys_utt:
+ new_turn[ORI_SYS_ANN]["system_acts"] = turn["system_acts"]
+ new_turn[SYS_UTT] = sys_utt
+ dial_hist.append(f" {sys_utt}")
+ new_dial[LOG].append(new_turn)
+ # reset new turn for next
+ new_turn = self.init_turn(turn_id=idx+1)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+
+ # dst output
+ # if "turn_label" not in turn: pdb.set_trace() # checked
+ slot_list = []
+ for slot in turn["turn_label"]:
+ if slot[0] == "request": continue
+ slot_type = "_".join(slot[0].split())
+ slot_list.append(f"restaurant {slot_type} {slot[1]}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+
+
+ new_turn[USR_UTT] = usr_utt
+ dial_hist.append(f" {usr_utt}")
+ for key in turn:
+ if key.startswith("system"): continue
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ # append the last turn with no system response
+ new_dial[LOG].append(new_turn)
+
+ # adding ek for DST task
+ new_dial[EK_ORI][DST_EK] = {"restaurant" : otgy}
+ for slot in new_dial[EK_ORI][DST_EK]["restaurant"]:
+ if len(new_dial[EK_ORI][DST_EK]["restaurant"][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK]["restaurant"][slot] = random.choices(otgy[slot], k=DST_LENGTH)
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = ["restaurant"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def sgd(self):
+ data_name, exp_list = "SGD", []
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "dev"
+ dir_path = os.path.join(self.data_dir, data_name, real_name)
+ exp_list.append(real_name)
+
+ data = self._load_dir_json(dir_path)
+ schema = self._load_json(os.path.join(self.data_dir, data_name, real_name, "schema.json"))
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in (enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['dialogue_id']
+ new_dial[ORI_DIAL_INFO]["services"] = dial["services"]
+
+ dial_hist, result_list, cand_list = [], {}, {}
+ for idx, turn in enumerate(dial["turns"]):
+ utt = turn["utterance"]
+ if turn["speaker"] == "USER":
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=idx//2+1)
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for user side
+ new_turn[ORI_USR_ANN]["frames"] = turn["frames"]
+ # add dst output
+ slot_list = []
+ for frame in turn["frames"]:
+ if not frame["slots"]: continue
+ for slot in frame["slots"]:
+ slot_list.append(frame["service"] +" "+ slot["slot"] +" "+ turn["utterance"][slot["start"]: slot["exclusive_end"]])
+ new_turn[DST] = DST_SPLIT.join(slot_list)
+ # add accu dst output
+ slot_list = []
+ for frame in turn["frames"]:
+ if not frame["state"]: continue
+ for slot_type, slot_values in frame["state"]["slot_values"].items():
+ slot_list.append(frame["service"]+" "+slot_type+" "+slot_values[0])
+ new_turn[DST_ACC] = DST_SPLIT.join(slot_list)
+ # dialog ends at user side
+ if idx == len(dial["turns"]) - 1:
+ new_dial[LOG].append(new_turn)
+
+ if turn["speaker"] == "SYSTEM":
+ new_turn[SYS_UTT] = utt
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for system side
+ new_turn[ORI_SYS_ANN]["frames"] = turn["frames"]
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+
+ for frame in turn["frames"]:
+ if "service_results" in frame:
+ domain = frame["service"]
+ # # # accumulate db results
+ if domain not in cand_list:
+ cand_list[domain] = []
+ cand_list[domain].extend(frame["service_results"])
+ # # # accumulate offered results
+ if domain not in result_list:
+ result_list[domain] = []
+ result_list[domain].append(frame["service_call"]["parameters"])
+ # adding EK for TOD
+ for domain in cand_list:
+ new_dial[EK_ORI][TOD_EK][domain] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(cand_list[domain], result_list[domain])
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][domain] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK][domain] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK][domain].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+ # adding EK for DST
+ for domain in dial["services"]:
+ new_dial[EK_ORI][DST_EK][domain] = {}
+ for service in schema:
+ if service["service_name"] != domain: continue
+ for slot in service["slots"]:
+ if not slot["possible_values"]: continue
+ new_dial[EK_ORI][DST_EK][domain][slot["name"]] = slot["possible_values"]
+ # adding EK for Intent
+ for domain in dial["services"]:
+ new_dial[EK_ORI][INTENT_EK][domain] = []
+ for service in schema:
+ if service["service_name"] != domain: continue
+ for intent in service["intents"]:
+ new_dial[EK_ORI][INTENT_EK][domain].append(intent["name"])
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [domain.lower().split("_")[0] for domain in dial["services"]]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {dial_idx+1} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ for mode in ["train", "dev", "test"]:
+ source_path = os.path.join(self.data_dir, data_name, mode, "schema.json")
+ target_dir = os.path.join(self.save_dir, data_name, mode)
+ shutil.copy(source_path, target_dir)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def bitod(self):
+ data_name, exp_list = "BiTOD", []
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+ for mode in ["train", "val", "test"]:
+ real_name = f"{mode}_en.json" if mode != "val" else "valid_en.json"
+ path = os.path.join(self.data_dir, data_name, real_name)
+ exp_list.append(real_name)
+
+ data = self._load_json(path)
+ new_data, file_idx, dial_idx = {}, 1, 1
+
+ for dial_id in data:
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial = self.init_dial(dial_idx=dial_idx) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["Scenario"] = data[dial_id]["Scenario"]
+ domains = []
+ for intent in data[dial_id]["Scenario"]["User_Goal"]:
+ domains.append(intent.split("_")[0])
+ domains = list(set(domains))
+ dial_hist, idx = [], 0
+ dst_dict = {}
+ for turn in data[dial_id]["Events"]:
+ if "Text" not in turn: continue
+ utt = turn["Text"]
+ if turn["Agent"] == "User":
+ idx += 1
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=idx)
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # adding dst output
+ # if "active_intent" not in turn: pdb.set_trace() #checked
+ domain = turn["active_intent"].split("_")[0]
+ if domain == "chat":
+ new_turn[DST] = ""
+ else:
+ slot_list = []
+ for act in turn["Actions"]:
+ if act["act"] != "inform": continue
+ slot_type = act["slot"]
+ slot_values = act["value"]
+ slot_list.append(f"{domain} {slot_type} {slot_values[0]}")
+ new_turn[DST] = ", ".join(slot_list)
+
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+ # adding intent prediction output
+ new_turn[INTENT] = turn["active_intent"]
+ # other annotation for user side
+ for key in turn:
+ if key == "Text": continue
+ new_turn[ORI_USR_ANN][key] = turn[key]
+
+ if turn["Agent"] == "Wizard":
+ new_turn[SYS_UTT] = utt
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for system side
+ for key in turn:
+ if key == "Text": continue
+ new_turn[ORI_SYS_ANN][key] = turn[key]
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+ # adding EK for Intent Prediction
+ new_dial[EK_ORI][INTENT_EK] = {}
+ for domain in domains:
+ if domain not in otgy:
+ pdb.set_trace()
+ if "intents" not in otgy[domain]:
+ pdb.set_trace()
+ new_dial[EK_ORI][INTENT_EK][domain] = otgy[domain]["intents"]
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {}
+ for domain in domains:
+ new_dial[EK_ORI][DST_EK][domain] = otgy[domain]["slots"]
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy[domain]["slots"][slot], k=DST_LENGTH)
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx) % 1000 == 0 or dial_idx == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+
+ if mode == "train": self.save_original_examples([data[key] for key in list(data.keys())[:5]], data_name)
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def metalwoz(self):
+ """
+ system side starts first
+ """
+ data_name, exp_list = "MetaLWOZ", []
+ for mode in ["train", "test"]:
+ if mode == "train":
+ real_name = "dialogues"
+ exp_list.append(real_name)
+ else:
+ real_name = "MetalWOZ-Test-v1/dstc8_metalwoz_heldout/dialogues"
+ exp_list.append("MetalWOZ-Test-v1")
+ dir_path = os.path.join(self.data_dir, data_name, real_name)
+
+ data = self._load_dir_txt(dir_path)
+ new_data = {}
+ file_idx = 1
+ for dial_idx, dial_str in enumerate(data):
+ dial = json.loads(dial_str)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['id']
+ for key in dial:
+ if key in ["turns"]: continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+
+ dial_hist, new_turn = [], self.init_turn(turn_id=1)
+ for idx, utt in enumerate(dial["turns"]):
+ if not idx: continue
+ if idx % 2 == 0:
+ # the first turn start from system
+ new_turn[SYS_UTT] = utt
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ else:
+ # a new turn (except the first) start from user
+ new_turn = self.init_turn(turn_id=(idx+1)//2)
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # dialog ends at user side
+ if idx == len(dial["turns"]) - 1:
+ new_dial[LOG].append(new_turn)
+
+ # adding prompt for each dialog
+ domains = [dial["domain"].lower()]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx == len(data)-1:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def star(self):
+ """
+ 1. No train/val/test split is available
+ 2. Agents in this dataset include "User", "UserGuide", "Wizard" and "KnowledgeBase"
+ """
+ data_name, exp_list = "STAR", []
+ for mode in ["train"]:
+ dir_path = os.path.join(self.data_dir, data_name, "dialogues")
+ exp_list.append("dialogues")
+ data = self._load_dir_json(dir_path)
+ data.sort(key=lambda x:x["DialogueID"])
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in enumerate(data):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['DialogueID']
+ for key in dial:
+ if key == "Events": continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+
+ dial_hist, turn_id = [], 1
+ for idx, turn in enumerate(dial["Events"]):
+ # ignore "userguide" and "knowledgebase"
+ if turn["Agent"] not in ["User", "Wizard"] or \
+ turn["Action"] not in ["utter", "pick_suggestion"]: continue
+ utt = turn["Text"]
+ if turn["Agent"] == "User":
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for user side
+ for key in turn:
+ if key == "Text": continue
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ # dialog ends at user side
+ if idx == len(dial["Events"]) - 1:
+ new_dial[LOG].append(new_turn)
+
+ if turn["Agent"] == "Wizard":
+ new_turn[SYS_UTT] = utt
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for system side
+ for key in turn:
+ if key == "Text": continue
+ new_turn[ORI_SYS_ANN][key] = turn[key]
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+
+ # adding prompt for each dialog
+ domains = dial["Scenario"]["Domains"]
+ if domains == [None]:
+ domains = [dial["Scenario"]["WizardCapabilities"][0]["Task"]]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def taskmaster1(self):
+ data_name = "Taskmaster1"
+ path = os.path.join(self.data_dir, data_name, "self-dialogs.json")
+ data = self._load_json(path)
+ # data.extend(self._load_json(os.path.join(self.data_dir, data_name, "woz-dialogs.json")))
+ exp_list = ["self-dialogs.json"]
+ split_id, new_data, file_idx, finish_flag, dial_idx = {}, {}, {}, {}, {}
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+ for mode in ["train", "val", "test"]:
+ real_name = f"{mode}.csv" if mode != "val" else "dev.csv"
+ idx_path = os.path.join(self.data_dir, data_name, "train-dev-test", real_name)
+ split_id[mode] = self._load_txt(idx_path, split_tok=",\n")
+ new_data[mode], file_idx[mode], finish_flag[mode], dial_idx[mode] = {}, 1, 0, 1
+
+ for dial in data:
+ new_dial = self.init_dial() # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['conversation_id']
+ new_dial[ORI_DIAL_INFO]["instruction_id"] = dial["instruction_id"]
+ dial_hist, dst_dict = [], {}
+ domain = dial["instruction_id"].split("-")[0]
+ usr_utts, sys_utts, turn_id = [], [], 2
+ new_turn = self.init_turn()
+ for idx, turn in enumerate(dial["utterances"]):
+ utt = turn["text"]
+ if turn["speaker"] == "USER": # user side
+ if idx and dial["utterances"][idx-1]["speaker"] == "ASSISTANT":
+ # wrap up the previous turn
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ if usr_utts and sys_utts:
+ new_dial[LOG].append(new_turn)
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ # initialize a new turn
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+ usr_utts.append(utt)
+ # dial_hist.append(f" {utt}")
+ new_turn[ORI_USR_ANN]['speaker'] = turn["speaker"]
+ slot_list = []
+ if "segments" in turn:
+ new_turn[ORI_USR_ANN]['segments'] = turn["segments"]
+ # add output for dst task (only accumulated dst provided)
+ for segment in turn["segments"]:
+ slot_value = segment["text"].replace(",","")
+ if len(segment["annotations"][0]["name"].split(".")) == 2:
+ slot_type = segment["annotations"][0]["name"].split(".")[1]
+ else:
+ slot1, dom = segment["annotations"][0]["name"].split(".")[1], segment["annotations"][0]["name"].split(".")[2]
+ if dom == domain:
+ slot_type = slot1
+ else:
+ slot_type = f"{dom}_{slot1}"
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+
+ else: # system side
+ if idx == 0 : continue
+ sys_utts.append(utt)
+ # new_turn[SYS_UTT] = utt
+ # dial_hist.append(f" {utt}")
+ new_turn[ORI_SYS_ANN]['speaker'] = turn["speaker"]
+ new_turn[ORI_SYS_ANN]['segments'] = []
+ if "segments" in turn:
+ new_turn[ORI_SYS_ANN]['segments'] = turn["segments"]
+ new_turn[EK] = self.dict_to_str(new_turn[ORI_SYS_ANN]["segments"])
+ new_turn[EK_ORI] = new_turn[ORI_SYS_ANN]["segments"]
+ if idx+1 == len(dial["utterances"]) and usr_utts and sys_utts:
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH//2)
+
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [domain]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+
+ mode = "train"
+ for mode_option in ["val", "test"]:
+ if dial["conversation_id"] in split_id[mode_option]:
+ mode = mode_option
+ new_dial_id = f"{data_name}--{mode}--{dial_idx[mode]}"
+ new_dial[DIAL_IDX] = dial_idx[mode]
+ dial_idx[mode] += 1
+ new_data[mode][new_dial_id] = new_dial
+ if not new_dial[LOG]:
+ pdb.set_trace()
+ if len(new_data[mode]) == 1000:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+ finish_flag[mode] = 1
+ else:
+ finish_flag[mode] = 0
+
+ # if there are some unsaved dialogs left, save it now
+ for mode in ["train", "val", "test"]:
+ if not finish_flag[mode]:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ print(f"finishing processing {dial_idx[mode]} dialogs for {mode} set ...")
+
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def taskmaster2(self):
+ """
+ user/system side utterances are separated into sentences
+ """
+ data_name = "Taskmaster2"
+ dir_path, exp_list = os.path.join(self.data_dir, data_name, "data"), ["data"]
+ data = self._load_dir_json(dir_path)
+ new_data, file_idx, mode = {}, 1, "train"
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+
+ for dial_idx, dial in enumerate(data):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['conversation_id']
+ # if new_dial[ORI_DIAL_ID] == "dlg-bcc6972e-13e0-4c70-b703-8197ebfb388b":
+ # pdb.set_trace()
+ new_dial[ORI_DIAL_INFO]["instruction_id"] = dial["instruction_id"]
+
+ domain = dial["instruction_id"].split("-")[0]
+ dial_hist, turn_id, usr_utt_list, sys_utt_list, dst_dict = [], 1, [], [], {}
+ for idx, turn in enumerate(dial["utterances"]):
+ if turn["speaker"] == "USER":
+ # finish previous turn
+ if sys_utt_list:
+ new_turn[SYS_UTT] = " ".join(sys_utt_list)
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn[EK_ORI] = new_turn[ORI_SYS_ANN]['segments'] if 'segments' in new_turn[ORI_SYS_ANN] else []
+ new_turn[EK] = self.dict_to_str(new_turn[EK_ORI])
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+ sys_utt_list = []
+ if not usr_utt_list:
+ # initialize a new turn for the following
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+
+ usr_utt_list.append(turn["text"])
+ new_turn[ORI_USR_ANN]['speaker'] = turn["speaker"]
+ slot_list = []
+ if "segments" in turn:
+ if "segments" not in new_turn[ORI_USR_ANN]:
+ new_turn[ORI_USR_ANN]['segments'] = []
+ new_turn[ORI_USR_ANN]['segments'].extend(turn["segments"])
+ # add output for dst task (only accumulated dst provided)
+ for segment in turn["segments"]:
+ slot_value = segment["text"].replace(",","")
+ if len(segment["annotations"][0]["name"].split(".")) == 2:
+ slot_type = segment["annotations"][0]["name"].split(".")[1]
+ else:
+ slot1, dom = segment["annotations"][0]["name"].split(".")[1], segment["annotations"][0]["name"].split(".")[2]
+ if dom == domain:
+ slot_type = slot1
+ else:
+ slot_type = f"{dom}_{slot1}"
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+
+ if turn["speaker"] == "ASSISTANT": # system side
+ # process previous user side utt
+ if usr_utt_list: # process only for the first system side turn
+ new_turn[USR_UTT] = " ".join(usr_utt_list)
+ dial_hist.append(" " + new_turn[USR_UTT])
+ usr_utt_list = []
+ if not dial_hist: # skip for the first turn
+ continue
+
+ # record system side info
+ sys_utt_list.append(turn["text"])
+ new_turn[ORI_SYS_ANN]["speaker"] = turn["speaker"]
+ if "segments" not in new_turn[ORI_SYS_ANN]:
+ new_turn[ORI_SYS_ANN]['segments'] = []
+ if "segments" in turn:
+ new_turn[ORI_SYS_ANN]['segments'].extend(turn["segments"])
+
+ if usr_utt_list:
+ new_turn[USR_UTT] = " ".join(usr_utt_list)
+ if sys_utt_list:
+ new_turn[SYS_UTT] = " ".join(sys_utt_list)
+ new_turn[EK_ORI] = new_turn[ORI_SYS_ANN]["segments"]
+ new_turn[EK] = self.dict_to_str(new_turn[EK_ORI])
+ new_dial[LOG].append(new_turn)
+
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH//2)
+
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [dial["filename"]]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def taskmaster3(self):
+ """
+ notes on the set split:
+ # some ids exist in more than one split set
+ # some ids do not exist in any split set
+ # almost all val dialog ids also appear in the train split set
+ # therefore we assign the val and test sets first and dump everything left into train
+ # since val and test contain only 3 / 1 unique dialogs respectively,
+ # we cap val and test at 2000 dialogs each
+ # and consider the rest of the dialogs as train data"""
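+ # split assignment below: ids that appear only in the val/test lists go there first; ids shared with
+ # train may still go to val/test until each reaches 2000 dialogs; everything else goes to train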
+ data_name = "Taskmaster3"
+ dir_path = os.path.join(self.data_dir, data_name, "data")
+ data = self._load_dir_json(dir_path)
+ exp_list = ["data", "splits"]
+ split_id, new_data, file_idx, finish_flag, dial_idx = {}, {}, {}, {}, {}
+ otgy = self._load_json(os.path.join(self.data_dir, data_name, "otgy.json"))
+
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "dev"
+ split_dir = os.path.join(self.data_dir, data_name, "splits", real_name)
+ split_file = self._load_dir_txt(dir_path=split_dir, file_type="tsv")
+ split_id[mode] = []
+ for line in split_file:
+ split_id[mode].append(line.split()[-1])
+ new_data[mode], file_idx[mode], finish_flag[mode], dial_idx[mode] = {}, 1, 0, 1
+
+ for dial in tqdm(data):
+ new_dial = self.init_dial() # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['conversation_id']
+ domain = "movie"
+ for key in ["vertical", "scenario", "instructions"]:
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+ dial_hist, dst_dict = [], {}
+ usr_utts, sys_utts, turn_id = [], [], 2
+ new_turn = self.init_turn()
+ for idx, turn in enumerate(dial["utterances"]):
+ utt = turn["text"]
+ if turn["speaker"] == "user":
+ if idx and dial["utterances"][idx-1]["speaker"] == "assistant":
+ # wrap up the previous turn
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ if usr_utts and sys_utts:
+ new_dial[LOG].append(new_turn)
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ # initialize a new turn
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+ usr_utts.append(utt)
+ for key in turn:
+ if key in ["text", "speaker", "index"]: continue
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ # dial_hist.append(f" {utt}")
+
+ slot_list = []
+ if "segments" in turn:
+ # add output for dst task (only accumulated dst provided)
+ for segment in turn["segments"]:
+ slot_value = segment["text"].replace(",","") # remove ",", because we use "," to separate slot triplet
+ if len(segment["annotations"][0]["name"].split(".")) == 1:
+ slot_type = segment["annotations"][0]["name"]
+ else:
+ slot1, dom = segment["annotations"][0]["name"].split(".")[0], segment["annotations"][0]["name"].split(".")[1]
+ if dom == domain:
+ slot_type = slot1
+ else:
+ slot_type = f"{dom}_{slot1}"
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+
+ else: # system side
+ if idx == 0 : continue
+ sys_utts.append(utt)
+ # new_turn[SYS_UTT] = utt
+ # dial_hist.append(f" {utt}")
+ new_turn[ORI_SYS_ANN]["segments"] = []
+ if "segments" in turn:
+ new_turn[ORI_SYS_ANN]["segments"] = turn["segments"]
+ new_turn[EK] = self.dict_to_str(new_turn[ORI_SYS_ANN]["segments"])
+ new_turn[EK_ORI] = new_turn[ORI_SYS_ANN]["segments"]
+
+ if idx+1 == len(dial["utterances"]) and usr_utts and sys_utts:
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH//2)
+
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = ["movie"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+
+ if dial["conversation_id"] in split_id["val"] and \
+ dial["conversation_id"] not in split_id["train"]:
+ mode = "val"
+ elif dial["conversation_id"] in split_id["test"] and \
+ dial["conversation_id"] not in split_id["train"]:
+ mode = "test"
+ elif dial["conversation_id"] in split_id["val"] and dial_idx["val"] < 2000:
+ mode = "val"
+ elif dial["conversation_id"] in split_id["test"] and dial_idx["test"] < 2000:
+ mode = "test"
+ else:
+ mode = "train"
+ new_dial_id = f"{data_name}--{mode}--{dial_idx[mode]}"
+ new_dial[DIAL_IDX] = dial_idx[mode]
+ dial_idx[mode] += 1
+ new_data[mode][new_dial_id] = new_dial
+ if len(new_data[mode]) == 1000:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+ finish_flag[mode] = 1
+ else:
+ finish_flag[mode] = 0
+
+
+ # if there are some unsaved dialogs left, save it now
+ for mode in ["train", "val", "test"]:
+ if not finish_flag[mode]:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+
+ print(f"finishing processing {dial_idx[mode]} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def simjoint(self):
+ """
+ the original turn format leads with the system side
+ """
+ for data_name in ["SimJointMovie", "SimJointRestaurant"]:
+ exp_list = []
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+ domain = "movie" if data_name == "SimJointMovie" else "restaurant"
+ for mode in ["train", "val", "test"]:
+ real_name = f"{mode}.json" if mode != "val" else "dev.json"
+ path = os.path.join(self.data_dir, data_name, real_name)
+ exp_list.append(real_name)
+
+ data = self._load_json(path)
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in enumerate(data):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial["dialogue_id"]
+ dial_hist, dst_dict = [], {}
+ for idx, turn in enumerate(dial["turns"]):
+ if "system_utterance" in turn:
+ new_turn[SYS_UTT] = turn["system_utterance"]["text"]
+ for key in ["system_acts", "system_utterance"]:
+ new_turn[ORI_SYS_ANN][key] = turn[key]
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_dial[LOG].append(new_turn)
+
+ if "user_utterance" in turn:
+ new_turn = self.init_turn(turn_id=idx+1)
+ new_turn[USR_UTT] = turn["user_utterance"]["text"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ for key in ["dialogue_state", "user_acts", "user_intents", "user_utterance"]:
+ if key in turn:
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ # adding dst output
+ slot_list = []
+ for slot in turn["user_utterance"]["slots"]:
+ slot_type = slot["slot"]
+ slot_value = " ".join(turn["user_utterance"]["tokens"][slot["start"]:slot["exclusive_end"]])
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+ # adding intent prediction output
+ # if len(turn["user_acts"]) > 1: pdb.set_trace() # checked: yes
+ intent_list = [intent["type"] for intent in turn["user_acts"]]
+ new_turn[INTENT] = ", ".join(intent_list)
+ dial_hist.append(" " + new_turn[USR_UTT])
+ if not new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+
+ # adding EK for Intent Prediction
+ new_dial[EK_ORI][INTENT_EK] = {domain:otgy["intents"]}
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][slot], k=DST_LENGTH)
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [domain]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {len(data)} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def simjointgen(self):
+ """
+ the original turn format leads with the system side,
+ but our converted format must end at the system side
+ """
+ data_name = "SimJointGEN"
+ exp_list = ["data"]
+ otgy = self._load_json(os.path.join(self.data_dir, data_name, "data/db.json"))
+ domain = "movie"
+ for mode in ["train", "val", "test"]:
+ real_name = f"{mode}.json" if mode != "val" else "dev.json"
+ path = os.path.join(self.data_dir, data_name, "data", real_name)
+
+ data = self._load_json(path)
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial["dialogue_id"]
+ dial_hist, prev_slot_list = [], []
+
+ # init the first turn, which would contain only system utt
+ new_turn = self.init_turn(turn_id=1)
+ for idx, turn in enumerate(dial["turns"]):
+ if "system_utterance" in turn: # turn ends at system side
+ new_turn[SYS_UTT] = turn["system_utterance"]
+ new_turn[ORI_SYS_ANN]["system_acts"] = turn["system_acts"]
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ dial_hist.append(" " + new_turn[USR_UTT])
+ new_dial[LOG].append(new_turn)
+
+ if "user_utterance" in turn: # turn starts at user side
+ new_turn = self.init_turn(turn_id=idx+2)
+ new_turn[USR_UTT] = turn["user_utterance"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ for key in ["dialogue_state", "database_state"]:
+ if key in turn:
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ # add output for accumulated dst task (only accumulated dst provided)
+ slot_list = []
+ for slot_type, slot_value in turn["dialogue_state"].items():
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST_ACC] = ", ".join(slot_list)
+ # add output for current turn dst task
+ current_slot_list = []
+ for slot_type, slot_value in turn["dialogue_state"].items():
+ slot = f"{domain} {slot_type} {slot_value}"
+ if slot in prev_slot_list: continue
+ current_slot_list.append(slot)
+ new_turn[DST] = ", ".join(current_slot_list)
+ prev_slot_list = slot_list
+
+ if not new_turn[SYS_UTT]:
+ new_dial[LOG].append(new_turn)
+
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy[slot], k=DST_LENGTH)
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = ["movie"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {len(data)} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ self.copy_general(os.path.join(self.data_dir, data_name, "data", "db.json"), os.path.join(self.save_dir, data_name, "db.json"))
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def muldogo(self):
+ """
+ raw data in ./data/unannotated/${domain}.tsv in format of:
+ conversationId,turnNumber,utteranceId,utterance,authorRole
+ acs-31971762-f14e-4d55-b909-0370f6e4db19-1,0,acs-2571cf40-4e39-46b6-940c-7b8cce559bae,HI GOOD MORNING,customer
+ split and annotation in ./data/paper_splits/splits_annotated_at_turn_level/${domain}/[train/dev/test].tsv in format of:
+ conversationId turnNumber utteranceId utterance slot-labels intent
+ 31971762-f14e-4d55-b909-0370f6e4db19 0 31971762-f14e-4d55-b909-0370f6e4db190 hi good morning O O O openinggreeting
+
+ 1. conversationId in the raw data has a prefix (acs-) and a suffix (-1/-2)
+ 2. user/system turns can be consecutive
+ 3. not all data have been annotated
+ 4. the sentence-level split differs from the turn-level split; we use the turn-level split for now
+ """
+ data_name = "MulDoGO"
+ dir_dial = os.path.join(self.data_dir, data_name, "data/unannotated")
+ dir_split = os.path.join(self.data_dir, data_name, "data/paper_splits/splits_annotated_at_turn_level")
+ data = self._load_dir_tsv(dir_dial, sep=",")
+ split_annotation, new_data, file_idx, finish_flag, dial_idx = {}, {}, {}, {}, {}
+ exp_list = ["data"]
+
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "dev"
+ split_annotation[mode] = None
+ for domain in sorted(os.listdir(dir_split)):
+ split_file = self._load_csv(os.path.join(dir_split, domain, f"{real_name}.tsv"))
+ split_file["domain"] = domain
+ split_annotation[mode] = pd.concat([split_annotation[mode], split_file], ignore_index=True)
+ new_data[mode], file_idx[mode], finish_flag[mode], dial_idx[mode] = {}, 1, 0, 1
+
+ new_dial = None
+ for idx, turn in tqdm(data.iterrows()):
+ if turn.conversationId.endswith("2"): continue # repeated conversation
+ dial_id = turn.conversationId[:-2]
+ if dial_id.startswith("acs-"): dial_id = dial_id[4:]
+ # init a new dial for the current and following turns
+ if new_dial is None:
+ annotate_flag = 0
+ for mode in ["val", "test", "train"]:
+ if dial_id in split_annotation[mode]['conversationId'].values:
+ annotate_flag = 1
+ index = split_annotation[mode][split_annotation[mode]['conversationId']==dial_id].index[0]
+ domain_ = split_annotation[mode]["domain"][index]
+ # pdb.set_trace()
+ break
+ new_dial_id = f"{data_name}--{mode}--{dial_idx[mode]}"
+ new_dial = self.init_dial(dial_idx=dial_idx[mode])
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["domain"] = domain_
+ turn_id, dial_hist = 1, []
+ new_turn = self.init_turn(turn_id=turn_id)
+
+ # continue extending the current dial
+ if turn.authorRole == "customer":
+ # adding utterances
+ new_turn[USR_UTT] += f" {turn.utterance}"
+ new_turn[USR_UTT] = new_turn[USR_UTT].strip()
+ # adding annotation for turn level
+ if annotate_flag:
+ utt_id = f"{dial_id}{turn.turnNumber}"
+ row = split_annotation[mode][split_annotation[mode]["utteranceId"]==utt_id]
+ # pdb.set_trace()
+ new_turn[ORI_USR_ANN]["slot-labels"] = row["slot-labels"].tolist()
+ new_turn[ORI_USR_ANN]["intent"] = row["intent"].values.tolist()
+
+ elif turn.authorRole == "agent":
+ # no annotation on system side
+ new_turn[SYS_UTT] += f" {turn.utterance}"
+ new_turn[SYS_UTT] = new_turn[SYS_UTT].strip()
+
+ # wrap up turn
+ if idx == len(data)-1 or data.authorRole[idx+1] != "agent":
+ new_dial[LOG].append(new_turn)
+ turn_id += 1
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+
+ # wrap up dial (add new dial to new data)
+ if idx == len(data)-1 or dial_id not in data.conversationId[idx+1]:
+ # adding prompt for each dialog
+ domains = [turn["filename"]]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ if new_dial[LOG]:
+ new_data[mode][new_dial_id] = new_dial
+ dial_idx[mode] += 1
+ new_dial = None
+
+ if len(new_data[mode]) == 1000:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+ finish_flag[mode] = 1
+ else:
+ finish_flag[mode] = 0
+
+ # if there are some unsaved dialogs left, save it now
+ for mode in ["train", "val", "test"]:
+ if not finish_flag[mode]:
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ print(f"finishing processing {dial_idx[mode]} dialogs for {mode} set ...")
+
+ self.save_original_examples(data[:6].to_string(index=False).split('\n'), data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def casino(self):
+ """
+ 1. operations like "Submit-Deal", "Accept-Deal" and "Reject-Deal" are included in the "chat_logs";
+ we move them to dialog-level annotation: new_dial[ORI_DIAL_INFO]["results"] = [turn1, turn2, ...]
+ 2. there is no user/system, only mturk_agent_1/2, and either one might start the dialog. Therefore, we consider
+ whoever starts the dialog as the user.
+ 3. no consecutive turns from the same side
+ 4. xxx-Deal events can happen in the middle of a dialog
+ """
+ data_name = "CaSiNo"
+ exp_list = ["data"]
+ dir_data = os.path.join(self.data_dir, data_name, "data/split")
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "valid"
+ data = self._load_json(os.path.join(dir_data, f"casino_{real_name}.json"))
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial["dialogue_id"]
+ new_dial[ORI_DIAL_INFO]["participant_info"] = dial["participant_info"]
+ new_dial[ORI_DIAL_INFO]["annotations"] = dial["annotations"]
+ new_dial[ORI_DIAL_INFO]["results"] = []
+ dial_hist = []
+ speaker_user = dial["chat_logs"][0]["id"]
+ new_turn = self.init_turn()
+ usr_utts, sys_utts, turn_id = [], [], 2
+
+ for idx, turn in enumerate(dial["chat_logs"]):
+ # skip those negotiation decision turns
+ if turn["text"].endswith("-Deal"):
+ new_dial[ORI_DIAL_INFO]["results"].append(turn)
+ continue
+ if turn["id"] == speaker_user:
+ if sys_utts:
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn = self.init_turn(turn_id=turn_id)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ turn_id += 1
+ usr_utts, sys_utts = [], []
+
+ # if not usr_utts:
+ # new_turn = self.init_turn(turn_id=turn_id, dial_hist=dial_hist)
+ usr_utts.append(turn["text"])
+ # new_turn[USR_UTT] = turn["text"]
+ new_turn[ORI_USR_ANN] = turn["task_data"]
+ new_turn[ORI_USR_ANN]["speakere"] = turn["id"]
+ else:
+ sys_utts.append(turn["text"])
+ new_turn[ORI_SYS_ANN] = turn["task_data"]
+ new_turn[ORI_SYS_ANN]["speaker"] = turn["id"]
+
+ if usr_utts or sys_utts:
+ new_turn[USR_UTT] = " ".join(usr_utts)
+ new_turn[SYS_UTT] = " ".join(sys_utts)
+ new_dial[LOG].append(new_turn)
+ usr_utts, sys_utts = [], []
+
+ # adding prompt for each dialog
+ domains = ["negotiate"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def airdialogue(self):
+ """
+ either the user or the system can end a dialog; we drop the last utterance when the user ends it.
+ the system side can also start a dialog; we ignore that opening utterance.
+ """
+ data_name = "AirDialogue"
+ exp_list = ["airdialogue"]
+ dir_path = os.path.join(self.data_dir, data_name, "airdialogue")
+ for mode in ["val", "train"]:
+ real_name = mode if mode != "val" else "dev"
+ data = self._load_txt(os.path.join(dir_path, f"{real_name}_data.json"))
+ database = self._load_txt(os.path.join(dir_path, f"{real_name}_kb.json"))
+ new_data, file_idx = {}, 1
+
+ for dial_idx, dial in tqdm(enumerate(data)):
+ dial = json.loads(dial)
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ for key in dial:
+ if key == "dialogue": continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+ dial_hist, turn_id = [], 1
+ for idx, turn in enumerate(dial["dialogue"]):
+ speaker, utt = turn.split(": ")[0], ": ".join(turn.split(": ")[1:])
+ if idx == 0 and speaker == "agent": continue
+ if speaker == "customer":
+ new_turn = self.init_turn(turn_id=turn_id, dial_hist=dial_hist)
+ new_turn[USR_UTT] = utt
+ elif speaker == "agent":
+ new_turn[SYS_UTT] = utt
+ new_dial[LOG].append(new_turn)
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ turn_id += 1
+
+ target_flight_num_list, cands = new_dial[ORI_DIAL_INFO]["action"]["flight"], []
+ ek = json.loads(database[dial_idx])
+ for flight in ek["kb"]:
+ if flight["flight_number"] in target_fligt_num_list:
+ cands.append(flight)
+ while len(cands) < TOD_LENGTH:
+ cand = random.choice(ek["kb"])
+ if cand not in cands:
+ cands.append(cand)
+ new_dial[EK_ORI][TOD_EK]["flight"] = cands
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = ["flight"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ self.copy_general(os.path.join(self.data_dir, data_name, "airdialogue", "train_kb.json"), os.path.join(self.save_dir, data_name, "train_kb.json"))
+ self.copy_general(os.path.join(self.data_dir, data_name, "airdialogue", "dev_kb.json"), os.path.join(self.save_dir, data_name, "val_kb.json"))
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def msdc(self):
+ """
+ 1. the raw data is not well-formed CSV; pandas raises:
+ pandas.errors.ParserError: Error tokenizing data. C error: Expected 10 fields in line 23317, saw 11
+ therefore we process it as a plain text file
+
+ 2. the agent may have consecutive utterances at the end of a dialog
+ """
+ data_name = "MS-DC"
+ mode, new_data, file_idx, new_dial, dial_idx = "train", {}, 1, None, 1
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+
+ for filename in os.listdir(os.path.join(self.data_dir, data_name)):
+ domain = filename.split("_")[0]
+ data = self._load_txt(os.path.join(self.data_dir, data_name, filename))[1:]
+
+ for idx, row in tqdm(enumerate(data)):
+ [dial_id, turn_id, timestamp, speaker, utt], act = row.strip().split("\t")[:5], row.strip().split("\t")[5:]
+ if new_dial is None:
+ # init dialog
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["domain"] = domain
+ dst_dict = {}
+ # init turn
+ turn_idx, prev_speaker, dial_hist = 1, None, []
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[ORI_USR_ANN]["act"] = []
+ new_turn[ORI_SYS_ANN]["act"] = []
+
+ # continue extending the current dial
+ if speaker == "user":
+ # adding utterances
+ new_turn[USR_UTT] += f" {utt}"
+ new_turn[USR_UTT] = new_turn[USR_UTT].strip()
+ new_turn[ORI_USR_ANN]["act"].extend(act)
+
+ elif speaker == "agent":
+ # no annotation on system side
+ new_turn[SYS_UTT] += f" {utt}"
+ new_turn[SYS_UTT] = new_turn[SYS_UTT].strip()
+ new_turn[ORI_SYS_ANN]["act"].extend(act)
+
+ # wrap up turn
+ if idx == len(data)-1 or data[idx+1].split("\t")[3] != "agent":
+ # adding output for dst task
+ slot_list = []
+ for act in new_turn[ORI_USR_ANN]["act"]:
+ if act.split("(")[0] not in ["inform", "request"]: continue
+ slots = act.split("(")[1][:-1].replace("?",";").replace("==","=").replace(",",";c").replace("||",";")
+ if slots.startswith("mc_list"): continue
+ for slot in slots.split(";"):
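+ # a handful of malformed slot strings in the raw data concatenate two slots into one; they are split by hand below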
+ if slot == "pickup_location_city=West Roxburystate=MA":
+ slot_list.append(f"{domain} pickup_location_city West Roxbury")
+ slot_list.append(f"{domain} state MA")
+ continue
+ if slot == "date=Apr 2ndstarttime=1pm":
+ slot_list.append(f"{domain} date Apr 2nd")
+ slot_list.append(f"{domain} starttime 1pm")
+ continue
+ if slot == "numberofpeople=2date=tomorrow night":
+ slot_list.append(f"{domain} numberofpeople 2")
+ slot_list.append(f"{domain} date tomorrow night")
+ continue
+ if slot == "city=Washington DCtheater=a regular":
+ slot_list.append(f"{domain} city Washington DC")
+ slot_list.append(f"{domain} theater a regular")
+ continue
+ if "=" in slot:
+ slot_type = slot.split("=")[0].strip()
+ slot_value = "=".join(slot.split("=")[1:])
+ slot_value = slot_value.replace("\\","").replace("{{","{").strip()
+ if not slot_value: continue
+ if slot_type in ["result","closing","greeting"]: continue
+ if slot_type in ["cstate", "ccity", "cdate", "cnumberofpeople", "cstarttime", "cpickup_location_city"]: slot_type = slot_type[1:]
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+ # adding output for intents task
+ new_turn[INTENT] = ", ".join([act.split("(")[0] for act in new_turn[ORI_USR_ANN]["act"]])
+ new_dial[LOG].append(new_turn)
+ turn_idx += 1
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[ORI_USR_ANN]["act"] = []
+ new_turn[ORI_SYS_ANN]["act"] = []
+
+ # wrap up dial (add new dial to new data)
+ if idx == len(data)-1 or dial_id != data[idx+1].split("\t")[0]:
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH)
+ # adding EK for Intent task
+ new_dial[EK_ORI][INTENT_EK] = {domain: otgy["intents"][domain]}
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+
+ # adding prompt for each dialog
+ domains = [domain]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ new_dial = None
+ dial_idx += 1
+
+ if (dial_idx-1) % 1000 == 0 or idx == len(data)-1:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_original_examples(data[:50], data_name)
+ self.save_converted_examples(data_name)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def abcd(self):
+ """
+ 1. consecutive turns exist, with repeated annotation
+ 2. conversation starts from agent. Therefore, no user utt is included in the first turn
+ "dialog history": " Hello. How can i help you today?",
+ """
+ data_name = "ABCD"
+ data = self._load_json(os.path.join(self.data_dir, data_name, "data/abcd_v1.1.json"))
+ new_data, file_idx, exp_list = {}, {}, ["abcd_v1.1.json"]
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+
+ for real_name, split_data in data.items():
+ mode = real_name if real_name != "dev" else "val"
+ new_data[mode] = {}
+ file_idx[mode] = 1
+
+ for dial_idx, dial in tqdm(enumerate(split_data)):
+ # init dialog
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1)
+ new_dial[ORI_DIAL_ID] = dial["convo_id"]
+ new_dial[ORI_DIAL_INFO] = dial["scenario"]
+ # init the first turn
+ turn_idx, prev_speaker, dial_hist = 1, None, []
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[ORI_USR_ANN]["delexed"], new_turn[ORI_SYS_ANN]["delexed"] = [], []
+ domain = dial["scenario"]["flow"]
+ dst_dict = {}
+ for idx, [speaker, utt] in enumerate(dial["original"]):
+ # continue extending the current dial
+ if speaker == "customer":
+ # adding utterances
+ new_turn[USR_UTT] += f" {utt}"
+ new_turn[USR_UTT] = new_turn[USR_UTT].strip()
+ new_turn[ORI_USR_ANN]["delexed"].append(dial["delexed"][idx])
+ slot_list = []
+ if "<" in dial["delexed"][idx]["text"]:
+ slot_val_list = self.compare_delex(utt, dial["delexed"][idx]["text"])
+ for [slot_value, slot_type] in slot_val_list:
+ if not slot_type.startswith("<") and not slot_type.endswith(">"): continue
+ slot_type = slot_type.split(">")[0].split("<")[-1]
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ new_turn[DST] = ", ".join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ new_turn[DST_ACC] = self.dst_dict_to_str(dst_dict)
+
+
+ if speaker == "agent":
+ # no annotation on system side
+ new_turn[SYS_UTT] += f" {utt}"
+ new_turn[SYS_UTT] = new_turn[SYS_UTT].strip()
+ new_turn[ORI_SYS_ANN]["delexed"].append(dial["delexed"][idx])
+
+ # wrap up turn
+ if idx == len(dial["original"])-1 or dial["original"][idx+1][0] != "agent":
+ new_dial[LOG].append(new_turn)
+ turn_idx += 1
+ if new_turn[USR_UTT]:
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ new_turn = self.init_turn(turn_id=turn_idx)
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[ORI_USR_ANN]["delexed"], new_turn[ORI_SYS_ANN]["delexed"] = [], []
+
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH)
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = [dial["scenario"]["flow"]]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[mode][new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(split_data):
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+
+ if mode == "train": self.save_original_examples(split_data[:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list, "data")
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def salesbot(self):
+ """
+ chitchat + TOD transition, no user/system annotation
+ no external knowledge (EK)
+ """
+ data_name = "SalesBot"
+ data = self._load_dir_json(os.path.join(self.data_dir, data_name, "data/dialogues"))
+ mode, new_data, file_idx, exp_list = "train", {}, 1, ["data"]
+ for dial_idx, dial in tqdm(enumerate(data)):
+ # init dialog
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1)
+ new_dial[ORI_DIAL_ID] = dial["id"]
+ new_dial[ORI_DIAL_INFO]["intent"] = dial["intent"]
+ new_dial[ORI_DIAL_INFO]["transition_candidates"] = dial["transition_candidates"]
+ dial_hist = []
+ for turn_idx, utt in enumerate(dial["dialog"]):
+ if turn_idx%2==0:
+ new_turn = self.init_turn(turn_id=turn_idx//2+1)
+ finish_turn_flag = 0
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = utt
+ else:
+ new_turn[SYS_UTT] = utt
+ new_dial[LOG].append(new_turn)
+ finish_turn_flag = 1
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+
+ if not finish_turn_flag:
+ new_dial[LOG].append(new_turn)
+ # adding prompt for each dialog
+ domains = dial["intent"]["type"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list, "data")
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def craigslist(self):
+ """
+ 1. the dialog acts of the last two turns can be "offer" and "accept/reject" ("message" in the usual case);
+ since the outcome is already included in dial["outcome"], we skip all turns whose action is not "message"
+ 2. no consecutive turns
+ 3. no user/system roles; we treat the agent who starts the conversation as the user (seller/buyer roles may swap across dialogs)
+ 4. action space: ["accept", "reject", "quit", "message", "offer"]
+ """
+
+ data_name = "CraigslistBargains"
+ exp_list = []
+ for mode in ["train", "val", "test"]:
+ data = self._load_json(os.path.join(self.data_dir, data_name, f"{mode}.json"))
+ new_data, file_idx = {}, 1
+ exp_list.append(f"{mode}.json")
+ dial_idx = 1
+
+ for dial in (data):
+ # init dialog
+ new_dial = self.init_dial(dial_idx=dial_idx)
+ new_dial[ORI_DIAL_ID] = dial["uuid"]
+ for key in dial:
+ if key in ["uuid", "events"]: continue
+ new_dial[ORI_DIAL_INFO][key] = dial[key]
+ dial_hist, turn_id = [], 1
+ for idx, turn in enumerate(dial["events"]):
+ if turn["action"] != "message": continue
+ turn["data"] = turn["data"].replace("\\","")
+ turn_id += 1
+ if turn_id%2==0:
+ new_turn = self.init_turn(turn_id=turn_id//2)
+ finish_turn_flag = 0
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ new_turn[USR_UTT] = turn["data"]
+ for key in turn:
+ if key == "data": continue
+ new_turn[ORI_USR_ANN][key] = turn[key]
+ else:
+ new_turn[SYS_UTT] = turn["data"]
+ for key in turn:
+ if key == "data": continue
+ new_turn[ORI_SYS_ANN][key] = turn[key]
+ new_dial[LOG].append(new_turn)
+ finish_turn_flag = 1
+ dial_hist.append(" " + new_turn[USR_UTT])
+ dial_hist.append(" " + new_turn[SYS_UTT])
+ if not finish_turn_flag and idx+1 == len(dial["events"]):
+ new_dial[LOG].append(new_turn)
+
+ # adding prompt for each dialog
+ domains = ["bargain"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ if new_dial[LOG]:
+ new_dial[DIAL_IDX] = dial_idx
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_data[new_dial_id] = new_dial
+ dial_idx += 1
+ if (dial_idx-1) % 1000 == 0 or dial_idx == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def frames(self):
+ data_name = "FRAMES"
+ otgy = self._load_json(os.path.join(self.save_dir, data_name, "otgy.json"))
+ exp_list = []
+ for mode in ['train', 'test']:
+ data = self._load_json(os.path.join(self.data_dir, data_name, f"{mode}_dials.json"))
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in tqdm(enumerate(data["dialogues"])):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1)
+ new_dial[ORI_DIAL_ID] = dial["dialogue_id"]
+ new_dial[ORI_DIAL_INFO]["scenario"] = dial["scenario"]
+ domain = dial["scenario"]["task"]
+
+ utterances = dial["utterances"]
+ # the same speaker can produce several consecutive utterances, so user/system messages are merged until the speaker changes
+ user_uttr = ""
+ sys_uttr = ""
+ dialog_history = ""
+ user_da_label = "" # dialog acts
+ sys_da_label = ""
+ sys_slots_values = {}
+ uttr_index = 0
+ turn_index = 1
+ mentioned_slots = {
+ "dst_city": set(),
+ "or_city": set(),
+ }
+ dst_dict = {}
+ while uttr_index < len(utterances):
+ while uttr_index < len(utterances) and utterances[uttr_index]["speaker"] == "USR":
+ user_uttr += " " + utterances[uttr_index]["text"]
+ user_uttr = user_uttr.strip()
+
+ # Keep the latest
+ user_da_label = utterances[uttr_index]["da_label"]
+
+ uttr_index += 1
+
+ while uttr_index < len(utterances) and utterances[uttr_index]["speaker"] == "SYS":
+ sys_uttr += " " + utterances[uttr_index]["text"]
+ sys_uttr = sys_uttr.strip()
+
+ # Keep the latest
+ sys_da_label = utterances[uttr_index]["da_label"]
+ sys_slots_values = utterances[uttr_index]["slots"]
+
+ uttr_index += 1
+
+ # converted "null", i.e., no dialog act labels, to ""
+ if sys_da_label == "null":
+ sys_da_label = ""
+
+ turn_log = {}
+ turn_log["turn id"] = turn_index
+ turn_log["user utterance"] = user_uttr
+ turn_log["system response"] = sys_uttr
+ turn_log["dialog history"] = dialog_history
+ turn_log["original user side information"] = {}
+ turn_log["original system side information"] = {}
+
+ turn_log["original user side information"]["da_label"] = user_da_label
+ turn_log["original system side information"]["da_label"] = sys_da_label
+ turn_log["original system side information"]["slots"] = sys_slots_values
+ # adding output for intent task
+ turn_log[INTENT] = user_da_label
+ # adding output for dst task
+ slot_list = []
+ for slot_type, slot_value in sys_slots_values.items():
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ turn_log[DST] = DST_SPLIT.join(slot_list)
+ # accumulate dst output
+ dst_dict = self.update_with_slot_list(dst_dict, slot_list)
+ turn_log[DST_ACC] = self.dst_dict_to_str(dst_dict)
+ new_dial['log'].append(turn_log)
+
+ dialog_history += " " + user_uttr + " " + sys_uttr
+ dialog_history = dialog_history.strip()
+
+ user_uttr = ""
+ sys_uttr = ""
+ user_da_label = ""
+ sys_da_label = ""
+ sys_slots_values = {}
+
+ turn_index += 1
+ if "dst_city" in sys_slots_values:
+ mentioned_slots["dst_city"].add(sys_slots_values["dst_city"])
+ if "or_city" in sys_slots_values:
+ mentioned_slots["or_city"].add(sys_slots_values["or_city"])
+
+ # adding EK for TOD task
+ if len(dial["scenario"]["items"]) <= TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK]["travel"] = dial["scenario"]["items"]
+ else:
+ # select the dialog-mentioned item first and then random select the rest
+ cands = []
+ for item in dial["scenario"]["items"]:
+ if item["trip"]["or_city"] in mentioned_slots["or_city"] and \
+ item["hotel"]["dst_city"] in mentioned_slots["dst_city"]:
+ cands.append(item)
+ while len(cands) < TOD_LENGTH:
+ cand = random.choice(dial["scenario"]["items"])
+ if cand not in cands:
+ cands.append(cand)
+ new_dial[EK_ORI][TOD_EK]["travel"] = cands
+ # adding EK for DST task
+ new_dial[EK_ORI][DST_EK] = {domain: otgy["slots"][domain]}
+ for slot in new_dial[EK_ORI][DST_EK][domain]:
+ if len(new_dial[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ new_dial[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["slots"][domain][slot], k=DST_LENGTH)
+ # adding EK for Intent task
+ new_dial[EK_ORI][INTENT_EK] = {domain: otgy["intents"][domain]}
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = ["trip"]
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+
+ new_data[new_dial_id] = new_dial
+
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data["dialogues"]):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data["dialogues"][:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def dstc2(self):
+ data_name = "DSTC2-Clean"
+ otgy = self._load_json(os.path.join(self.data_dir, data_name, "ontology_en.json"))
+ del otgy["informable"]["request"]
+ exp_list = []
+ domain = "restaurant"
+ for mode in ["train", "test", "val"]:
+ real_name = f"{mode}_en.json" if mode!="val" else "valid_en.json"
+ new_data, file_idx = {}, 1
+ f_text = self._load_json(os.path.join(self.data_dir, data_name, real_name))
+ for index, text in enumerate(f_text):
+ dialog = self.init_dial(dial_idx=index+1)
+ # dialog = defaultdict(list)
+ dialog[ORI_DIAL_ID] = ""
+ dialog[DIAL_IDX] = index + 1
+ dialog[ORI_DIAL_INFO] = defaultdict(list)
+ dialog_history = ""
+ turn_index = 1
+ new_dial_id = f"{data_name}--{mode}--{index+1}"
+
+ messages = text['dialogue']
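+ # each entry of the original DSTC2 file holds the preceding system_transcript and the user transcript of the same index,
+ # so turns are re-paired below: user transcript i is matched with system transcript i+1, and the first logged turn has an empty user side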
+ for uttr_index, utterance in enumerate(messages):
+ if uttr_index == 0:
+ sys_uttr = utterance["system_transcript"]
+
+ turn_log = self.save_info_to_dict(turn_index, "", sys_uttr, dialog_history)
+ turn_log[ORI_SYS_ANN]["system_acts"] = utterance["system_acts"]
+
+ dialog['log'].append(turn_log)
+
+ dialog_history = " " + sys_uttr
+ user_uttr = utterance["transcript"]
+ user_turn_label = utterance["turn_label"]
+ user_asr = utterance["asr"]
+
+ else:
+ sys_uttr = utterance["system_transcript"]
+
+ turn_log = self.save_info_to_dict(turn_index, user_uttr, sys_uttr, dialog_history)
+
+ turn_log[ORI_USR_ANN]["turn_label"] = user_turn_label
+ turn_log[ORI_USR_ANN]["asr"] = user_asr
+ turn_log[ORI_SYS_ANN]["system_acts"] = utterance["system_acts"]
+
+ # adding output for dst task
+ # if user_turn_label:
+ slot_list = []
+ for [slot_type, slot_value] in user_turn_label:
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ turn_log[DST] = ", ".join(slot_list)
+
+ dialog['log'].append(turn_log)
+
+ dialog_history += " " + user_uttr + " " + sys_uttr
+ user_uttr = utterance["transcript"]
+ user_turn_label = utterance["turn_label"]
+ user_asr = utterance["asr"]
+
+ if uttr_index + 1 == len(messages):
+ turn_log = self.save_info_to_dict(turn_index + 1, user_uttr, "", dialog_history)
+
+ turn_log[ORI_USR_ANN]["turn_label"] = user_turn_label
+ turn_log[ORI_USR_ANN]["asr"] = user_asr
+ turn_log[ORI_SYS_ANN]["system_acts"] = utterance["system_acts"]
+
+ # adding output for dst task
+ # if user_turn_label:
+ slot_list = []
+ for [slot_type, slot_value] in user_turn_label:
+ slot_list.append(f"{domain} {slot_type} {slot_value}")
+ turn_log[DST] = ", ".join(slot_list)
+ dialog['log'].append(turn_log)
+
+ turn_index += 1
+
+ # adding EK for DST task
+ dialog[EK_ORI][DST_EK] = {domain: otgy["informable"]}
+ for slot in dialog[EK_ORI][DST_EK][domain]:
+ if len(dialog[EK_ORI][DST_EK][domain][slot]) > 2*DST_LENGTH:
+ dialog[EK_ORI][DST_EK][domain][slot] = random.choices(otgy["informable"][slot], k=DST_LENGTH)
+
+ # turn the external knowledge into a flat string
+ dialog[EK] = self.dict_to_str(dialog[EK_ORI][TOD_EK])
+ dialog[EK_DST] = self.dict_to_str(dialog[EK_ORI][DST_EK])
+ dialog[EK_INTENT] = self.dict_to_str(dialog[EK_ORI][INTENT_EK])
+
+ # adding prompt for each dialog
+ domains = ["restaurant"]
+ dialog[PROMPT] = generate_prompt(data_name, domains)
+
+ new_data[new_dial_id] = dialog
+ # Save every 1000 dialogs to a file
+ if (index + 1) % 1000 == 0 or (index + 1) == len(f_text):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(f_text[:5], data_name)
+ print(f"finishing processing {index} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def multiwoz_hdsa(self):
+ data_name, exp_list = "HDSA-Dialog", []
+ for mode in ["train", "val", "test"]:
+ dir_path = os.path.join(self.data_dir, data_name, f"data/{mode}.json")
+ data = self._load_json(dir_path)
+ new_data = {}
+ file_idx = 1
+
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial["file"]
+
+ dial_hist, result_list, cand_list = [], {}, {}
+ for idx, turn in enumerate(dial["info"]):
+ new_turn = self.init_turn(turn_id=idx+1)
+ new_turn[USR_UTT] = turn["user"]
+ new_turn[SYS_UTT] = turn["sys"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ for key_ in turn.keys():
+ if key_ in ["user", "sys"]: continue
+ elif key_ in ["sys_orig", "source", "KB", "act"]:
+ new_turn[ORI_SYS_ANN][key_] = turn[key_]
+ else:
+ new_turn[ORI_USR_ANN][key_] = turn[key_]
+ new_dial[LOG].append(new_turn)
+
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ # self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def multiwoz22(self):
+ data_name, exp_list = "MULTIWOZ2_2", []
+ for mode in ["train", "val", "test"]:
+ real_name = mode if mode != "val" else "dev"
+ data = self._load_dir_json(os.path.join(self.data_dir, data_name, real_name))
+ data_21 = self._load_json(os.path.join(self.data_dir, "MULTIWOZ2_1", f"{mode}_dials.json"))
+ otgy = self.multiwoz_dst_otgy()
+ intents = self._load_json(os.path.join(self.data_dir, "MultiWOZ_2.1", "intents.json" ))
+ exp_list.append(real_name)
+ new_data = {}
+ file_idx = 1
+ turn_num = 0
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['dialogue_id']
+ new_dial[ORI_DIAL_INFO]["services"] = dial["services"]
+
+ dial_hist, prev_dst_set = [], set()
+ for idx, turn in enumerate(dial["turns"]):
+ utt = turn["utterance"]
+ if turn["speaker"] == "USER":
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=idx//2+1)
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for user side
+ new_turn[ORI_USR_ANN]["frames"] = turn["frames"]
+ # add dst output
+ slot_list = []
+ # # only for the current turn and non-categorical slots
+ # for frame in turn["frames"]:
+ # if not frame["slots"]: continue
+ # for slot in frame["slots"]:
+ # dom, slot_type = slot["slot"].split("-")
+ # value = slot["value"] if type(slot["value"]) == str else slot["value"][0]
+ # slot_list.append(f"{dom} {slot_type} {value}")
+ # used for accumulated slots
+ for frame in turn["frames"]:
+ if not frame["state"]["slot_values"]: continue
+ for slot, value in frame["state"]["slot_values"].items():
+ dom, slot_type = slot.split("-")
+ value = value[0] if type(value) == list else value
+ slot_list.append(f"{dom} {slot_type} {value}")
+ new_turn[DST_ACC] = DST_SPLIT.join(slot_list)
+ # compute the non-accumulated slots
+ new_turn[DST] = DST_SPLIT.join(list(set(slot_list).difference(prev_dst_set)))
+ prev_dst_set = set(slot_list)
+ # add intent output
+ intent_list = []
+ for frame in turn["frames"]:
+ if frame["state"]["active_intent"] != "NONE":
+ intent_list.append(frame["state"]["active_intent"])
+ new_turn[INTENT] = ", ".join(intent_list)
+
+ # dialog ends at user side
+ if idx == len(dial["turns"]) - 1:
+ new_dial[LOG].append(new_turn)
+
+ if turn["speaker"] == "SYSTEM":
+ new_turn[SYS_UTT] = utt
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+ turn_num += 1
+ goal = data_21[dial["dialogue_id"]]["goal"]
+ # get active domains
+ domains = []
+ for dom in MULTIWOZ_DOMAINS:
+ if goal[dom]: domains.append(dom)
+ # adding EK for TOD
+ for dom in ["restaurant", "hotel", "attraction", "train"]:
+ if not goal[dom]: continue
+ constraint = [goal[dom]["info"]]
+ db = self._load_json(os.path.join(self.data_dir, data_name, f"db/{dom}_db.json"))
+
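+ # keep every candidate if there are fewer than TOD_LENGTH in total;
+ # otherwise keep all goal-satisfying candidates and pad with randomly sampled unsatisfying ones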
+ new_dial[EK_ORI][TOD_EK][dom] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(db, constraint)
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK][dom].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+ # adding EK for DST
+ for dom in domains:
+ if dom not in otgy: continue
+ if dom not in new_dial[EK_ORI][DST_EK]: new_dial[EK_ORI][DST_EK][dom] = {}
+ for slot_type in otgy[dom]:
+ new_dial[EK_ORI][DST_EK][dom][slot_type] = random.choices(otgy[dom][slot_type], k=DST_LENGTH)
+ # adding EK for Intent
+ for dom in domains+["booking", "general"]:
+ if dom not in intents: continue
+ new_dial[EK_ORI][INTENT_EK][dom] = intents[dom]
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog
+ domains = dial["services"] # some dial["services"] are not annotated
+ new_dial[PROMPT] = generate_prompt(data_name, domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"Processing {mode} data with {dial_idx} dialogs i.e. {turn_num} turns ... " )
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def mudoco(self):
+ def save_info_to_dict(turn_index, user_uttr, sys_uttr, dialog_history):
+ turn_log = defaultdict(list)
+ turn_log["turn id"] = turn_index
+ turn_log["user utterance"] = user_uttr
+ turn_log["system response"] = sys_uttr
+
+ turn_log["dialog history"] = dialog_history
+ turn_log["original user side information"] = defaultdict(list)
+ turn_log["original system side information"] = defaultdict(list)
+ return turn_log
+
+ data_name = "MuDoCo"
+ exp_list = []
+ # combine dialog from each domain
+ data = defaultdict(list)
+ domains = ["calling", "messaging", "music", "news", "reminders", "weather"]
+ for domain in domains:
+ domain_data = self._load_json(os.path.join(self.data_dir, data_name, f"mudoco_{domain}.json"))
+ exp_list.append(f"mudoco_{domain}.json")
+ for dial_id in domain_data["dialogs"]:
+ domain_data["dialogs"][dial_id]["domain"] = domain
+ data.update(domain_data["dialogs"])
+ # split dialogs into train/val/test set
+ f_text = defaultdict(list)
+ for dialog_id in data:
+ mode = data[dialog_id]["split"]
+ if mode == "eval":
+ mode = "val"
+ f_text[mode].append([dialog_id, data[dialog_id]])
+
+ for mode in ["train", "val", "test"]:
+ # out_folder_path = os.path.join(folder_path + '_PROCESSED', out_dataset_name, attribute)
+ data = defaultdict(list)
+ file_idx = 1
+ for dial_idx, (dialog_id, text) in enumerate(f_text[mode]):
+ dialog = self.init_dial()
+ dialog[ORI_DIAL_ID] = dialog_id
+ dialog[DIAL_IDX] = dial_idx + 1
+ dialog[ORI_DIAL_INFO] = {
+ "split": text["split"],
+ "domain": text["domain"],
+ }
+
+ dialog_history = ""
+ turn_index = 1
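+ # MuDoCo turns are assumed to strictly alternate, with the user speaking at even indices and the system at odd indices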
+ for uttr_index, utterance in enumerate(text["turns"]):
+ if uttr_index %2 == 0:
+ user_uttr = utterance["utterance"]
+ user_name_entities = utterance["named_entities"]
+ user_references = utterance["references"]
+ user_links = utterance["links"]
+ elif uttr_index %2 == 1:
+ sys_uttr = utterance["utterance"]
+ sys_name_entities = utterance["named_entities"]
+ sys_references = utterance["references"]
+ sys_links = utterance["links"]
+
+ new_turn = self.init_turn(turn_id=turn_index, dial_hist=[dialog_history])
+ new_turn[USR_UTT] = user_uttr
+ new_turn[SYS_UTT] = sys_uttr
+ # turn_log = save_info_to_dict(turn_index, user_uttr, sys_uttr, dialog_history)
+
+ new_turn[ORI_USR_ANN]["name_entities"] = user_name_entities
+ new_turn[ORI_USR_ANN]["references"] = user_references
+ new_turn[ORI_USR_ANN]["links"] = user_links
+ new_turn[ORI_SYS_ANN]["name_entities"] = sys_name_entities
+ new_turn[ORI_SYS_ANN]["references"] = sys_references
+ new_turn[ORI_SYS_ANN]["links"] = sys_links
+ dialog['log'].append(new_turn)
+ dialog_history += " " + user_uttr + " " + sys_uttr
+ turn_index += 1
+
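+ # if the dialog ends on a user utterance, log one last turn with an empty system response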
+ if uttr_index %2 == 0 and (uttr_index + 1) == len(text["turns"]):
+ new_turn = self.init_turn(turn_id=turn_index, dial_hist=[dialog_history])
+ new_turn[USR_UTT] = user_uttr
+ new_turn[SYS_UTT] = ""
+ # turn_log = save_info_to_dict(turn_index, user_uttr, "", dialog_history)
+
+ new_turn[ORI_USR_ANN]["name_entities"] = user_name_entities
+ new_turn[ORI_USR_ANN]["references"] = user_references
+ new_turn[ORI_USR_ANN]["links"] = user_links
+
+ dialog['log'].append(new_turn)
+
+ dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ data[dial_id] = dialog
+
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(f_text[mode]):
+ self.save_dial(data, data_name=data_name, file_idx=file_idx, mode=mode)
+ data = defaultdict(list) # reset
+ file_idx += 1
+
+ if mode == "train": self.save_original_examples(data["dialogues"][:5], data_name)
+ print(f"finishing processing {dial_idx} dialogs for {mode} set ...")
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def ketod(self):
+ """
+ This dataset is build based on SGD, focusing on enrich system response with knowledge
+ therefore, we igore the DST and INTENT task, since the annotation would be exactly same as SGD
+ we replace utt with utt_enrich if it exists, otherwise keep the same
+ we add turn-level ek for enriched knowledge
+ the entity query usually expose the ground-truth item
+ so it might not be necessary to add noisy items"""
+ data_name = "KETOD"
+ exp_list = []
+ for mode in ["train", "val", "test"]:
+ real_name = "dev" if mode == "val" else mode
+ data = self._load_json(os.path.join(self.data_dir, data_name, f"{real_name}.json"))
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in (enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['dialogue_id']
+ # new_dial[ORI_DIAL_INFO]["services"] = dial["services"]
+ domains = []
+
+ dial_hist, result_list, cand_list = [], {}, {}
+ for idx, turn in enumerate(dial["turns"]):
+ utt = turn["utterance"]
+ if turn["speaker"] == "USER":
+ # new turn start from user
+ new_turn = self.init_turn(turn_id=idx//2+1)
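+ # unlike the other converters here, KETOD also keeps turn-level external knowledge for enriched system responses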
+ new_turn[EK_ORI] = {TOD_EK:{}}
+ new_turn[EK] = ""
+ new_turn[USR_UTT] = utt
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ # include user utterance into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for user side
+ new_turn[ORI_USR_ANN]["frames"] = turn["frames"]
+ new_turn[ORI_USR_ANN]["enrich"] = turn["frames"]
+ # dialog ends at user side
+ if idx == len(dial["turns"]) - 1:
+ new_dial[LOG].append(new_turn)
+ for frame in turn["frames"]:
+ if frame["service"] not in domains: domains.append(frame["service"])
+
+ if turn["speaker"] == "SYSTEM":
+ if turn["enrich"]:
+ new_turn[SYS_UTT] = turn["enriched_utter"]
+ new_turn[ORI_SYS_ANN]["original_utt"] = utt
+ new_turn[ORI_SYS_ANN]["entity_query"] = turn["entity_query"]
+ new_turn[ORI_SYS_ANN]["kg_snippets"] = turn["kg_snippets"]
+ new_turn[ORI_SYS_ANN]["kg_snippets_text"] = turn["kg_snippets_text"]
+ new_turn[EK_ORI][TOD_EK]["entity_query"] = turn["entity_query"]
+ new_turn[EK_ORI][TOD_EK]["kg_snippets_text"] = turn["kg_snippets_text"]
+ new_turn[EK] = self.dict_to_str(new_turn[EK_ORI][TOD_EK])
+ else:
+ new_turn[SYS_UTT] = utt
+ # include system response into dialog history
+ dial_hist.append(f" {utt}")
+ # other annotation for system side
+ new_turn[ORI_SYS_ANN]["frames"] = turn["frames"]
+ # turn must end at assistant side
+ new_dial[LOG].append(new_turn)
+
+ for frame in turn["frames"]:
+ if "service_results" in frame:
+ domain = frame["service"]
+ # # # accumulate db results
+ if domain not in cand_list:
+ cand_list[domain] = []
+ cand_list[domain].extend(frame["service_results"])
+ # # # accumulate offered results
+ if domain not in result_list:
+ result_list[domain] = []
+ result_list[domain].append(frame["service_call"]["parameters"])
+ # adding EK for TOD
+ for domain in cand_list:
+ new_dial[EK_ORI][TOD_EK][domain] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(cand_list[domain], result_list[domain])
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][domain] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK][domain] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK][domain].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ # adding prompt for each dialog
+ domains = [domain.lower().split("_")[0] for domain in domains]
+ new_dial[PROMPT] = generate_prompt("SGD", domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"Processing {mode} data with {dial_idx} dialogs ... " )
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def task2dial(self):
+ data_name = "Task2Dial"
+ mode = "train"
+ from datasets import load_dataset
+ data = load_dataset("cstrathe435/Task2Dial")
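+ # NOTE: conversion for Task2Dial is not implemented yet; the loop below only drops into the debugger to inspect the raw data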
+ for dial in data:
+ pdb.set_trace()
+ pass
+
+
+ def gecor(self):
+ """
+ constructed based on CamRest676
+ [{"dial":[{}, ...]},...]
+ also, since the annotations are the same as CamRest676,
+ we do not handle the DST or INTENT tasks"""
+ data_name = "GECOR"
+ exp_list = []
+ for filename in os.listdir(os.path.join(self.data_dir, data_name)):
+ if filename in ["LICENSE", "readme.txt"]: continue
+ exp_list.append(filename)
+
+ data = self._load_json(os.path.join(self.data_dir, data_name, "CamRest676_for_coreference_and_ellipsis_resolution/CamRest676_annotated.json"))
+ schema = self._load_json(os.path.join(self.data_dir, data_name, "CamRest676_for_coreference_and_ellipsis_resolution/CamRestOTGY.json"))
+ db = self._load_json(os.path.join(self.data_dir, data_name, "CamRest676_for_coreference_and_ellipsis_resolution/CamRestDB.json"))
+ mode = "train"
+ new_data, file_idx = {}, 1
+ for dial_idx, dial in tqdm(enumerate(data)):
+ new_dial_id = f"{data_name}--{mode}--{dial_idx+1}"
+ new_dial = self.init_dial(dial_idx=dial_idx+1) # idx starts from 1
+ new_dial[ORI_DIAL_ID] = dial['dialogue_id']
+ new_dial[ORI_DIAL_INFO]["finished"] = dial['finished']
+ new_dial[ORI_DIAL_INFO]["goal"] = dial['goal']
+ dial_hist = []
+ for turn in dial["dial"]:
+ new_turn = self.init_turn(turn_id=turn["turn"])
+ new_turn[USR_UTT] = turn["usr"]["transcript"]
+ new_turn[SYS_UTT] = turn["sys"]["sent"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ for key_ in turn["usr"]:
+ if key_ == "transcript": continue
+ new_turn[ORI_USR_ANN][key_] = turn["usr"][key_]
+ new_turn[ORI_SYS_ANN]["DA"] = turn["sys"]["DA"]
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ new_dial[LOG].append(new_turn)
+
+ # adding EK for TOD
+ constraint = [{cons[0]:cons[1] for cons in dial['goal']["constraints"]}]
+ new_dial[EK_ORI][TOD_EK] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(db, constraint)
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ # adding prompt for each dialog; since CamRest676 only covers the restaurant domain, we reuse the MULTIWOZ2_2 restaurant prompt
+ domains = ["restaurant"]
+ new_dial[PROMPT] = generate_prompt("MULTIWOZ2_2", domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx+1) % 1000 == 0 or dial_idx+1 == len(data):
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+
+ print(f"Processing {mode} data with {dial_idx+1} dialogs ... " )
+ if mode == "train": self.save_original_examples(data[:5], data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def disamb(self):
+ """
+ a variant of MultiWOZ 2.2, though stored in the MultiWOZ 2.1 format"""
+ data_name = "Disambiguation"
+ exp_list = []
+ for filename in os.listdir(os.path.join(self.data_dir, data_name)):
+ if not filename.startswith("data_aug"):continue
+ exp_list.append(filename)
+ otgy = self.multiwoz_dst_otgy()
+ intents = self._load_json(os.path.join(self.data_dir, "MultiWOZ_2.1", "intents.json" ))
+ for mode in ["train", "val", "test"]:
+ data = self._load_json(os.path.join(self.data_dir, data_name, f"data_aug_{mode}.json"))
+ new_data, dial_idx, file_idx = {}, 1, 1
+ for dial_id, dial in tqdm(data.items()):
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx}"
+ new_dial = self.init_dial(dial_idx=dial_idx) # idx starts from 1, set this when checking its source
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["goal"] = dial['goal']
+ dial_hist = []
+ # """
+ # note: these five dialogs do not contain any annotation
+ # for user side, including span_info or dialog acts
+ # therefore, we exclude these five dialogs since slot-->ek-->utt
+ # """
+ if dial_id in ["pmul4707.json", "pmul2245.json", "pmul4776.json", "pmul3872.json", "pmul4859.json"]: continue
+ for turn_num in range(math.ceil(len(dial["log"]) / 2)):
+ # # # turn number
+ usr_turn = dial["log"][turn_num*2]
+ sys_turn = dial["log"][turn_num*2+1]
+
+ new_turn = self.init_turn(turn_id=turn_num+1)
+ new_turn[USR_UTT] = usr_turn["text"]
+ new_turn[SYS_UTT] = sys_turn["text"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ for key_ in ["metadata", "dialog_act", "span_info"]:
+ # other annotation for user side
+ if key_ in usr_turn:
+ new_turn[ORI_USR_ANN][key_] = usr_turn[key_]
+ # other annotation for system side
+ if key_ in sys_turn:
+ new_turn[ORI_SYS_ANN][key_] = sys_turn[key_]
+
+ # used for accumulated slots, extracted based on "metadata", only in system side (turn_num * 2 + 1)
+ slot_list_acc = []
+ for dom, slot in sys_turn["metadata"].items():
+ for slot_type, slot_val in slot["book"].items():
+ if not slot_val or slot_type == "booked" or slot_val == "not mentioned": continue
+ slot_list_acc.append(f"{dom} {slot_type} {slot_val[0]}")
+ for slot_type, slot_val in slot["semi"].items():
+ if not slot_val or slot_val == "not mentioned": continue
+ slot_list_acc.append(f"{dom} {slot_type} {slot_val[0]}")
+ new_turn[DST_ACC] = DST_SPLIT.join(slot_list_acc).lower()
+ # compute the non-accumulated slots
+ slot_list = []
+ for act in usr_turn["dialog_act"]:
+ if not act.endswith("inform"): continue
+ for slot_type, slot_val in usr_turn["dialog_act"][act]:
+ dom = act.split("-")[0]
+ slot_list.append(f"{dom} {slot_type} {slot_val}")
+ new_turn[DST] = DST_SPLIT.join(slot_list).lower()
+ # add intent output
+ new_turn[INTENT] = ", ".join(list(usr_turn["dialog_act"].keys())).lower()
+ new_dial[LOG].append(new_turn)
+
+ # get active domains
+ domains = []
+ for dom in MULTIWOZ_DOMAINS:
+ if dial["goal"][dom]: domains.append(dom)
+ # adding EK for TOD
+ goal = dial['goal']
+ for dom in ["restaurant", "hotel", "attraction", "train"]:
+ if not goal[dom]: continue
+ constraint = [goal[dom]["info"]]
+ db = self._load_json(os.path.join(self.data_dir, "MultiWOZ_2.1", f"{dom}_db.json"))
+
+ new_dial[EK_ORI][TOD_EK][dom] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(db, constraint)
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK][dom].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+ # adding EK for DST
+ for dom in domains:
+ if dom not in otgy: continue
+ if dom not in new_dial[EK_ORI][DST_EK]: new_dial[EK_ORI][DST_EK][dom] = {}
+ for slot_type in otgy[dom]:
+ new_dial[EK_ORI][DST_EK][dom][slot_type] = random.choices(otgy[dom][slot_type], k=DST_LENGTH)
+ # adding EK for Intent
+ for dom in domains+["booking", "general"]:
+ if dom not in intents: continue
+ new_dial[EK_ORI][INTENT_EK][dom] = intents[dom]
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+
+ # adding prompt for each dialog, based on the active MultiWOZ domains collected above
+ new_dial[PROMPT] = generate_prompt("MULTIWOZ2_2", domains)
+ # finish and wrap the current dialog
+ new_data[new_dial_id] = new_dial
+ if (dial_idx) % 1000 == 0:
+ self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ new_data = {} # reset
+ file_idx += 1
+ dial_idx += 1
+
+ if len(new_data) > 0: self.save_dial(new_data, data_name=data_name, file_idx=file_idx, mode=mode)
+ print(f"Processing {mode} data with {dial_idx-1} dialogs ... " )
+ if mode=="train":self.save_original_examples({k:data[k] for k in list(data.keys())[:5]}, data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+
+ def multiwoz21(self):
+ data_name, exp_list = "MultiWOZ_2.1", ["data.json"]
+ MULTIWOZ_DOMAINS = ["taxi", "police", "hospital", "hotel","attraction","train","restaurant"]
+ data = self._load_json(os.path.join(self.data_dir, data_name, "data.json"))
+ val_list = self._load_txt(os.path.join(self.data_dir, data_name, "valListFile.txt"))
+ test_list = self._load_txt(os.path.join(self.data_dir, data_name, "testListFile.txt"))
+ otgy = self.multiwoz_dst_otgy()
+ intents = self._load_json(os.path.join(self.data_dir, "MultiWOZ_2.1", "intents.json" ))
+ new_data = {"train":{}, "val":{}, "test":{}}
+ dial_idx = {"train":1, "val":1, "test":1}
+ file_idx = {"train":1, "val":1, "test":1}
+ for dial_id, dial in tqdm(data.items()):
+ if dial_id in test_list:
+ mode = "test"
+ elif dial_id in val_list:
+ mode = "val"
+ else:
+ mode = "train"
+
+ new_dial_id = f"{data_name}--{mode}--{dial_idx[mode]}"
+ new_dial = self.init_dial(dial_idx=dial_idx[mode]) # idx starts from 1, set this when checking its source
+ new_dial[ORI_DIAL_ID] = dial_id
+ new_dial[ORI_DIAL_INFO]["goal"] = dial['goal']
+ dial_hist = []
+ # """
+ # note: these five dialogs do not contain any annotation
+ # for user side, including span_info or dialog acts
+ # therefore, we exclude these five dialogs since slot-->ek-->utt
+ # """
+ if dial_id in ["PMUL4707.json", "PMUL2245.json", "PMUL4776.json",
+ "PMUL3872.json", "PMUL4859.json"]:
+ continue
+ for turn_num in range(math.ceil(len(dial["log"]) / 2)):
+ # # # turn number
+ usr_turn = dial["log"][turn_num*2]
+ sys_turn = dial["log"][turn_num*2+1]
+
+ new_turn = self.init_turn(turn_id=turn_num+1)
+ new_turn[USR_UTT] = usr_turn["text"]
+ new_turn[SYS_UTT] = sys_turn["text"]
+ new_turn[DIAL_HIST] = " ".join(dial_hist)
+ dial_hist.append(f" {new_turn[USR_UTT]}")
+ dial_hist.append(f" {new_turn[SYS_UTT]}")
+ for key_ in ["metadata", "dialog_act", "span_info"]:
+ # other annotation for user side
+ if key_ in usr_turn:
+ new_turn[ORI_USR_ANN][key_] = usr_turn[key_]
+ # other annotation for system side
+ if key_ in sys_turn:
+ new_turn[ORI_SYS_ANN][key_] = sys_turn[key_]
+
+ # used for accumulated slots, extracted based on "metadata", only in system side (turn_num * 2 + 1)
+ slot_list_acc = []
+ for dom, slot in sys_turn["metadata"].items():
+ for slot_type, slot_val in slot["book"].items():
+ if not slot_val or slot_type == "booked" or slot_val == "not mentioned": continue
+ slot_list_acc.append(f"{dom} {slot_type} {slot_val}")
+
+ for slot_type, slot_val in slot["semi"].items():
+ if not slot_val or slot_val == "not mentioned": continue
+ slot_list_acc.append(f"{dom} {slot_type} {slot_val}")
+ new_turn[DST_ACC] = DST_SPLIT.join(slot_list_acc).lower()
+ # compute the non-accumulated slots
+ slot_list = []
+ for act in usr_turn["dialog_act"]:
+ if not act.endswith("Inform"): continue
+ for slot_type, slot_val in usr_turn["dialog_act"][act]:
+ dom = act.split("-")[0]
+ slot_list.append(f"{dom} {slot_type} {slot_val}")
+ new_turn[DST] = DST_SPLIT.join(slot_list).lower()
+ # add intent output
+ new_turn[INTENT] = ", ".join(list(usr_turn["dialog_act"].keys())).lower()
+
+ new_dial[LOG].append(new_turn)
+
+ # get active domains
+ domains = []
+ for dom in MULTIWOZ_DOMAINS:
+ if dial["goal"][dom]: domains.append(dom)
+ # adding EK for TOD
+ for dom in ["restaurant", "hotel", "attraction", "train"]:
+ if not dial["goal"][dom]: continue
+ constraint = [dial['goal'][dom]["info"]]
+ db = self._load_json(os.path.join(self.data_dir, data_name, f"{dom}_db.json"))
+ new_dial[EK_ORI][TOD_EK][dom] = []
+ satisfied_cand, unsatisfied_cand = self.filter_cand(db, constraint)
+ if len(satisfied_cand)+len(unsatisfied_cand) < TOD_LENGTH:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand + unsatisfied_cand
+ else:
+ new_dial[EK_ORI][TOD_EK][dom] = satisfied_cand
+ new_dial[EK_ORI][TOD_EK][dom].extend(random.choices(unsatisfied_cand, k=(TOD_LENGTH-len(satisfied_cand))))
+ # adding EK for DST
+ for dom in domains:
+ if dom not in otgy: continue
+ if dom not in new_dial[EK_ORI][DST_EK]: new_dial[EK_ORI][DST_EK][dom] = {}
+ for slot_type in otgy[dom]:
+ new_dial[EK_ORI][DST_EK][dom][slot_type] = random.choices(otgy[dom][slot_type], k=DST_LENGTH)
+ # adding EK for Intent
+ for dom in domains+["booking", "general"]:
+ if dom not in intents: continue
+ new_dial[EK_ORI][INTENT_EK][dom] = intents[dom]
+ # turn the external knowledge into a flat string
+ new_dial[EK] = self.dict_to_str(new_dial[EK_ORI][TOD_EK])
+ new_dial[EK_DST] = self.dict_to_str(new_dial[EK_ORI][DST_EK])
+ new_dial[EK_INTENT] = self.dict_to_str(new_dial[EK_ORI][INTENT_EK])
+ # adding prompt for each dialog, based on the active MultiWOZ domains
+ new_dial[PROMPT] = generate_prompt("MULTIWOZ2_2", domains)
+ # finish and wrap the current dialog
+ new_data[mode][new_dial_id] = new_dial
+ if (dial_idx[mode]) % 1000 == 0:
+ # pdb.set_trace()
+ self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ new_data[mode] = {} # reset
+ file_idx[mode] += 1
+ if file_idx[mode]>100: pdb.set_trace()
+ dial_idx[mode] += 1
+ for mode in ["train", "test", "val"]:
+ if len(new_data[mode]) > 0: self.save_dial(new_data[mode], data_name=data_name, file_idx=file_idx[mode], mode=mode)
+ print(f"Processing {mode} data with {dial_idx[mode]-1} dialogs ... " )
+ self.save_original_examples({k:data[k] for k in list(data.keys())[:5]}, data_name)
+ self.save_converted_examples(data_name)
+ self.copy_related_files(data_name, exp_list)
+ print("*"*10, f"finishing processing dataset {data_name}", "*"*10)
+
+ def multiwoz21_trade(self):
+ pass
+
+
+ def run_all(self):
+ # self.kvret() # 800 tod+dst
+ # self.woz() # dst
+ # self.sgd() # 16k tod+dst+intent
+ # self.bitod()
+ # self.metalwoz()
+ # self.star()
+ # self.taskmaster1() # 643 dst only
+ # self.taskmaster2() # 981 dst only
+ self.taskmaster3() # 1167 dst only
+ # self.simjoint() # dst
+ # self.simjointgen() # dst
+ # self.muldogo()
+ # self.casino()
+ # self.airdialogue() # 1595
+ # self.msdc() # 759 dst+intent
+ # self.abcd() # 8034 dst
+ # self.salesbot()
+ # self.craigslist()
+ # self.frames() # 2000 tod+dst+intent
+ # self.dstc2()
+ # self.multiwoz_hdsa()
+ # self.multiwoz22()
+ # self.mudoco()
+ # self.ketod()
+ # self.task2dial()
+ # self.gecor()
+ # self.disamb()
+ # self.multiwoz21()
+ pass
+
+ def multiwoz_dst_otgy(self):
+ """
+ transfer the ontology file in multiwoz from format:
+ {domain-semi/book-slot_type: [slot_value, ...], ...}
+ into
+ {domain: {slot_type: [slot_value, ...], ...}}"""
+ otgy_ori = self._load_json(os.path.join(self.data_dir, "MultiWOZ_2.1/ontology.json"))
+ otgy = {}
+ for dom_slot in otgy_ori:
+ dom, _, slot_type = dom_slot.split("-")
+ if dom not in otgy: otgy[dom] = {}
+ otgy[dom][slot_type] = otgy_ori[dom_slot]
+ return otgy
+
+ def save_info_to_dict(self, turn_index, user_uttr, sys_uttr, dialog_history):
+ turn_log = self.init_turn(turn_id=turn_index, dial_hist=dialog_history)
+ turn_log[TURN_ID] = turn_index
+ turn_log[USR_UTT] = user_uttr
+ turn_log[SYS_UTT] = sys_uttr
+ turn_log[DIAL_HIST] = dialog_history
+ return turn_log
+
+
+ def compare_delex(self, utt_ori, utt_delex):
+ """
+ original: Yes my order id is 4870952797
+ delexicalized: yes my order id is
+ assuming each delexicalized slot token spans exactly one token
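+ walks both token lists with two pointers and returns [original_span, delex_token] pairs
+ for the positions where the two utterances diverge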
+ """
+ utt_ori = utt_ori.lower().replace(",", " , ").replace(". ", " . ").replace(":", " : ").replace(" ", " ").replace(" | ", " and ").split()
+ utt_delex = utt_delex.replace(",", " , ").replace(". ", " . ").replace(":", " : ").replace(" ", " ").replace(" | ", " and ").split()
+ utt_ori = [slot_value.strip(",.$?\"\\()") for slot_value in utt_ori]
+ utt_delex = [slot_type.strip(",.$?\"\\0()") for slot_type in utt_delex]
+ pointer_ori_start, pointer_ori_end, pointer_delex = 0, 0, 0
+ result = []
+ while pointer_ori_start < len(utt_ori):
+ if utt_ori[pointer_ori_start] == utt_delex[pointer_delex]: # not slot, continue
+ pointer_ori_start += 1
+ pointer_delex += 1
+ elif pointer_delex == len(utt_delex) - 1: # the last token
+ result.append([" ".join(utt_ori[pointer_ori_start:]), utt_delex[pointer_delex]])
+ break
+ else:
+ for pointer_delex_end in range(pointer_delex+1,len(utt_delex)):
+ if not (utt_delex[pointer_delex_end].startswith("<") or utt_delex[pointer_delex_end].endswith(">")):
+ break
+ flag = 0
+ for pointer_ori_end in range(pointer_ori_start+1, len(utt_ori)):
+ if utt_ori[pointer_ori_end] == utt_delex[pointer_delex_end]:
+ result.extend(self.match(utt_ori[pointer_ori_start:pointer_ori_end], utt_delex[pointer_delex:pointer_delex_end]))
+ pointer_ori_start = pointer_ori_end + 1
+ pointer_delex += 2
+ flag = 1
+ break
+ if not flag:
+ result.extend(self.match(utt_ori[pointer_ori_start:], utt_delex[pointer_delex:]))
+ break
+ return result
+
+
+ def match(self, list_ori, list_delex):
+ if len(list_delex) == 0:
+ return []
+ elif len(list_delex) == 1: # only one slot
+ return [[" ".join(list_ori), " ".join(list_delex)]]
+ elif len(list_ori) == len(list_delex): # slot has only length 1
+ return [[list_ori[i], list_delex[i]] for i in range(len(list_delex))]
+ elif len(list_ori) < len(list_delex): # something wrong with annotation
+ return []
+ else: # multiple, length-variant slots
+ if "" in list_delex:
+ i_delex = list_delex.index("")
+ for i_ori, value in enumerate(list_ori):
+ if "@" in value:
+ break
+ result = self.match(list_ori[:i_ori], list_delex[:i_delex])
+ result.append([value, ""])
+ # pdb.set_trace()
+ if i_ori < len(list_ori) - 1:
+ result.extend(self.match(list_ori[i_ori+1:], list_delex[i_delex+1:]))
+ return result
+ elif list_delex[0] in ["", "", "", ""]:
+ result = [[list_ori[0], list_delex[0]]]
+ result.extend(self.match(list_ori[1:], list_delex[1:]))
+ return result
+ elif list_delex[-1] in ["", "", "", ""]:
+ result = [[list_ori[-1], list_delex[-1]]]
+ result.extend(self.match(list_ori[:-1], list_delex[:-1]))
+ return result
+ else:
+ # return [[" ".join(list_ori), " ".join(list_delex)]]
+ return []
+
+
+ def copy_example(self):
+ source_dir = self.save_dir
+ target_dir = "/home/qkun/projs/TOD-Project/Datasets/Task-Oriented_PROCESSED/"
+ file_list = ["converted_examples.json", "original_examples.json", "readme.txt", "LICENSE"]
+ for dir_name in sorted(os.listdir(source_dir)):
+ if os.path.isfile(os.path.join(source_dir, dir_name)): continue
+ if not os.path.exists(os.path.join(target_dir, dir_name)): os.makedirs(os.path.join(target_dir, dir_name))
+ for filename in file_list:
+ source_path = os.path.join(source_dir, dir_name, filename)
+ target_path = os.path.join(target_dir, dir_name, filename)
+ if not os.path.exists(source_path): continue
+ shutil.copy(source_path, target_path)
+
+
+ def dict_to_str(self, ek_ori):
+ """
+ turn non-flat external knowledge into string
+ original format:
+ "metadata":{
+ domain: [
+ {
+ attr1: value1,
+ attr2: value2,
+ ...
+ },
+ ...
+ ]
+ }
+ output format:
+ ( metadata : ( domain : ( attr1 : value1 | attr2 : value2 | ... ) | ( ... ) | ... ))
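+ e.g. (illustrative) {"restaurant": [{"name": "curry prince", "area": "east"}]}
+ roughly becomes ( restaurant : (( name : curry prince | area : east )))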
+ """
+ ek = str(ek_ori).replace("'"," ").replace(", "," | ")
+ ek = ek.replace("{","(").replace("}",")").replace("[","(").replace("]",")")
+ ek = ek.replace("  ", " ") # collapse the double spaces left after removing quotes
+ return ek
+
+
+ def dst_dict_to_str(self, dst_dict):
+ """
+ use a dict to store the updated dst state, and convert it into a string for generation
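+ e.g. (illustrative) {"restaurant": {"food": "indian", "area": "east"}}
+ -> "restaurant food indian, restaurant area east"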
+ """
+ slot_list = []
+ for domain in dst_dict:
+ for slot, value in dst_dict[domain].items():
+ slot_list.append(f"{domain} {slot} {value.strip()}")
+ return ", ".join(slot_list)
+
+
+ def update_with_slot_list(self, dst_dict, slot_list):
+ """
+ assuming a dict is used to store the updated dst state, update it with slot_list
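+ each entry of slot_list looks like "<domain> <slot_type> <value ...>", e.g. "restaurant area east"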
+ """
+ for slot in slot_list:
+ if len(slot.split()) < 2:
+ pdb.set_trace()
+ domain, slot_type, slot_value = slot.split()[0], slot.split()[1], " ".join(slot.split()[2:])
+ if domain not in dst_dict:
+ dst_dict[domain] = {}
+ dst_dict[domain][slot_type] = slot_value
+ return dst_dict
+
+
+ def examine(self):
+ for data_name in sorted(os.listdir(self.save_dir)):
+ if data_name in ["AirDialogue"]: continue
+ print(f"Loading {data_name} ...")
+ if os.path.isfile(os.path.join(self.save_dir, data_name)): continue
+ for filename in os.listdir(os.path.join(self.save_dir, data_name, "train")):
+ if not filename.startswith("dialog"): continue
+ idx = 1
+ data = self._load_json(os.path.join(self.save_dir, data_name, "train", filename))
+ for dial_id, dial in data.items():
+ if not dial_id.endswith(str(idx)) and idx != 1000:
+ print(data_name, filename, dial_id, idx)
+ pdb.set_trace()
+ idx += 1
+ # for idx, turn in enumerate(dial[LOG]):
+ # if idx + 1 != turn[TURN_ID]:
+ # print(data_name, dial_id)
+
+
+def main():
+ preprocess = PreProcessData()
+ preprocess.run_all()
+ preprocess.copy_example()
+ # preprocess.examine()
+
+if __name__ == '__main__':
+ main()
diff --git a/code/utils/constant.py b/code/utils/constant.py
new file mode 100644
index 0000000000000000000000000000000000000000..ebf4b19171d92daca9ca90cbb11df9594beaaa90
--- /dev/null
+++ b/code/utils/constant.py
@@ -0,0 +1,55 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+#!/usr/bin/env python3
+#
+
+# key used for direct usage
+SPEAKER1 = "user"
+SPEAKER2 = "system"
+ORI_DIAL_ID = "original dialog id"
+DIAL_IDX = "dialog index"
+ORI_DIAL_INFO = "original dialog info"
+TURN_ID = "turn id"
+USR_UTT = f"{SPEAKER1} utterance"
+SYS_UTT = f"{SPEAKER2} response"
+DIAL_HIST = "dialog history"
+ORI_USR_ANN = f"original {SPEAKER1} side information"
+ORI_SYS_ANN = f"original {SPEAKER2} side information"
+LOG = "log"
+
+# # # output for different task
+# domain prediction
+DOM = "domain"
+# intent prediction, including dialog act prediction if intent missing
+INTENT = "intent"
+INTENT_SPLIT = " , "
+# dst
+DST = "dst"
+DST_ACC = "dst accumulated"
+DST_SPLIT = " , "
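+# e.g. a dst output string looks like "restaurant area east , restaurant food indian"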
+
+# # # used for external knowledge
+EK = "external knowledge"
+EK_DST = "dst knowledge"
+EK_INTENT = "intent knowledge"
+# non-flat external knowledge dictionary
+EK_ORI = "external knowledge non-flat"
+TOD_EK = "metadata"
+TOD_LENGTH = 10
+# DOM_EK = "domains"
+INTENT_EK = "intents"
+DST_EK = "slots and values"
+DST_LENGTH = 10
+
+# # # prompt for each dialog
+PROMPT = "prompt"
+PROMPT_DST = "prompt for dst task"
+PROMPT_INTENT = "prompt for intent prediction"
+
+MULTIWOZ_DOMAINS = ["taxi", "police", "hospital", "hotel","attraction","train","restaurant"]
+
diff --git a/code/utils/constant_tod.py b/code/utils/constant_tod.py
new file mode 100644
index 0000000000000000000000000000000000000000..6230f29c2323de4f52dcec15e2ca637139b460ac
--- /dev/null
+++ b/code/utils/constant_tod.py
@@ -0,0 +1,56 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+
+#!/usr/bin/env python3
+#
+
+# key used for direct usage
+SPEAKER1 = "user"
+SPEAKER2 = "system"
+ORI_DIAL_ID = "original dialog id"
+DIAL_IDX = "dialog index"
+ORI_DIAL_INFO = "original dialog info"
+TURN_ID = "turn id"
+USR_UTT = f"{SPEAKER1} utterance"
+SYS_UTT = f"{SPEAKER2} response"
+DIAL_HIST = "dialog history"
+ORI_USR_ANN = f"original {SPEAKER1} side information"
+ORI_SYS_ANN = f"original {SPEAKER2} side information"
+LOG = "log"
+
+# # # output for different task
+# domain prediction
+DOM = "domain"
+# intent prediction, including dialog act prediction if intent missing
+INTENT = "intent"
+INTENT_SPLIT = " , "
+# dst
+DST = "dst"
+DST_ACC = "dst accumulated"
+DST_SPLIT = " , "
+
+# # # used for external knowledge
+EK = "external knowledge"
+EK_DST = "dst knowledge"
+EK_INTENT = "intent knowledge"
+# non-flat external knowledge dictionary
+EK_ORI = "external knowledge non-flat"
+TOD_EK = "metadata"
+TOD_LENGTH = 10
+# DOM_EK = "domains"
+INTENT_EK = "intents"
+DST_EK = "slots and values"
+DST_LENGTH = 10
+
+# # # prompt for each dialog
+PROMPT = "prompt"
+PROMPT_DST = "prompt for dst task"
+PROMPT_INTENT = "prompt for intent prediction"
+
+MULTIWOZ_DOMAINS = ["taxi", "police", "hospital", "hotel","attraction","train","restaurant"]
+
diff --git a/code/utils/domain_mapping.py b/code/utils/domain_mapping.py
new file mode 100644
index 0000000000000000000000000000000000000000..b7e084cdbb29cc7633e52fa6bbdccc85cc2f99d8
--- /dev/null
+++ b/code/utils/domain_mapping.py
@@ -0,0 +1,238 @@
+"""
+ Copyright (c) 2023, salesforce.com, inc.
+ All rights reserved.
+ SPDX-License-Identifier: Apache License 2.0
+ For full license text, see the LICENSE file in the repo root or https://www.apache.org/licenses/LICENSE-2.0
+"""
+
+
+#!/usr/bin/env python3
+import sys, os
+import random
+
+templates = [
+ "This is a bot helping users to _____. Given the dialog context and external database, please generate a relevant system response for the user.",
+ "This bot assists users to _____. Based on the dialogue context and information from the external database, please generate an appropriate response for the user.",
+ "This bot helps users to _____. Provide a suitable response to the user, keeping in mind the conversation history and accessible external data.",
+ "The purpose of this bot is to assist users to _____. Considering the dialogue context and the information available in the external knowledge, please provide a fitting response for the user.",
+ "This bot is designed to help users _____. By utilizing the current dialog context and external resources, generate an appropriate response for the user."
+]
+
+mapping = {
+ "ABCD":{
+ "product_defect": "solve issues about refunds and returns",
+ "storewide_query": "find answers to FAQ questions about pricing, timing, membership or features",
+ "shipping_issue": "check out update a shipment of an item",
+ "subscription_inquiry": "updates premium subscription",
+ "account_access": "manage account access information",
+ "troubleshoot_site": "solve website-related issues", # website slow, search not working, credit card, cart not updating
+ "single_item_query": "find answers to FAQ questions about clothes",
+ "order_issue": "get status of an order or change an order",
+ "purchase_dispute": "dispute a purchase",
+ "manage_account": "manage account profile",
+ },
+ "AirDialogue":{
+ "flight": "book a flight ticket",
+ },
+ "BiTOD":{
+ "restaurants": "find and book a restaurant",
+ "attractions": "find a tourist attraction",
+ "HKMTR": "find a metro line",
+ "weathers": "search weather information",
+ "hotels": "find and book a hotel",
+ },
+ "CaSiNo":{
+ "negotiate": "take the role of campsite neighbors and negotiate for food, water, and firewood" # not help to finish task
+ },
+ "CraigslistBargains":{
+ "bargain": "bargain for goods" # can be devided into different items e.g. housing, bike, electronics
+ },
+ "DSTC2-Clean":{
+ "restaurant": "find a restaurant"
+ },
+ "FRAMES":{
+ "trip": "book a trip"
+ },
+ "KVRET":{
+ "schedule":"manage a calendar",
+ "weather":"find weather information",
+ "navigate":"get navigation",
+ },
+ "WOZ2_0":{
+ "restaurant":"find a restaurant",
+ },
+ "SGD":{
+ "alarm": "manage alarms",
+ "banks": "manage bank accounts",
+ "buses": "book a bus journey",
+ "events": "book an event ticket",
+ "flights": "book a flight ticket",
+ "homes": "find an apartment or schedule an apartment viewing", # Homes_2 : Service for finding properties to buy and rent"
+ "hotels": "book a hotel",
+ "media": "rent a movie to watch",
+ "music": "find a song",
+ "rentalcars": "rent a car",
+ "restaurants": "book a restaurant",
+ "ridesharing": "book a ride",
+ "services": "reserve a therapist, dentists, doctor or hair stylist", # Services_1: hair stylist; _2:dentists; Services_4: therapist
+ "travel": "find a tourist attraction",
+ "weather": "get weather information",
+ "messaging": "connect and share locations", # Messaging_1:Connect and share locations with your contact
+ "movies": "book a movie ticket",
+ "payment": "manage a payment", # payment_1 The fast, simple way to pay in apps, on the web, and in millions of stores
+ "trains": "book a train journey",
+ "calendar": "manage a calendar",
+ },
+ "MetaLWOZ":{
+ "update_calendar": "schedule meetings on a calendar",
+ "order_pizza": "order a pizza",
+ "movie_listings": "get movie information",
+ "event_reserve": "make reservations for events",
+ "weather_check": "get weather information",
+ "update_contact": "update cell phone contacts",
+ "make_restaurant_reservations": "reserve a restaurant",
+ "edit_playlist": "manage music playlists",
+ "look_up_info": "fetch information from the internet",
+ "shopping": "order products from website",
+ "store_details": "get information about stores and businesses",
+ "sports_info": "get sports information",
+ "quote_of_the_day_bot": "get a quote of the day",
+ "how_to_basic": "get instructions for basic tasks",
+ "prompt_generator": "get creative prompts",
+ "library_request": "get library information",
+ "bank_bot": "manage bank accounts",
+ "restaurant_picker": "find a restaurant",
+ "phone_plan_bot": "get mobile phone service",
+ "name_suggester": "get names for things",
+ "city_info": "get facts about different cities",
+ "music_suggester": "get music suggestions",
+ "agreement_bot": "get agreements",
+ "pet_advice": "get pet advice",
+ "apartment_finder": "find an apartment",
+ "guiness_check": "get world records",
+ "geography": "get to know where countries are",
+ "alarm_set": "manage alarms",
+ "contact_manager": "manage the user's contacts",
+ "phone_settings": "manage the user's phone's settings",
+ "appointment_reminder": "confirm their appointments",
+ "home_bot": "manage the user's home",
+ "policy_bot": "get information about a company's policies",
+ "decider_bot": "make decisions for the user",
+ "catalogue_bot": "search a catalogue",
+ "ski_bot": "book skiing trips",
+ "bus_schedule_bot": "manage public transit schedules",
+ "insurance": "get insurance information",
+ "what_is_it": "remember what a thing is.",
+ "auto_sort": "sort things",
+ "scam_lookup": "get about various scams",
+ "time_zone": "get information about time zones",
+ "play_times": "schedule shows during a theatre festival",
+ "game_rules": "know the rules for games",
+ "wedding_planner": "plan weddings",
+ "check_status": "check the status of things",
+ "present_ideas": "get advice on gift giving",
+ "booking_flight": "book a flight ticket",
+ "hotel_reserve": "book rooms in a hotel",
+ "vacation_ideas": "plan for vacations and trips",
+ "tourism": "get tourism related advices"
+},
+ "STAR":{
+ "apartment": "find an apartment or schedule an apartment viewing",
+ "bank": "manage bank accounts", # Check the balance / Report suspicious behavior
+ "doctor": "make an appointment with a doctor", # appointment Make an appointment with a doctor / followup doctor appointment Check instructions given by doctor upon last visit
+ "hotel": "book a hotel", # find / book a hotel or call for room service
+ "meeting": "schedule a meeting",
+ "party": "plan a party", # plan Plan a party at a given venue, party rsvp RSVP to a party of a given host at a given venue
+ "plane": "book a flight ticket", # search Find a flight between two cities / plane reserve Book a flight, given its id
+ "restaurant": "reserve a restaurant", # search Find a restaurant / restaurant reserve Reserve a table at a restaurant
+ "ride": "book a ride", # book ride Call a Taxi/Uber/Lyft ride to any destination / ride change Change details of a Taxi/Uber/Lyft ride that had been called earlier / ride status Check the status of a ride you called earlier
+ "spaceship": "solve issues with a spaceship", # life support Recover the spaceship’s life support / spaceship access codes Get a repair robot to open a door for you
+ "trip": "get navigation", # directions Get walking/driving/transit directions between two locations (B).
+ "trivia": "plan a game of trivia", # Play a game of trivia (C).
+ "weather": "get weather information", # Check the weather (forecast) in various cities
+ },
+ "Taskmaster1":{
+ "restaurant": "reserve a restaurant",
+ "movie": "book a movie ticket",
+ "pizza": "order a pizza",
+ "coffee": "order coffee drinks",
+ "auto": "make appointment for auto repair",
+ "uber": "book a uber ride",
+ },
+ "Taskmaster2":{ # domains can be split further into subdomains based on instruction_id e.g. sports --> nba, nfl, epl
+ "flights": "book a flight ticket",
+ "food-ordering": "make a take-out order",
+ "hotels":"find a hotel",
+ "movies":"find a movie to watch",
+ "music": "find music tracks",
+ "restaurant-search": "find a restaurant",
+ "sports": "get sports information",
+ },
+ "Taskmaster3":{
+ "movie": "book a movie ticket",
+ },
+ "SimJointMovie":{
+ "movie": "book a movie ticket",
+ },
+ "SimJointRestaurant":{
+ "restaurant": "reserve a restaurant",
+ },
+ "SimJointGEN":{
+ "movie": "book a movie ticket",
+ },
+ "MulDoGO":{
+ "airline": "book a flight ticket", # e domain dialogues focus on booking airline flights, selecting or changing seat assignments, and requesting boarding passes;
+ "fastfood": "order fast food", # domain is the least similar to the others, as the intents primarily involve ordering food and the slots quantify their order.
+ "finance": "manage bank accounts", # domain simulates dialogues a customer may have with a bank. These include opening a bank account, checking their balance, and reporting a lost credit card;
+ "insurance": "get insurance information", # domain simulates users calling about their insurance policy or requesting the fulfillment of a policy on their car or phone
+ "media": "order a media service", # domain simulates dialogues a customer may have ordering a service or paying bills related to telecommunications.
+ "software": "get software service information", # domain involves customers inquiring about software services: products, outages, promotions, and bills
+ },
+ "MS-DC":{
+ "restaurant": "reserve a restaurant",
+ "taxi": "book a taxi",
+ "movie": "book a movie ticket",
+ },
+ "SalesBot":{
+ "GetTimesForMovie": "get information for a movie",
+ "LookupSong": "find a song",
+ "FindMovies": "find a movie to watch",
+ "LookupMusic": "find and play a song",
+ "PlaySong": "play a song",
+ "FindAttractions": "find a tourist attraction",
+ },
+ "MuDoCo":{
+ "calling": "make a call", # user initiates or manipulates a voice or video call
+ "messaging": "send or read messages", # user sends or reads messages, asks for information about their message queue
+ "music": "find a song", # user searches for music by a certain artist or in a certain genre, asks the system to play songs, etc.
+ "news": "get news information", # user asks for information about current events related to a variety of topics
+ "reminders": "modify a reminder", # user sets, modifies, queries or deletes reminders for a certain date or time
+ "weather": "get weather information", # user asks about the current or future weather conditions in various locations
+ },
+ "MULTIWOZ2_2":{
+ "restaurant": "find a restaurant",
+ "hotel": "find a hotel",
+ "attraction": "find an attraction",
+ "train": "book a train ticket",
+ "taxi": "find a taxi",
+ "hospital": "find a hospital",
+ "police": "find a police station",
+ "bus": "find a bus",
+ },
+}
+
+def generate_prompt(data_name, domains, num_sample=1):
+ """
+ Fill in all 5 prompt templates and return them as a list
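+ e.g. (illustrative) generate_prompt("KVRET", ["weather"]) fills every template
+ with "find weather information" and returns the five resulting prompts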
+ """
+ # sampled_template = random.choice(templates)
+ descriptions = [mapping[data_name][domain] for domain in domains]
+ if len(descriptions) == 1:
+ description_str = descriptions[0]
+ elif len(descriptions) == 2:
+ description_str = " and ".join(descriptions)
+ elif len(descriptions) == 0:
+ description_str = "complete specified tasks"
+ else:
+ description_str = "complete multiple tasks, e.g. " + ", ".join(descriptions[:-1]) + ", and " + descriptions[-1]
+ return [sampled_template.replace("_____", description_str) for sampled_template in templates]
\ No newline at end of file
diff --git a/conversational-recommendation-dialogues/DuRecDial-2.0/LICENSE.txt b/conversational-recommendation-dialogues/DuRecDial-2.0/LICENSE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64
--- /dev/null
+++ b/conversational-recommendation-dialogues/DuRecDial-2.0/LICENSE.txt
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/conversational-recommendation-dialogues/DuRecDial-2.0/README.md b/conversational-recommendation-dialogues/DuRecDial-2.0/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3f3e5d2f47b4fb4ed36847998ea0475a002c7b2
--- /dev/null
+++ b/conversational-recommendation-dialogues/DuRecDial-2.0/README.md
@@ -0,0 +1,75 @@
+# DuRecDial
+
+We provide a **bilingual parallel** human-to-human recommendation dialogue dataset, **DuRecDial 2.0**, to enable researchers to explore the challenging task of multilingual and cross-lingual conversational recommendation. The difference between DuRecDial 2.0 and existing conversational recommendation datasets is that each data item (Profile, Goal, Knowledge, Context, Response) in DuRecDial 2.0 is annotated in two languages, English and Chinese, whereas other datasets are built for a single language. We collect **8.2k** dialogues aligned across English and Chinese (16.5k dialogues and 255k utterances in total), annotated by crowdsourced workers with a strict quality control procedure. DuRecDial 2.0 provides a challenging testbed for future studies of monolingual, multilingual, and cross-lingual conversational recommendation. For a detailed introduction to DuRecDial 2.0, please refer to [DuRecDial](https://github.com/liuzeming01/Research/tree/master/NLP/ACL2020-DuRecDial) and the papers on [IEEE Xplore](https://ieeexplore.ieee.org/document/9699426), in the [ACL Anthology](https://aclanthology.org/2020.acl-main.98/), and on [arXiv](https://arxiv.org/abs/2005.03954).
+
+Our paper is available in the [ACL Anthology](https://aclanthology.org/2021.emnlp-main.356/) and on [arXiv](https://arxiv.org/abs/2109.08877). If the corpus is helpful to your research, please kindly cite our paper:
+
+```bib
+@inproceedings{liu-etal-2021-durecdial,
+ title = "{D}u{R}ec{D}ial 2.0: A Bilingual Parallel Corpus for Conversational Recommendation",
+ author = "Liu, Zeming and
+ Wang, Haifeng and
+ Niu, Zheng-Yu and
+ Wu, Hua and
+ Che, Wanxiang",
+ booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
+ month = nov,
+ year = "2021",
+ address = "Online and Punta Cana, Dominican Republic",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2021.emnlp-main.356",
+ doi = "10.18653/v1/2021.emnlp-main.356",
+ pages = "4335--4347",
+}
+```
+
+## Data
+
+**Note: If the first goal is "Greetings/寒暄", the seeker starts the conversation; otherwise, the user starts the conversation.**
+
+An example conversation from DuRecDial 2.0:
+
+
+
+DuRecDial 2.0 is an extension of the original [DuRecDial](https://baidu-nlp.bj.bcebos.com/DuRecDial.zip). Specifically, we extend DuRecDial to English using crowdsourced workers with a strict quality control procedure.
+
+If DuRecDial is helpful to your research, please kindly cite our papers:
+
+```bib
+@inproceedings{liu-etal-2020-towards-conversational,
+ title = "Towards Conversational Recommendation over Multi-Type Dialogs",
+ author = "Liu, Zeming and
+ Wang, Haifeng and
+ Niu, Zheng-Yu and
+ Wu, Hua and
+ Che, Wanxiang and
+ Liu, Ting",
+ booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+ month = jul,
+ year = "2020",
+ address = "Online",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2020.acl-main.98",
+ doi = "10.18653/v1/2020.acl-main.98",
+ pages = "1036--1049",
+}
+```
+```bib
+@ARTICLE{9699426,
+ author={Liu, Zeming and Zhou, Ding and Liu, Hao and Wang, Haifeng and Niu, Zheng-Yu and Wu, Hua and Che, Wanxiang and Liu, Ting and Xiong, Hui},
+ journal={IEEE Transactions on Knowledge and Data Engineering},
+ title={Graph-Grounded Goal Planning for Conversational Recommendation},
+ year={2023},
+ volume={35},
+ number={5},
+ pages={4923-4939},
+ doi={10.1109/TKDE.2022.3147210}
+ }
+```
+
+
+## License
+
+Apache License 2.0 and CC BY-NC-SA 4.0.
+
+Since DuRecDial 2.0 is licensed under CC BY-NC-SA 4.0, the dataset may not be used for commercial purposes.
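The `converted_examples.json` file added next stores each dialogue in a unified format: a top-level object keyed by dialogue ID, with the original annotations under `original dialog info` and per-turn records under `log`. A minimal sketch of iterating over it (the relative path is an assumption):

```python
import json

# Minimal sketch of reading the unified-format examples; the relative path is
# an assumption (the file ships in the same folder as this README).
with open("converted_examples.json", encoding="utf-8") as f:
    dialogs = json.load(f)

for dialog_id, dialog in dialogs.items():
    # Goal sequence, e.g. "[1] Q&A(Cecilia Cheung)-->[2] Chat about stars(...)-->..."
    print(dialog_id, "|", dialog["original dialog info"]["goal"])
    for turn in dialog["log"]:
        # Each log entry pairs one user utterance with the system response.
        print(f'  [{turn["turn id"]}] user:   {turn["user utterance"]}')
        print(f'           system: {turn["system response"]}')
```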
diff --git a/conversational-recommendation-dialogues/DuRecDial-2.0/converted_examples.json b/conversational-recommendation-dialogues/DuRecDial-2.0/converted_examples.json
new file mode 100644
index 0000000000000000000000000000000000000000..2c9616e27d9b0afd40791b5f2240c67764e7bcae
--- /dev/null
+++ b/conversational-recommendation-dialogues/DuRecDial-2.0/converted_examples.json
@@ -0,0 +1,998 @@
+{
+ "DuRecDial-2.0--train--1": {
+ "original dialog id": "",
+ "dialog index": 1,
+ "original dialog info": {
+ "goal": "[1] Q&A(Cecilia Cheung)-->[2] Chat about stars(Cecilia Cheung)-->[3] Movie recommendation(Failan)-->[4] Movie recommendation(The Stool Pigeon)-->[5]Say goodbye",
+ "user_profile": {
+ "Age Range": "Under 18 years old",
+ "Name": "Fangyang Liu",
+ "Residence": "Qingdao",
+ "Accepted food": "Jiaozi\u00a0Stuffed\u00a0with\u00a0Mackerel\u00a0",
+ "Accepted movies": [
+ "Left Right Love Destiny",
+ "Hot Summer Days",
+ "Fly Me to Polaris",
+ "Help!!!",
+ "One Night in Mongkok",
+ "The Bullet Vanishes"
+ ],
+ "Accepted Music": [
+ "Once"
+ ],
+ "Rejected music": [
+ "It's Time"
+ ],
+ "Gender": "Female",
+ "Accepted celebrities": [
+ "Cecilia Cheung",
+ "Kris Wu",
+ "Nicholas Tse"
+ ],
+ "Accepted movie": [
+ "The Legend of Speed"
+ ],
+ "Reject": [
+ "News"
+ ],
+ "Rejected movies": [
+ "Everyday is Valentine",
+ "Unforgettable",
+ "King of Comedy"
+ ],
+ "Occupation": "Student",
+ "Accepted music": "Time Boils The Rain",
+ "Accepted POI": [
+ "Minguo Seafood Dumpling House"
+ ]
+ },
+ "conversation": [
+ "[1] Who is the leading actor of the movie Left Right Love Destiny?",
+ "It's Cecilia Cheung",
+ "You even know that. Excellent!",
+ "[2] Thank you for your praise. She's a Chinese idol. Of course I know her.",
+ "Yes my idol is quite unusual.",
+ "She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent.",
+ "Yes. People like her will shine wherever they go.",
+ "[3] Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature.",
+ "Although I really like her, I prefer to see Nicholas Tse's movies.",
+ "[4] Then you can watch Nicholas Tse's movie The Stool Pigeon. His acting is good.",
+ "What kind of movie is it?",
+ "It's an action drama, with a little bit of thriller.",
+ "Woah, sounds interesting.",
+ "I'm sure you'll like it.",
+ "[5] Then I'm leaving too see the movie.",
+ "Okay, see you next time."
+ ],
+ "goal_topic_list": [
+ "Left Right Love Destiny",
+ "Left Right Love Destiny",
+ "Left Right Love Destiny",
+ "Cecilia Cheung",
+ "Cecilia Cheung",
+ "Cecilia Cheung",
+ "Cecilia Cheung",
+ "Failan",
+ "Failan",
+ "The Stool Pigeon",
+ "The Stool Pigeon",
+ "The Stool Pigeon",
+ "The Stool Pigeon",
+ "The Stool Pigeon",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "goal_type_list": [
+ "Q&A",
+ "Q&A",
+ "Q&A",
+ "Chat about stars",
+ "Chat about stars",
+ "Chat about stars",
+ "Chat about stars",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Movie recommendation",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "situation": "Time: 12:00, at school",
+ "knowledge": [
+ [
+ "Cecilia Cheung",
+ "Stars",
+ "Left Right Love Destiny"
+ ],
+ [
+ "Cecilia Cheung",
+ "Stars",
+ "Left Right Love Destiny"
+ ],
+ [],
+ [
+ "Cecilia Cheung",
+ "Intro",
+ "Chinese dreamgirl"
+ ],
+ [],
+ [
+ "Cecilia Cheung",
+ "Achievement",
+ "Outstanding Asian artists of New York China Film Festival"
+ ],
+ [],
+ [
+ "Failan",
+ "Comments",
+ "Failan presents a love tragedy isolated in different time and space, but the film does not stop at telling a sad and beautiful love. Through the shell of the sad story, the audience gets the director's deeper thinking and understanding of the human nature of life.Focusing on the spiritual core of saving and being saved, the narrative unfolds, the theme is expounded, and the blindness to reality is unveiled. The audience is touched and awakened. The director fully played the magic of the movie, disrupting the narrative time and space, intersecting the past and the present, and advancing freely following the protagonists\u2019 emotions, making the story structure more tense and the theme and emotions more heartfelt. Although it describes the underworld, the director deliberately avoided the violence rendering and used extremely life-like methods to describe the underworld story routines such as gang struggle and group fight."
+ ],
+ [],
+ [
+ "Nicholas Tse",
+ "Stars",
+ "The Stool Pigeon"
+ ],
+ [
+ "The Stool Pigeon",
+ "Type",
+ "Action plot Thriller"
+ ],
+ [
+ "The Stool Pigeon",
+ "Type",
+ "Action plot Thriller"
+ ],
+ [],
+ [],
+ [],
+ []
+ ]
+ },
+ "log": [
+ {
+ "turn id": 1,
+ "user utterance": "Who is the leading actor of the movie Left Right Love Destiny?",
+ "system response": "It's Cecilia Cheung",
+ "dialog history": "",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 2,
+ "user utterance": "You even know that. Excellent!",
+ "system response": "Thank you for your praise. She's a Chinese idol. Of course I know her.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 3,
+ "user utterance": "Yes my idol is quite unusual.",
+ "system response": "She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 4,
+ "user utterance": "Yes. People like her will shine wherever they go.",
+ "system response": "Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her. Yes my idol is quite unusual. She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 5,
+ "user utterance": "Although I really like her, I prefer to see Nicholas Tse's movies.",
+ "system response": "Then you can watch Nicholas Tse's movie The Stool Pigeon. His acting is good.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her. Yes my idol is quite unusual. She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent. Yes. People like her will shine wherever they go. Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 6,
+ "user utterance": "What kind of movie is it?",
+ "system response": "It's an action drama, with a little bit of thriller.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her. Yes my idol is quite unusual. She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent. Yes. People like her will shine wherever they go. Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature. Although I really like her, I prefer to see Nicholas Tse's movies. Then you can watch Nicholas Tse's movie The Stool Pigeon. His acting is good.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 7,
+ "user utterance": "Woah, sounds interesting.",
+ "system response": "I'm sure you'll like it.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her. Yes my idol is quite unusual. She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent. Yes. People like her will shine wherever they go. Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature. Although I really like her, I prefer to see Nicholas Tse's movies. Then you can watch Nicholas Tse's movie The Stool Pigeon. His acting is good. What kind of movie is it? It's an action drama, with a little bit of thriller.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 8,
+ "user utterance": "Then I'm leaving too see the movie.",
+ "system response": "Okay, see you next time.",
+ "dialog history": " Who is the leading actor of the movie Left Right Love Destiny? It's Cecilia Cheung You even know that. Excellent! Thank you for your praise. She's a Chinese idol. Of course I know her. Yes my idol is quite unusual. She is also an Asian Outstanding Artist in New York Chinese Film Festival. Excellent. Yes. People like her will shine wherever they go. Then don't miss her movie Failan. It shows a love tragedy in which the two people are secluded in different time spaces. However, the movie doesn't just tell a sad but beautiful love story. Through the story, the audiences can see the director's profound thinking and understanding of life and human nature. Although I really like her, I prefer to see Nicholas Tse's movies. Then you can watch Nicholas Tse's movie The Stool Pigeon. His acting is good. What kind of movie is it? It's an action drama, with a little bit of thriller. Woah, sounds interesting. I'm sure you'll like it.",
+ "original user side information": {},
+ "original system side information": {}
+ }
+ ]
+ },
+ "DuRecDial-2.0--train--2": {
+ "original dialog id": "",
+ "dialog index": 2,
+ "original dialog info": {
+ "goal": "[1]Ask about weather-->[2] Food recommendation(Marinated Fish)-->[3] POI recommendation(Mr.Fish Roasted)-->[4]Say goodbye",
+ "user_profile": {
+ "Age Range": "Under 18 years old",
+ "Name": "Mingzheng Li",
+ "Residence": "Hengshui",
+ "Accepted food": "Marinated Fish",
+ "Accepted Music": [
+ "You Will Always Be My Love",
+ "After Leaving",
+ "Lingering Memory of Past Love",
+ "Still Think You're the Best",
+ "I Waited Until the Flower Withered",
+ "The Crescent",
+ "Can't Fight The Feeling"
+ ],
+ "Rejected music": [
+ "Coffee",
+ "be torn with grief"
+ ],
+ "Gender": "Male",
+ "Favorite news": [
+ "Jacky Cheung's news"
+ ],
+ "Accepted celebrities": [
+ "Jacky Cheung"
+ ],
+ "Reject": [
+ "Movie"
+ ],
+ "Accepted POI": "Mr.Fish Roasted",
+ "Accepted music": [
+ "Wolf Legend"
+ ],
+ "Occupation": "Student"
+ },
+ "conversation": [
+ "[1] Baby, can you tell me the weather today?",
+ "OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm.",
+ "Wow, you're great. You know everything. Great!",
+ "[2] There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish.",
+ "Okay, I like it.",
+ "I know a restaurant where the Marinated Fish is made of grouper.",
+ "Woah, there's grouper! I really like it! It's a tonic! I really want to have it!",
+ "[3] If you like it so much, I know a restaurant which cooks groupers very well. Mr.Fish Roasted. The Marinated Fish here is very authentic!",
+ "Then tell me about the average meal cost per person, the address and the ratings.",
+ "It costs 0 yuan per person. The address is the intersection of Liushi Road, Huzhuang commercial and residential building, Yingbin North Street, Jizhou District, Jizhou. 5 points, very high.",
+ "Okay. I'll be there tomorrow noon with my partner, taking her to have a taste. Hehe!",
+ "Okay, I'll make a reservation for you. I'll say bon appetite in advance!",
+ "[4] Thanks. Then I'm going to bed first. See you tomorrow. Bye.",
+ "All right. Bye~"
+ ],
+ "goal_topic_list": [
+ "Ask about weather",
+ "Ask about weather",
+ "Ask about weather",
+ "Marinated Fish",
+ "Marinated Fish",
+ "Marinated Fish",
+ "Marinated Fish",
+ "Mr.Fish Roasted",
+ "Mr.Fish Roasted",
+ "Mr.Fish Roasted",
+ "Mr.Fish Roasted",
+ "Mr.Fish Roasted",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "goal_type_list": [
+ "Ask about weather",
+ "Ask about weather",
+ "Ask about weather",
+ "Food recommendation",
+ "Food recommendation",
+ "Food recommendation",
+ "Food recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "situation": "Time: 20 18 - 12 - 12 22:00, at home Wednesday",
+ "knowledge": [
+ [
+ "Hengshui",
+ "2018-12-12",
+ "Sunny with southwest winds. The high will be 4 \u2103, and low - 7 \u2103"
+ ],
+ [
+ "Hengshui",
+ "2018-12-12",
+ "Sunny with southwest winds. The high will be 4 \u2103, and low - 7 \u2103"
+ ],
+ [],
+ [
+ "Sunny with southwest winds. The high will be 4 \u2103, and low - 7 \u2103",
+ "Perfect for having",
+ "Marinated Fish"
+ ],
+ [],
+ [
+ "Marinated Fish",
+ "Ingredients",
+ "grouper\u3001Olive oil\u3001lemon juice\u3001Garlic\u3001pepper powder\u3001salt"
+ ],
+ [],
+ [
+ "Mr.Fish Roasted",
+ "Specials",
+ "Marinated Fish"
+ ],
+ [
+ "Mr.Fish Roasted",
+ "Rating",
+ "5"
+ ],
+ [
+ "Mr.Fish Roasted",
+ "Rating",
+ "5"
+ ],
+ [],
+ [],
+ [],
+ []
+ ]
+ },
+ "log": [
+ {
+ "turn id": 1,
+ "user utterance": "Baby, can you tell me the weather today?",
+ "system response": "OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm.",
+ "dialog history": "",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 2,
+ "user utterance": "Wow, you're great. You know everything. Great!",
+ "system response": "There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish.",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 3,
+ "user utterance": "Okay, I like it.",
+ "system response": "I know a restaurant where the Marinated Fish is made of grouper.",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm. Wow, you're great. You know everything. Great! There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 4,
+ "user utterance": "Woah, there's grouper! I really like it! It's a tonic! I really want to have it!",
+ "system response": "If you like it so much, I know a restaurant which cooks groupers very well. Mr.Fish Roasted. The Marinated Fish here is very authentic!",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm. Wow, you're great. You know everything. Great! There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish. Okay, I like it. I know a restaurant where the Marinated Fish is made of grouper.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 5,
+ "user utterance": "Then tell me about the average meal cost per person, the address and the ratings.",
+ "system response": "It costs 0 yuan per person. The address is the intersection of Liushi Road, Huzhuang commercial and residential building, Yingbin North Street, Jizhou District, Jizhou. 5 points, very high.",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm. Wow, you're great. You know everything. Great! There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish. Okay, I like it. I know a restaurant where the Marinated Fish is made of grouper. Woah, there's grouper! I really like it! It's a tonic! I really want to have it! If you like it so much, I know a restaurant which cooks groupers very well. Mr.Fish Roasted. The Marinated Fish here is very authentic!",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 6,
+ "user utterance": "Okay. I'll be there tomorrow noon with my partner, taking her to have a taste. Hehe!",
+ "system response": "Okay, I'll make a reservation for you. I'll say bon appetite in advance!",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm. Wow, you're great. You know everything. Great! There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish. Okay, I like it. I know a restaurant where the Marinated Fish is made of grouper. Woah, there's grouper! I really like it! It's a tonic! I really want to have it! If you like it so much, I know a restaurant which cooks groupers very well. Mr.Fish Roasted. The Marinated Fish here is very authentic! Then tell me about the average meal cost per person, the address and the ratings. It costs 0 yuan per person. The address is the intersection of Liushi Road, Huzhuang commercial and residential building, Yingbin North Street, Jizhou District, Jizhou. 5 points, very high.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 7,
+ "user utterance": "Thanks. Then I'm going to bed first. See you tomorrow. Bye.",
+ "system response": "All right. Bye~",
+ "dialog history": " Baby, can you tell me the weather today? OK, today in Hengshui it is sunny, with southwest winds. The high will be 4 \u2103 and low - 7 \u2103. Be sure to keep warm. Wow, you're great. You know everything. Great! There's nothing I don't know. I also know that today's weather is suitable to eat Marinated Fish. Okay, I like it. I know a restaurant where the Marinated Fish is made of grouper. Woah, there's grouper! I really like it! It's a tonic! I really want to have it! If you like it so much, I know a restaurant which cooks groupers very well. Mr.Fish Roasted. The Marinated Fish here is very authentic! Then tell me about the average meal cost per person, the address and the ratings. It costs 0 yuan per person. The address is the intersection of Liushi Road, Huzhuang commercial and residential building, Yingbin North Street, Jizhou District, Jizhou. 5 points, very high. Okay. I'll be there tomorrow noon with my partner, taking her to have a taste. Hehe! Okay, I'll make a reservation for you. I'll say bon appetite in advance!",
+ "original user side information": {},
+ "original system side information": {}
+ }
+ ]
+ },
+ "DuRecDial-2.0--train--3": {
+ "original dialog id": "",
+ "dialog index": 3,
+ "original dialog info": {
+ "goal": "[1] Music on demand(Hand in Hand)-->[2] Music recommendation(Change Me)-->[3] Play music(All the Things You Never Knew)-->[4]Say goodbye",
+ "user_profile": {
+ "Residence": "Yangzhou",
+ "Age Range": "Over 50 years old",
+ "Name": "Li Xu",
+ "Accepted food": "Marinated Fish",
+ "Favorite news": [
+ "Bingbing Fan's news",
+ "Leehom Wang's news"
+ ],
+ "Accepted movies": [
+ "Tiles of Deception, Lurid Affections: The Making of 'Lust, Caution'"
+ ],
+ "Accepted Music": [
+ "A Simple Song",
+ "Heroes of Earth"
+ ],
+ "Rejected music": [
+ "KISS GOODBYE"
+ ],
+ "Gender": "Male",
+ "Accepted celebrities": [
+ "Bingbing Fan",
+ "Leehom Wang"
+ ],
+ "Accepted movie": "Flash Point",
+ "Reject": [
+ "Poi"
+ ],
+ "Rejected movies": [
+ "Lust, Caution"
+ ],
+ "Accepted music": [
+ "Hand in Hand"
+ ],
+ "Occupation": "Employed"
+ },
+ "conversation": [
+ "[1] Let's listen to Hand in Hand.",
+ "It's playing for you. Please enjoy it!",
+ "The songs of Leehom Wang are still so good.",
+ "[2] Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart.",
+ "I\u2018ve heard it, and I'm not interested in this song.",
+ "Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive.",
+ "I haven't listened to it before. How is it?",
+ "It's catchy. You can't miss it.",
+ "Sounds good! I wanna listen to it.",
+ "[3] May I play this song for you?",
+ "Sure.",
+ "It's playing for you. Please enjoy it!",
+ "[4] It's a good song. Okay, I need to go now. See you next time.",
+ "All right. Bye!"
+ ],
+ "goal_topic_list": [
+ "Hand in Hand",
+ "Hand in Hand",
+ "Hand in Hand",
+ "Change Me",
+ "Change Me",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "All the Things You Never Knew",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "goal_type_list": [
+ "Music on demand",
+ "Music on demand",
+ "Music on demand",
+ "Music recommendation",
+ "Music recommendation",
+ "Music recommendation",
+ "Music recommendation",
+ "Music recommendation",
+ "Music recommendation",
+ "Play music",
+ "Play music",
+ "Play music",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "situation": "Time: 18:00,on the way home",
+ "knowledge": [
+ [],
+ [],
+ [
+ "Leehom Wang",
+ "Sings",
+ "Hand in Hand"
+ ],
+ [
+ "Leehom Wang",
+ "Sings",
+ "Change Me"
+ ],
+ [],
+ [
+ "Leehom Wang",
+ "Sings",
+ "All the Things You Never Knew"
+ ],
+ [],
+ [],
+ [],
+ [],
+ [],
+ [],
+ [],
+ []
+ ]
+ },
+ "log": [
+ {
+ "turn id": 1,
+ "user utterance": "Let's listen to Hand in Hand.",
+ "system response": "It's playing for you. Please enjoy it!",
+ "dialog history": "",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 2,
+ "user utterance": "The songs of Leehom Wang are still so good.",
+ "system response": "Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart.",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it!",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 3,
+ "user utterance": "I\u2018ve heard it, and I'm not interested in this song.",
+ "system response": "Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive.",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it! The songs of Leehom Wang are still so good. Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 4,
+ "user utterance": "I haven't listened to it before. How is it?",
+ "system response": "It's catchy. You can't miss it.",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it! The songs of Leehom Wang are still so good. Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart. I\u2018ve heard it, and I'm not interested in this song. Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 5,
+ "user utterance": "Sounds good! I wanna listen to it.",
+ "system response": "May I play this song for you?",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it! The songs of Leehom Wang are still so good. Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart. I\u2018ve heard it, and I'm not interested in this song. Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive. I haven't listened to it before. How is it? It's catchy. You can't miss it.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 6,
+ "user utterance": "Sure.",
+ "system response": "It's playing for you. Please enjoy it!",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it! The songs of Leehom Wang are still so good. Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart. I\u2018ve heard it, and I'm not interested in this song. Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive. I haven't listened to it before. How is it? It's catchy. You can't miss it. Sounds good! I wanna listen to it. May I play this song for you?",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 7,
+ "user utterance": "It's a good song. Okay, I need to go now. See you next time.",
+ "system response": "All right. Bye!",
+ "dialog history": " Let's listen to Hand in Hand. It's playing for you. Please enjoy it! The songs of Leehom Wang are still so good. Then I recommend that you listen to Change Me, which is also sung by Leehom Wang. It's a song with a rock&roll style. Unlike Leehom Wang's previous works which are exciting and passionate, it starts from the bottom of the heart and finds the emotion in the ordinary with a calm heart. I\u2018ve heard it, and I'm not interested in this song. Then you can listen to his All the Things You Never Knew, which is very catchy. Both the melody and the lyrics along the movie are very impressive. I haven't listened to it before. How is it? It's catchy. You can't miss it. Sounds good! I wanna listen to it. May I play this song for you? Sure. It's playing for you. Please enjoy it!",
+ "original user side information": {},
+ "original system side information": {}
+ }
+ ]
+ },
+ "DuRecDial-2.0--train--4": {
+ "original dialog id": "",
+ "dialog index": 4,
+ "original dialog info": {
+ "goal": "[1]Ask about weather-->[2] Food recommendation(Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0)-->[3] POI recommendation(Laoqi Sichuan Restaurant)-->[4]Say goodbye",
+ "user_profile": {
+ "Age Range": "26-35",
+ "Name": "Jiaoyang Jin",
+ "Residence": "Xining",
+ "Accepted food": "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Accepted movies": [
+ "Flash Point",
+ "Sophie's Revenge",
+ "I Am Not Madame Bovary",
+ "Call for Love",
+ "Hand Phone"
+ ],
+ "Gender": "Female",
+ "Accepted celebrities": [
+ "Bingbing Fan"
+ ],
+ "Accepted movie": [
+ "Battle of Wits"
+ ],
+ "Reject": [
+ "music"
+ ],
+ "Rejected movies": [
+ "Ever Since We Love"
+ ],
+ "Occupation": "Student",
+ "Favorite news": [
+ "Bingbing Fan's news"
+ ],
+ "Accepted POI": "Laoqi Sichuan Restaurant"
+ },
+ "conversation": [
+ "[1] Good afternoon. What's the weather like today?",
+ "Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm.",
+ "Okay, I'll be careful. Thank you.",
+ "[2] Hehe. This weather is suitable to eat Saut\u00e9ed Spicy Pork.",
+ "I was thinking about what to eat and then you recommend. You can read my mind.",
+ "Haha. Then don't miss Saut\u00e9ed Spicy Pork.",
+ "I'll have it for lunch.",
+ "[3] I know a place which cooks Saut\u00e9ed Spicy Pork very well, and that is Laoqi Sichuan Restaurant.",
+ "How much does this restaurant cost per person?",
+ "32 yuan.",
+ "It's affordable. What's the address?",
+ "Next to Yinhe Fish, Golden Home, No.914 Qilian Road, Chengbei District. It's easy to find it.",
+ "Okay. I've checked it and it's not very far. What about the ratings?",
+ "4.7 points, very high.",
+ "The score is high. Okay, I'll choose this one. Three of us, at around 1 pm. Would you please make a reservation?",
+ "Okay, I'll do it right now.",
+ "[4] Thank you. I'll go first, bye.",
+ "Okay. Goodbye."
+ ],
+ "goal_topic_list": [
+ "Ask about weather",
+ "Ask about weather",
+ "Ask about weather",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Laoqi Sichuan Restaurant",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "goal_type_list": [
+ "Ask about weather",
+ "Ask about weather",
+ "Ask about weather",
+ "Food recommendation",
+ "Food recommendation",
+ "Food recommendation",
+ "Food recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "POI recommendation",
+ "Say goodbye",
+ "Say goodbye"
+ ],
+ "situation": "Time: 20 18 - 11 - 22 12:00, at school Thursday",
+ "knowledge": [
+ [
+ "Xining",
+ "2018-11-22",
+ "Sunny with northwest winds. The high will be 6 \u2103, and low - 12 \u2103"
+ ],
+ [
+ "Xining",
+ "2018-11-22",
+ "Sunny with northwest winds. The high will be 6 \u2103, and low - 12 \u2103"
+ ],
+ [],
+ [
+ "Sunny with northwest winds. The high will be 6 \u2103, and low - 12 \u2103",
+ "Perfect for having",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0"
+ ],
+ [],
+ [
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0",
+ "Type",
+ "Hot Dishes"
+ ],
+ [],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Specials",
+ "Saut\u00e9ed\u00a0Spicy\u00a0Pork\u00a0"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Price per person",
+ "32"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Price per person",
+ "32"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Address",
+ "Next to Yinhe Yuzhuang, Jinse Jiayuan, No.914 Qilian Road, Chengbei District"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Address",
+ "Next to Yinhe Yuzhuang, Jinse Jiayuan, No.914 Qilian Road, Chengbei District"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Rating",
+ "4.7"
+ ],
+ [
+ "Laoqi Sichuan Restaurant",
+ "Rating",
+ "4.7"
+ ],
+ [],
+ [],
+ [],
+ []
+ ]
+ },
+ "log": [
+ {
+ "turn id": 1,
+ "user utterance": "Good afternoon. What's the weather like today?",
+ "system response": "Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm.",
+ "dialog history": "",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 2,
+ "user utterance": "Okay, I'll be careful. Thank you.",
+ "system response": "Hehe. This weather is suitable to eat Saut\u00e9ed Spicy Pork.",
+ "dialog history": " Good afternoon. What's the weather like today? Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 3,
+ "user utterance": "I was thinking about what to eat and then you recommend. You can read my mind.",
+ "system response": "Haha. Then don't miss Saut\u00e9ed Spicy Pork.",
+ "dialog history": " Good afternoon. What's the weather like today? Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm. Okay, I'll be careful. Thank you. Hehe. This weather is suitable to eat Saut\u00e9ed Spicy Pork.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 4,
+ "user utterance": "I'll have it for lunch.",
+ "system response": "I know a place which cooks Saut\u00e9ed Spicy Pork very well, and that is Laoqi Sichuan Restaurant.",
+ "dialog history": " Good afternoon. What's the weather like today? Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm. Okay, I'll be careful. Thank you. Hehe. This weather is suitable to eat Saut\u00e9ed Spicy Pork. I was thinking about what to eat and then you recommend. You can read my mind. Haha. Then don't miss Saut\u00e9ed Spicy Pork.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 5,
+ "user utterance": "How much does this restaurant cost per person?",
+ "system response": "32 yuan.",
+ "dialog history": " Good afternoon. What's the weather like today? Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm. Okay, I'll be careful. Thank you. Hehe. This weather is suitable to eat Saut\u00e9ed Spicy Pork. I was thinking about what to eat and then you recommend. You can read my mind. Haha. Then don't miss Saut\u00e9ed Spicy Pork. I'll have it for lunch. I know a place which cooks Saut\u00e9ed Spicy Pork very well, and that is Laoqi Sichuan Restaurant.",
+ "original user side information": {},
+ "original system side information": {}
+ },
+ {
+ "turn id": 6,
+ "user utterance": "It's affordable. What's the address?",
+ "system response": "Next to Yinhe Fish, Golden Home, No.914 Qilian Road, Chengbei District. It's easy to find it.",
+ "dialog history": " Good afternoon. What's the weather like today? Today in Xining it is sunny, with northwest winds. The high will be 6 \u2103 and low - 12 \u2103. The temperature is low. Keep warm.