---
task_categories:
- text-to-image
language:
- en
pretty_name: Midjourney-Threads
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: train
    path: 
    - "threads_0.csv"
    - "threads_20000.csv"
    - "threads_40000.csv"
    - "threads_60000.csv"
    - "threads_80000.csv"
    - "threads_100000.csv"
    - "threads_120000.csv"
    - "threads_140000.csv"
    - "threads_160000.csv"
---

# Dataset Card for Midjourney-Threads 🧵💬

<!-- Provide a quick summary of the dataset. -->

This dataset contains user prompts from the Midjourney Discord channel, organized into "threads of interaction".
Each thread contains a user's successive attempts to create one target image.

The dataset was introduced as part of the paper: [Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney][ourpaper].

[ourpaper]: https://aclanthology.org/2023.emnlp-main.253/ "markdown our paper"


### Dataset Sources

<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/shachardon/Mid-Journey-to-alignment
- **Paper:** https://aclanthology.org/2023.emnlp-main.253/


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Main Columns:
- 'text' - the original prompt
- 'args' - predefined parameters (such as the aspect ratio, chaos and [more][myexample])
- 'channel_id' - the discord channel
- 'userid' - an anonymous user id
- 'timestamp' - a timestamp of the prompt creation
- 'label' - True if an image generated from that prompt was upscaled, False otherwise.
- 'id' - unique id of the prompt
- 'url_png' - link to the generated images (a 4-grid version)
- 'main_content' - prefix of the prompt, without trailing magic-words
- 'concreteness' - concreteness score, based on [this paper][concpaper]
- 'word_len' - the number of words
- 'repeat_words' - the occurrences of each word that appears more than once in the prompt, excluding stop words.
- 'reapeat_words_ratio' - repeat_words / word_len
- 'perplexity' - the perplexity GPT-2 assigns to each prompt.
- 'caption_0-3' - captions generated by the BLIP-2 model, with the 4 generated images as its inputs.
- 'phase' - train/test split, as was used to train image/text classifiers
- 'magic_ratio' - the percentage of words that were recognized as magic words in the prompt
- 'thread_id' - the id of the thread
- 'depth' - the max depth of a constituency parse tree of the prompt.
- 'num_sent_parser' - the number of sentences in the prompt.
- 'num_sent_parser_ratio' - num_sent_parser / word_len
- 'words_per_sent' - word_len / num_sent_parser


[myexample]: https://docs.midjourney.com/docs/parameter-list "markdown more"
[concpaper]: https://link.springer.com/article/10.3758/s13428-013-0403-5 "markdown this paper"
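Several of the columns above are simple surface statistics of the prompt text. As an illustration only (the tokenization and stop-word list used for the released dataset may differ), `word_len`, `repeat_words`, and their ratio could be recomputed roughly like this:

```python
from collections import Counter

# A small illustrative stop-word list; the dataset's actual list is not specified here.
STOP_WORDS = {"a", "an", "the", "of", "in", "on", "and", "with"}

def surface_features(prompt: str) -> dict:
    """Approximate the word_len / repeat_words / ratio columns for one prompt."""
    words = [w.strip(".,") for w in prompt.lower().split()]
    word_len = len(words)
    counts = Counter(w for w in words if w not in STOP_WORDS)
    # total occurrences of non-stop words that appear more than once in the prompt
    repeat_words = sum(c for c in counts.values() if c > 1)
    return {
        "word_len": word_len,
        "repeat_words": repeat_words,
        "repeat_words_ratio": repeat_words / word_len if word_len else 0.0,
    }

features = surface_features("a cat, a very fluffy cat, photorealistic cat")
```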

## Dataset Creation

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
We construct the dataset by scraping user-generated prompts from the Midjourney Discord server.
The server contains channels in which a user can type a prompt and arguments; the Midjourney bot then replies with 4 generated images, combined into a grid. If the user is satisfied with one of the 4 images, they can send an 'upscale' command to the bot to get an upscaled version of the desired image.
We randomly choose one of the 'newbies' channels, where both new and experienced users experiment with general-domain prompts. We collect 693,528 prompts (from 23 January to 1 March 2023), together with their matching images and metadata such as timestamps and user ids (which we anonymize).

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
We split the prompts into threads automatically; see the paper for details.
In addition, we extract features (perplexity, sentence length, and more).
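Once the rows are loaded, threads can be reconstructed by grouping on `thread_id` and ordering by `timestamp`. A minimal sketch with synthetic rows (field names follow the column list above; the values are invented for illustration):

```python
from itertools import groupby

# Synthetic rows mimicking the dataset schema (values are made up).
rows = [
    {"thread_id": 1, "timestamp": "2023-01-25T10:00:00", "text": "a red fox", "label": False},
    {"thread_id": 1, "timestamp": "2023-01-25T10:02:00", "text": "a red fox, detailed fur", "label": True},
    {"thread_id": 2, "timestamp": "2023-01-26T09:00:00", "text": "city at night", "label": False},
]

def build_threads(rows):
    """Group prompts into threads, each ordered by creation time."""
    ordered = sorted(rows, key=lambda r: (r["thread_id"], r["timestamp"]))
    return {
        tid: [r["text"] for r in group]
        for tid, group in groupby(ordered, key=lambda r: r["thread_id"])
    }

threads = build_threads(rows)
```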


#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

We fully anonymize the data by removing user names and other user-specific meta-data. If you recognize your prompts here and want to remove them, please send us an [email](mailto:[email protected]).

The Midjourney Discord is an open community that allows others to use images and prompts whenever they are posted in a public setting.
Paying users own all assets they create; we therefore do not include the image files in our dataset, only links to them.


### Recommendations, Risks, and Limitations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

We split the prompts into threads automatically, so some threads contain mistakes. For more about our annotation method, please see the paper.
A manual sample of the prompts did not surface any offensive content.


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```
@inproceedings{don-yehiya-etal-2023-human,
    title = "Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney",
    author = "Don-Yehiya, Shachar  and
      Choshen, Leshem  and
      Abend, Omri",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.253",
    pages = "4146--4161",
    abstract = "Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of the user prompts along such iterations. We compile a dataset of iterative interactions of human users with Midjourney. Our analysis then reveals that prompts predictably converge toward specific traits along these iterations. We further study whether this convergence is due to human users, realizing they missed important details, or due to adaptation to the model{'}s {``}preferences{''}, producing better images for a specific language style. We show initial evidence that both possibilities are at play. The possibility that users adapt to the model{'}s preference raises concerns about reusing user data for further training. The prompts may be biased towards the preferences of a specific model, rather than align with human intentions and natural manner of expression.",
}
```