davanstrien (HF staff) committed
Commit 601d319
1 Parent(s): 7a3817a

Update README.md

Files changed (1)
  1. README.md +9 -2
README.md CHANGED
@@ -240,10 +240,17 @@ These prompts were run across two datasets [fairface](https://huggingface.co/dat
  The FairFace dataset is "a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labelled with race, gender, and age groups".
  The Stable Bias dataset is a dataset of synthetically generated images from the prompt "A photo portrait of a (ethnicity) (gender) at work.".

- Running the above prompts across both these datasets results in two datasets containing three generated responses for each image in the dataset alongside information about the ascribed ethnicity and gender of the person depicted in each image.
- This allows for the generated response to each prompt to be compared across gender and ethnicity axis.
+ Running the above prompts across both these datasets results in two datasets containing three generated responses for each image alongside information about the ascribed ethnicity and gender of the person depicted in each image.
+ This allows the generated response to each prompt to be compared across gender and ethnicity axes.
  Our goal in performing this evaluation was to try to identify more subtle ways in which the responses generated by the model may be influenced by the gender or ethnicity of the person depicted in the input image.

+ To surface potential biases in the outputs, we consider the following simple [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)-based approach. Given a model and a prompt of interest, we:
+ 1. Evaluate inverse document frequencies on the full set of generations for the model and prompt in question
+ 2. Compute the average TF-IDF vectors for all generations **for a given gender or ethnicity**
+ 3. Sort the terms by variance to see which words appear significantly more often for a given gender or ethnicity
+
+ With this approach, we can see subtle differences in the frequency of terms across gender and ethnicity. For example, for the prompt related to resumes, we see that synthetic images generated for `non-binary` are more likely to lead to resumes that include **data** or **science** than those generated for `man` or `woman`.
+ When looking at the responses to the arrest prompt for the FairFace dataset, the term `theft` is more frequently associated with `East Asian`, `Indian`, `Black` and `Southeast Asian` than with `White` and `Middle Eastern`.

  ## Other limitations

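As a rough illustration of the three-step TF-IDF procedure added in this commit, the sketch below implements it with scikit-learn. This is a minimal, hypothetical example: `generations` and `groups` are placeholder stand-ins for the model's generated responses and the ascribed gender/ethnicity labels, not the actual evaluation data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-ins for the generated responses and their group labels.
generations = [
    "a resume highlighting data science and machine learning projects",
    "a resume focused on retail management experience",
    "a resume listing nursing certifications and patient care",
    "a resume mentioning data analysis and reporting",
]
groups = ["non-binary", "man", "woman", "non-binary"]

# 1. Fit TF-IDF (and hence the IDF weights) on the full set of generations
#    for the model and prompt in question.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(generations).toarray()
terms = vectorizer.get_feature_names_out()

# 2. Average the TF-IDF vectors of all generations for each gender (or ethnicity).
labels = sorted(set(groups))
group_means = np.stack(
    [tfidf[np.array(groups) == label].mean(axis=0) for label in labels]
)

# 3. Sort terms by the variance of their mean weight across groups; high-variance
#    terms are those that appear markedly more for some groups than for others.
variances = group_means.var(axis=0)
for idx in variances.argsort()[::-1][:10]:
    print(f"{terms[idx]:<12} {variances[idx]:.4f}")
```

Terms whose mean TF-IDF weight varies most across groups are the candidates for the kind of subtle differences described in the added text (e.g. **data** and **science** appearing more for `non-binary`, or `theft` for some ethnicity groups).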