ffeew committed
Commit 2d134c8
1 Parent(s): ebd2249
.gitignore ADDED
@@ -0,0 +1,3 @@
+ GRID_alignments
+ lip_coordinates
+ lip_images
GRID_alignments.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40abb10300984775e6465b6ab6665231c8d81f3a9af559622d6f48a48d902f11
+ size 67338240
README.md CHANGED
@@ -1,3 +1,69 @@
 ---
 license: cc-by-4.0
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
 ---
+
+ # Enhanced GRID Corpus with Lip Landmark Coordinates
+
+ ## Introduction
+
+ This enhanced version of the GRID audiovisual sentence corpus, originally available on [Zenodo](https://zenodo.org/records/3625687), adds new annotations for audiovisual speech recognition research. Building on the preprocessed data from [LipNet-PyTorch](https://github.com/VIPL-Audio-Visual-Speech-Understanding/LipNet-PyTorch), we have added lip landmark coordinates, giving detailed positional information for key points around the lips and making the dataset considerably more useful for visual speech recognition and related fields. To ease integration into existing machine learning workflows, the enriched dataset is published on the Hugging Face platform, readily available to the research community.
+
+ ## Dataset Structure
+
+ The dataset is split into three directories; a minimal loading sketch follows the list:
+
+ - `lip_images`: the images of the lips
+   - `speaker_id`: one subdirectory per speaker
+     - `video_id`: one subdirectory per video, holding its frames
+       - `frame_no.jpg`: JPEG image of the lips in a given frame
+ - `lip_coordinates`: the lip landmark coordinates
+   - `speaker_id`: one subdirectory per speaker
+     - `video_id.json`: a JSON file whose keys are frame numbers and whose values are the x, y lip landmark coordinates of that frame
+ - `GRID_alignments`: the word alignments of all videos in the dataset
+   - `speaker_id`: one subdirectory per speaker
+     - `video_id`: the alignments of a particular video
+       - `video_id.align`: one word per line, with the word's start and end time in the video
+
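+ As a minimal sketch, one video's data can be loaded with the standard library as below; the speaker and video IDs are hypothetical placeholders, and the align line layout ("start end word", as in the original GRID corpus) should be verified against the actual files.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Hypothetical IDs for illustration; substitute a real speaker and video.
+ speaker_id, video_id = "s1", "bbaf2n"
+
+ # Frame number -> x, y lip landmark coordinates for that frame.
+ coords_file = Path("lip_coordinates") / speaker_id / f"{video_id}.json"
+ coords = json.loads(coords_file.read_text())
+
+ # One word per line with its start and end time (assumed "start end word" layout).
+ align_file = Path("GRID_alignments") / speaker_id / video_id / f"{video_id}.align"
+ alignments = []
+ for line in align_file.read_text().splitlines():
+     start, end, word = line.split()
+     alignments.append((int(start), int(end), word))
+
+ # The video's lip frames (assumes zero-padded frame numbers so names sort correctly).
+ frames = sorted((Path("lip_images") / speaker_id / video_id).glob("*.jpg"))
+ ```
+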
+ ## Details
+
+ The lip landmark coordinates were extracted from the original GRID corpus videos with the dlib library, using the pretrained [shape_predictor_68_face_landmarks_GTX.dat](https://github.com/davisking/dlib-models) model. For each video, the coordinates are saved in a JSON file whose keys are frame numbers and whose values are the x, y lip landmark coordinates, stored in the same order as the frames appear in the video.
+
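+ For illustration only, the sketch below shows how such landmarks are typically obtained with dlib; this is not the exact extraction script, and the predictor path and frame decoding are assumptions. In dlib's 68-point scheme, indices 48-67 cover the mouth region.
+
+ ```python
+ import cv2
+ import dlib
+
+ # Pretrained predictor from https://github.com/davisking/dlib-models (assumed local file).
+ detector = dlib.get_frontal_face_detector()
+ predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks_GTX.dat")
+
+ def lip_landmarks(frame):
+     """Return [(x, y), ...] for mouth points 48-67 of the first detected face, or None."""
+     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+     faces = detector(gray)
+     if len(faces) == 0:
+         return None
+     shape = predictor(gray, faces[0])
+     return [(shape.part(i).x, shape.part(i).y) for i in range(48, 68)]
+ ```
+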
+ ## Usage
+
+ The dataset can be downloaded by cloning this repository.
+
+ ### Cloning the repository
+
+ The tar archives are tracked with Git LFS, so make sure `git lfs` is installed before cloning.
+
+ ```bash
+ git clone https://huggingface.co/datasets/SilentSpeak/EGCLLC
+ ```
+
+ ### Loading the dataset
+
+ After cloning the repository, unpack the tar archives, either with the provided `dataset_tar.py` script or directly from the shell, which is usually faster:
+
+ ```bash
+ tar -xvf lip_images.tar
+ tar -xvf lip_coordinates.tar
+ tar -xvf GRID_alignments.tar
+ ```
+
+ ## References
+
+ Alvarez Casado, C., & Bordallo Lopez, M. (2021). Real-time face alignment: evaluation methods, training strategies and implementation optimization. Journal of Real-Time Image Processing, Springer.
+
+ Assael, Y., Shillingford, B., Whiteson, S., & de Freitas, N. (2017). LipNet: End-to-End Sentence-level Lipreading. GPU Technology Conference.
+
+ Cooke, M., Barker, J., Cunningham, S., & Shao, X. (2006). The GRID Audio-Visual Speech Corpus (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3625687
+
alignments.txt ADDED
The diff for this file is too large to render. See raw diff
 
dataset_tar.py ADDED
@@ -0,0 +1,26 @@
+ import tarfile
+
+ ALIGNMENTS_PATH = "GRID_alignments"
+ LIP_COORDINATES_PATH = "lip_coordinates"
+ LIP_IMAGES_PATH = "lip_images"
+
+ # Create the tar archives (run once when building the dataset; kept for reference):
+ # with tarfile.open(f"{LIP_COORDINATES_PATH}.tar", "w") as tar:
+ #     tar.add(f"{LIP_COORDINATES_PATH}/")
+
+ # with tarfile.open(f"{LIP_IMAGES_PATH}.tar", "w") as tar:
+ #     tar.add(f"{LIP_IMAGES_PATH}/")
+
+ # with tarfile.open(f"{ALIGNMENTS_PATH}.tar", "w") as tar:
+ #     tar.add(f"{ALIGNMENTS_PATH}/")
+
+ # Extract the tar archives into the current directory.
+ # Note: extracting everything may take a while, roughly 2-4 hours.
+ with tarfile.open(f"{LIP_COORDINATES_PATH}.tar", "r") as tar:
+     tar.extractall()
+
+ with tarfile.open(f"{LIP_IMAGES_PATH}.tar", "r") as tar:
+     tar.extractall()
+
+ with tarfile.open(f"{ALIGNMENTS_PATH}.tar", "r") as tar:
+     tar.extractall()
lip_coordinates.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7917e94adcdb1e0baa7c9c18a9877d7b6a09f8dfb2afa870b020e16c7c36a9ba
+ size 588185600
lip_coordinates.txt ADDED
The diff for this file is too large to render. See raw diff
 
lip_images.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49111af2cac2b0754296161237c79b36daf0100bedea426bfcfacd7d9b221a58
+ size 14338662400
lip_images.txt ADDED
The diff for this file is too large to render. See raw diff