
iSign: A Benchmark for Indian Sign Language Processing

The iSign dataset serves as a benchmark for Indian Sign Language Processing. It comprises several NLP-specific tasks (including SignVideo2Text, SignPose2Text, Text2Pose, Word Prediction, and Sign Semantics). The dataset is free for research use but not for commercial purposes.

Dataset Usage

Videos

The iSign videos and the corresponding pose files are distributed as split part files (due to Hugging Face's cap on file sizes). The video part files iSign-videos_v1.1_part_aa and iSign-videos_v1.1_part_ab can be concatenated into the complete video dataset zip file using the following command:

cat iSign-videos_v1.1_part_aa iSign-videos_v1.1_part_ab > iSign-videos_v1.1.zip

Pose

Similarly, the pose part files iSign-poses_v1.1_part_aa, iSign-poses_v1.1_part_ab, iSign-poses_v1.1_part_ac, and iSign-poses_v1.1_part_ad can be combined to get the complete pose dataset zip file using the following command:

cat iSign-poses_v1.1_part_aa iSign-poses_v1.1_part_ab iSign-poses_v1.1_part_ac iSign-poses_v1.1_part_ad > iSign-poses_v1.1.zip
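If `cat` is not available (e.g. on Windows), the part files can also be joined with a short Python sketch; `join_parts` is a hypothetical helper written for illustration, and the filenames match those listed above:

```python
import shutil

def join_parts(parts, output):
    """Concatenate binary part files, in order, into a single output file."""
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # stream each part into the output

# Example for the pose archive; the video parts work the same way
pose_parts = [
    "iSign-poses_v1.1_part_aa",
    "iSign-poses_v1.1_part_ab",
    "iSign-poses_v1.1_part_ac",
    "iSign-poses_v1.1_part_ad",
]
# join_parts(pose_parts, "iSign-poses_v1.1.zip")
```

The order of the list matters: parts must be concatenated in the same alphabetical order (`_aa`, `_ab`, ...) that `cat` would use.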

The pose files are saved using the pose-format library (https://github.com/sign-language-processing/pose), which can be installed with:

pip install pose-format

Reading .pose Files:

To load a .pose file, use the Pose class.

from pose_format import Pose

# Read the binary .pose file into a buffer and parse it
with open("file.pose", "rb") as f:
    data_buffer = f.read()
pose = Pose.read(data_buffer)

numpy_data = pose.body.data                 # pose keypoints as a numpy array
confidence_measure = pose.body.confidence   # per-keypoint confidence scores

Text

The translations for the videos are available in the following CSV files:

  • iSign_v1.1.csv: translations for the videos
  • word-presence-dataset_v1.1.csv: word presence dataset for Task 4 (Word Presence Prediction) in the paper
  • word-description-dataset_v1.1.csv: word description dataset for Task 5 (Semantic Similarity Prediction) in the paper

Each entry in the datasets is identified by a unique identifier (UID) structured as follows:

  • Format: [video_id]-[sequence_number]
  • Example: 1782bea75c7d-7
    • 1782bea75c7d: Unique video ID
    • -7: Sequence number within the video

Note: the sequence number in the UID indicates the order of the text within each video, allowing proper reconstruction of the full translation or description. For the train/dev/test split, we recommend splitting by video_id, i.e., keeping all rows with the same video_id in the same split. This ensures that all segments belonging to a single video stay together, preventing data leakage and contamination.
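A video-level split can be sketched as follows; this is a minimal illustration using hypothetical UIDs (only the first one comes from the example above), not the official split:

```python
import random

def video_id(uid: str) -> str:
    """Strip the trailing sequence number from a UID like '1782bea75c7d-7'."""
    return uid.rsplit("-", 1)[0]

# Hypothetical UIDs following the [video_id]-[sequence_number] format
uids = [
    "1782bea75c7d-1", "1782bea75c7d-2",
    "a3f09c21bb04-1", "a3f09c21bb04-2",
    "ee12d04f77aa-1",
]

# Split at the video level so all segments of a video land in one split
videos = sorted({video_id(u) for u in uids})
random.seed(0)
random.shuffle(videos)
n_train = max(1, int(0.8 * len(videos)))
train_videos = set(videos[:n_train])

train = [u for u in uids if video_id(u) in train_videos]
heldout = [u for u in uids if video_id(u) not in train_videos]
```

Because membership is decided on the video_id rather than the full UID, no video can contribute segments to more than one split.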

Citing Our Work

If you find the iSign dataset beneficial, please consider citing our work:

@inproceedings{iSign-2024,
  title = "{iSign}: A Benchmark for Indian Sign Language Processing",
  author = "Joshi, Abhinav  and
    Mohanty, Romit  and
    Kanakanti, Mounika  and
    Mangla, Andesha  and
    Choudhary, Sudeep  and
    Barbate, Monali  and
    Modi, Ashutosh",
  booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
  month = aug,    
  year = "2024",
  address = "Bangkok, Thailand",
  publisher = "Association for Computational Linguistics", 
}