SeoyeonPark1223 committed
Commit d1b4fc9 · verified · 1 Parent(s): da37b9e

Upload 2 files

Files changed (2)
  1. README.md +93 -0
  2. pics/data-preprocessing.png +3 -0
README.md ADDED

---
task_categories:
- object-detection
- text-classification
- feature-extraction
language:
- ko
tags:
- homecam
- video
- audio
- npy
size_categories:
- 100B<n<1T
---

## Dataset Overview

- The dataset is designed to support the development of machine learning models for detecting daily activities, violence, and fall-down events from combined audio and video sources.
- The preprocessing pipeline leverages audio feature extraction, human keypoint detection, and relative positional encoding to generate a unified representation for training and inference.
- Classes:
  - 0: Daily - Normal indoor activities
  - 1: Violence - Aggressive behaviors
  - 2: Fall Down - Sudden falls or collapses
- Data Format:
  - Stored as `.npy` files for efficient loading and processing.
  - Each `.npy` file is a tensor containing concatenated audio and video feature representations for a fixed sequence of frames; a minimal loading sketch follows this list.
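
To illustrate the format, a sample can be inspected with NumPy. The file name below is a hypothetical placeholder, and the expected shape follows from the preprocessing steps described later in this README:

```python
import numpy as np

# Hypothetical sample path; real files live under 0_daily/, 1_violence/, 2_fall_down/
sample = np.load("0_daily/sample_0001.npy")

# Expected per-file layout: (n_frames, 120, 25), per the pipeline description below
print(sample.shape, sample.dtype)
```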

## Dataset Preprocessing Pipeline
![Data Preprocessing](./pics/data-preprocessing.png)
- The preprocessing consists of a multi-step pipeline that extracts and aligns audio features and video keypoints. Each step is detailed below.

### Step 1: Audio Processing
1. WAV File Extraction:
   - Audio is extracted from the original video files in WAV format.
2. Frame Splitting:
   - The audio signal is divided into 1/30-second segments to synchronize with the video frames.
3. MFCC Feature Extraction:
   - Mel-Frequency Cepstral Coefficients (MFCC) are computed for each audio segment (see the sketch after this list).
   - Each MFCC output has a shape of 13 x m, where m is the number of MFCC frames in the segment.
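
A minimal sketch of the audio step, assuming `librosa` for MFCC extraction; the file path and sample rate are illustrative assumptions, not the authors' exact settings:

```python
import librosa

# Load the extracted WAV track (path and sample rate are assumptions)
signal, sr = librosa.load("video_audio.wav", sr=16000)

# Split into 1/30-second segments to mirror a 30 fps video stream
seg_len = sr // 30
segments = [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]

# 13 MFCC coefficients per segment -> each array has shape (13, m)
mfccs = [librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13) for seg in segments]
print(mfccs[0].shape)
```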

### Step 2: Video Processing
1. YOLO Object Detection:
   - Detects up to 3 individuals in each video frame using the YOLO model (see the sketch after this list).
   - Outputs bounding boxes for the detected individuals.
2. MediaPipe Keypoint Extraction:
   - For each detected individual, MediaPipe extracts 33 keypoints, each represented as (x, y, z, visibility), where:
     - x, y, z: spatial coordinates.
     - visibility: confidence score for the detected keypoint.
3. Keypoint Filtering:
   - Keypoints 1, 2, and 3 (eyebrow-specific) are excluded.
   - Keypoints are further filtered by a visibility threshold (0.5) to ensure reliable data.
   - The visibility property is excluded from further calculations.
4. Relative Positional Encoding:
   - For the remaining 30 keypoints, relative positions of the 10 most important keypoints are computed.
   - These relative positions are appended as additional features to improve context-aware modeling.
5. Feature Dimensionality Adjustment:
   - The output is reshaped to (n, 30*3 + 30, 3), where n is the number of frames.
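
A condensed sketch of detection plus keypoint filtering, assuming the `ultralytics` and `mediapipe` Python packages; the model weights and person cap below mirror the description above, but this is not the authors' exact implementation:

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")                        # illustrative weights
pose = mp.solutions.pose.Pose(static_image_mode=True)

EXCLUDED = {1, 2, 3}       # eyebrow-specific keypoints dropped by the pipeline
VIS_THRESHOLD = 0.5        # visibility cutoff described above

def frame_keypoints(frame):
    """Return filtered (x, y, z) keypoints for up to 3 detected people."""
    boxes = detector(frame)[0].boxes
    person_boxes = [b for b in boxes if int(b.cls) == 0][:3]  # 'person' class, max 3
    people = []
    for box in person_boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
        result = pose.process(crop)
        if result.pose_landmarks is None:
            continue
        people.append([
            (lm.x, lm.y, lm.z)                       # visibility itself is dropped
            for i, lm in enumerate(result.pose_landmarks.landmark)
            if i not in EXCLUDED and lm.visibility >= VIS_THRESHOLD
        ])
    return people
```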

### Step 3: Audio-Video Feature Concatenation
1. Expansion:
   - Video keypoints are expanded to match the audio feature dimensions, resulting in a tensor of shape (1, 1, 4).
2. Concatenation:
   - Audio (13) and video (12) features are concatenated along the feature axis (see the shape sketch after this list).
   - The final representation has a shape of (n, 120, 13+12), where n is the number of frames.
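
A shape-level sketch of the fusion step in NumPy; the expansion details are not fully specified above, so the placeholder arrays below are assumptions chosen only to reproduce the stated (n, 120, 13+12) output shape:

```python
import numpy as np

n = 90                              # example frame count
audio = np.zeros((n, 120, 13))      # per-frame audio (MFCC-derived) features
video = np.zeros((n, 120, 12))      # per-frame video (keypoint-derived) features

# Concatenate along the last (feature) axis: 13 + 12 = 25
fused = np.concatenate([audio, video], axis=-1)
print(fused.shape)                  # (90, 120, 25)
```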

### Data Storage
- The final processed data is saved as `.npy` files, organized into three folders (a minimal loading sketch follows this list):
  - `0_daily/`: Contains data representing normal daily activities.
  - `1_violence/`: Contains data representing violent scenarios.
  - `2_fall_down/`: Contains data representing falling events.
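
A minimal sketch for walking this layout and recovering labels from the folder names; the dataset root path is a hypothetical placeholder:

```python
from pathlib import Path
import numpy as np

LABELS = {"0_daily": 0, "1_violence": 1, "2_fall_down": 2}

def load_dataset(root="data"):
    """Yield (tensor, label) pairs from the three class folders."""
    for folder, label in LABELS.items():
        for path in sorted(Path(root, folder).glob("*.npy")):
            yield np.load(path), label
```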

## Dataset Description

- This dataset provides a comprehensive representation of synchronized audio and video features for real-time activity recognition tasks.
- The combination of MFCC audio features and MediaPipe keypoints helps models accurately detect and differentiate between the defined activity classes.

- Key Features:
  1. Multimodal Representation:
     - Audio and video modalities are fused into a single representation to capture both sound and motion dynamics.
  2. Efficient Format:
     - The `.npy` format ensures fast loading and processing, suitable for large-scale training.
  3. Real-World Applications:
     - Designed for safety systems, healthcare monitoring, and smart home applications.
     - Adapted in the `SilverAssistant` project: [HuggingFace SilverAssistant Model](https://huggingface.co/SilverAvocado/SilverAssistant)

- This dataset enables the development of robust multimodal models for detecting critical situations with high accuracy and efficiency.

## Data Sources
- Source 1: [Senior Abnormal Behavior Video, AI Hub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=167)
- Source 2: [Abnormal Behavior CCTV Video, AI Hub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=171)
- Source 3: [Multimodal Video, AI Hub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=58)
pics/data-preprocessing.png ADDED

Git LFS Details

  • SHA256: 230ffdc0d796c01bb003e429809833fd0485a00abb8fb86504a75205fc7d9cda
  • Pointer size: 131 Bytes
  • Size of remote file: 197 kB