Databoost committed on
Commit e76f0cf · verified · 1 Parent(s): 7148067

Create README.md

Files changed (1): README.md (+122 −0)
README.md ADDED
@@ -0,0 +1,122 @@
# Technical Documentation for the Text-to-Video Dataset “VidData”

## 1. Introduction
This dataset contains 1006 annotated videos of everyday scenes, used for training and evaluating AI models in video generation and recognition. It is structured to meet the needs of Text-to-Video models and motion analysis.

## 2. Dataset Specifications

### 2.1. Generation Criteria
- **Maximum video duration**: 10 seconds
- **Video themes**:
  - Walking
  - Exercising
  - Writing
  - Shopping
  - Sleeping
  - Meditating
  - Working
  - Studying
  - Driving
  - Washing
  - Gardening
  - Calling
  - Listening
  - Organizing
  - Planning
  - Relaxing
  - Teaching
- **Video resolution**: 512×512 pixels

### 2.2. Dataset Organization
The dataset is organized under a main folder called VidData, which includes three essential parts:
- `data/train/`: Contains the `VidData.csv` file storing structured metadata about the videos.
- `video/`: Holds the video files (e.g., `---_iRTHryQ_13_0to241.mp4`), whose names appear to encode a source identifier and a frame range.
- `readme.md`: Provides documentation about the dataset's structure and usage.

This structure separates raw video data, metadata (CSV), and documentation, keeping the dataset easy to navigate and process.
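
As a quick orientation to this layout, the sketch below joins the metadata CSV with the `video/` folder to build full file paths and flag missing files. The dataset root path and the `video_name` column are assumptions based on the structure and sample entry documented in this README; adjust them to your local copy.

```python
import os

import pandas as pd

# Hypothetical path to the unpacked dataset root (adjust as needed).
ROOT = "VidData"

# Metadata lives under data/train/, videos under video/ (see layout above).
metadata = pd.read_csv(os.path.join(ROOT, "data", "train", "VidData.csv"))
video_paths = [
    os.path.join(ROOT, "video", name) for name in metadata["video_name"]
]

# Flag entries whose video file is missing on disk.
missing = [p for p in video_paths if not os.path.exists(p)]
print(f"{len(video_paths)} videos referenced, {len(missing)} missing")
```
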
## 3. Data Structure
The dataset is stored as a CSV file and includes the following columns:

| Column | Type | Description |
|-----------------------------|---------|----------------------------------------|
| video_name | string | Video file name |
| caption | string | Textual description of the video |
| temporal_consistency_score | float64 | Temporal consistency score |
| fps | float64 | Frames per second |
| frames | int64 | Number of frames in the video |
| duration_seconds | float64 | Video duration in seconds |
| motion_score | float64 | Motion score |
| camera_motion | string | Type of camera motion (e.g., pan_left) |
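
To sanity-check a downloaded copy against this schema, a minimal pandas sketch might look like the following (the underscored column names follow the sample entry in section 7; rename them if your CSV header differs):

```python
import pandas as pd

# Load the metadata and compare its dtypes with the table above.
dataset = pd.read_csv("VidData.csv")
print(dataset.dtypes)

# Durations should not exceed the documented 10-second maximum.
print(dataset["duration_seconds"].describe())

# Camera motion labels, e.g. pan_left.
print(dataset["camera_motion"].value_counts().head())
```
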
## 4. Libraries Used

### 4.1. Library Examples
Here are some example libraries that can be used when analyzing this data:
- **OpenCV**: Video manipulation and processing (reading, writing, frame extraction, contour detection, filtering, etc.).
- **Scikit-Image**: Calculating the Structural Similarity Index (SSIM) for image quality evaluation and various image transformations (segmentation, filtering, etc.); see the sketch after this list.
- **NumPy**: Efficient manipulation of matrices and arrays, essential for calculations on images and videos.
- **Pandas**: Managing and structuring metadata associated with videos (e.g., file names, timestamps, annotations).
- **Matplotlib/Seaborn**: Visualizing analysis results as graphs.
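
As an illustration of how OpenCV and scikit-image combine, the sketch below averages SSIM between consecutive frames as a rough smoothness proxy. This is only an example metric; it is not necessarily how the dataset's temporal consistency score was computed.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim


def mean_consecutive_ssim(video_path, max_pairs=200):
    """Average SSIM over consecutive frame pairs (higher = smoother)."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    ok, prev = cap.read()
    while ok and len(scores) < max_pairs:
        ok, curr = cap.read()
        if not ok:
            break
        # SSIM is computed on grayscale versions of the two frames.
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        scores.append(ssim(prev_gray, curr_gray))
        prev = curr
    cap.release()
    return float(np.mean(scores)) if scores else float("nan")


# Example on one of the files named in section 2.2.
print(mean_consecutive_ssim("video/---_iRTHryQ_13_0to241.mp4"))
```
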
### 4.2. Installing Dependencies
Follow the instructions below to install the required libraries:
1. Create a `requirements.txt` file and add the following:
   - `opencv-python==4.8.1.78` (video manipulation and processing)
   - `scikit-image==0.22.0` (SSIM calculation and image transformations)
   - `numpy==1.26.2` (efficient manipulation of matrices and arrays)
   - `pandas==2.1.4` (managing and structuring metadata)
   - `matplotlib==3.8.2` (visualizing analysis results)
   - `seaborn==0.12.2` (advanced visualization with enhanced graphics)
2. Run the command: `pip install -r requirements.txt`

**Note**: Only include the libraries you need in `requirements.txt`.
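
After installation, an optional sanity check confirms that the pinned versions import correctly (a minimal sketch; drop any library you left out of `requirements.txt`):

```python
# Import each installed library and print its version.
import cv2
import matplotlib
import numpy
import pandas
import seaborn
import skimage

for name, module in [("opencv-python", cv2), ("scikit-image", skimage),
                     ("numpy", numpy), ("pandas", pandas),
                     ("matplotlib", matplotlib), ("seaborn", seaborn)]:
    print(f"{name}: {module.__version__}")
```
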
## 5. Using the Dataset

### 5.1. Primary Applications
#### 5.1.1. Text-to-Video Generation
- Train models to generate video based on textual input.
- Benchmark performance by comparing generated videos against dataset entries.

#### 5.1.2. Video Description Models
- Evaluate models designed to generate textual descriptions from videos.

#### 5.1.3. Temporal Consistency Analysis
- Test models for maintaining smoothness and coherence in video generation.

### 5.2. Example Workflow
Load the dataset using Python, access the metadata of a single video, and filter videos by motion score:

```python
import pandas as pd

# Load the dataset (use 'data/train/VidData.csv' if running from the dataset root).
dataset = pd.read_csv('VidData.csv')
print(dataset.head())

# Access metadata for the first entry.
video = dataset.iloc[0]
print(f"Video Name: {video['video_name']}")
print(f"Caption: {video['caption']}")
print(f"Duration: {video['duration_seconds']} seconds")

# Filter videos by motion score.
high_motion_videos = dataset[dataset['motion_score'] > 1.0]
print(high_motion_videos)
```
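
Building on this, the hedged sketch below pairs a caption with the decoded frames of its video, e.g. as a starting point for assembling text-to-video training examples. The relative paths and column names follow the layout and sample entry documented above; adjust them to your local copy.

```python
import cv2
import pandas as pd

# Pick the entry with the highest motion score.
dataset = pd.read_csv("data/train/VidData.csv")
row = dataset.sort_values("motion_score", ascending=False).iloc[0]

# Decode all frames of that video with OpenCV.
cap = cv2.VideoCapture(f"video/{row['video_name']}")
frames = []
ok, frame = cap.read()
while ok:
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# The caption is the text half of a (text, video) training pair.
print(f"Caption: {row['caption']}")
print(f"Decoded {len(frames)} frames "
      f"(metadata reports {row['frames']} frames at {row['fps']} fps)")
```
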
## 6. File Format
The dataset is delivered in CSV format, with each row representing a video and its metadata.

## 7. Sample Entry

| video_name | caption | temporal_consistency_score | fps | frames | duration_seconds | motion_score | camera_motion |
| ---------- | ------------------------------------------------------ | -------------------------- | --- | ------ | ---------------- | ------------ | ------------- |
| E_1.mp4 | The video shows a soccer player kicking a soccer ball. | 0.948826 | 30 | 195 | 6.5 | 0.826522 | 1.105807 |

## 8. Contact
For inquiries, please contact:

- **Email**: [[email protected]](mailto:[email protected])
- **Website**: [databoost.us](https://databoost.us)