CAPYLEE committed on
Commit 6dbb42f · verified · 1 Parent(s): dd1cb8f

Update README.md
# Infant Cry Detection using Causal Temporal Representation

This project detects infant cries using a novel **causal temporal representation** framework. Our approach incorporates causal reasoning into the data-generating process (DGP) to improve the interpretability and reliability of cry detection systems. This repository provides the resources to explore, train, and evaluate supervised models for this task, along with the mathematical assumptions and metrics tailored for event-based evaluation.

## Features

- **Data Generating Process**: Based on mathematical causal assumptions, our DGP defines how audio features and annotations are causally connected.
- **Supervised Models**: State-of-the-art supervised learning methods, including Bidirectional LSTM, Transformer, and MobileNet V2.
- **Event-Based Metrics**: Evaluation metrics tailored for time-sensitive detection tasks, including event-based F1-score and IoU.
- **Interactive Example**: A Jupyter Notebook with step-by-step demonstrations.

![Causal Graph](https://huggingface.co/CAPYLEE/CRSTC/blob/main/casual_graph.png)

## Repository Structure

```plaintext
.
├── data/             # Audio data in .wav format
├── labels/           # Annotation files corresponding to audio data (.TextGrid)
├── metrics/          # Event-based evaluation metrics
├── models/           # Pre-trained supervised models
├── src/              # Core codebase
├── experiment.ipynb  # Usage demonstration
└── README.md         # Project description
```

### Directory Details

- **data/**: Contains raw audio files in `.wav` format.
  - Each audio file represents an infant cry recording.

- **labels/**: Stores annotation files in `.TextGrid` format.
  - Each `.TextGrid` file corresponds to an audio file and provides ground-truth segmentations for cry events.

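TextGrid files are plain text, so cry intervals can be pulled out with a small parser. The sketch below is illustrative only (it is not this repository's loader); the tier excerpt and the `"cry"` label are hypothetical examples of what an annotation might contain:

```python
import re

# Hypothetical excerpt of a .TextGrid interval tier (short text format).
SAMPLE = """
intervals [1]:
    xmin = 0.00
    xmax = 1.25
    text = "silence"
intervals [2]:
    xmin = 1.25
    xmax = 3.40
    text = "cry"
"""

def parse_intervals(textgrid_text):
    """Extract (start, end, label) triples from an interval tier."""
    pattern = re.compile(r'xmin = ([\d.]+)\s*xmax = ([\d.]+)\s*text = "([^"]*)"')
    return [(float(a), float(b), label)
            for a, b, label in pattern.findall(textgrid_text)]

events = parse_intervals(SAMPLE)
# Keep only the intervals annotated as cry events.
cry_events = [(s, e) for s, e, label in events if label == "cry"]
```

In practice a dedicated TextGrid library would be more robust than a regex, but this shows the shape of the data the annotations provide.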
- **metrics/**: Houses the implementation of event-based metrics for evaluating model performance.
  - Metrics include event-based F1-score and IoU, designed to measure temporal accuracy effectively.

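As a minimal sketch of how such metrics work (not the repository's own implementation, and the 0.5 threshold is a common convention rather than this project's setting): temporal IoU is the overlap of a predicted and a ground-truth event divided by their union, and an event-based F1 counts a prediction as a true positive when its IoU with an unmatched ground-truth event clears the threshold.

```python
def interval_iou(pred, truth):
    """Temporal IoU between two (start, end) events, in seconds."""
    inter = max(0.0, min(pred[1], truth[1]) - max(pred[0], truth[0]))
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
    return inter / union if union > 0 else 0.0

def event_f1(preds, truths, iou_threshold=0.5):
    """Greedily match each prediction to one ground-truth event, then F1."""
    matched, tp = set(), 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and interval_iou(p, t) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

Greedy matching is the simplest policy; optimal assignment (e.g. Hungarian matching) is another common choice when events overlap heavily.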
- **models/**: Contains pre-trained supervised models for infant cry detection.
  - Models include:
    - Bidirectional LSTM
    - Transformer
    - MobileNet V2

- **src/**: Core implementation of the infant cry detection framework.
  - Includes modules for data preprocessing, feature extraction, model training, and evaluation.

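One preprocessing step such a pipeline typically needs is turning interval annotations into per-frame targets for supervised training. The sketch below assumes a 10 ms hop; both the hop size and the function name are illustrative choices, not this project's API:

```python
def intervals_to_frame_labels(events, duration, hop=0.01):
    """Convert (start, end) cry intervals into per-frame 0/1 labels.

    events:   list of (start, end) times in seconds
    duration: total recording length in seconds
    hop:      frame step in seconds (10 ms here is an assumption)
    """
    n_frames = int(round(duration / hop))
    labels = [0] * n_frames
    for start, end in events:
        lo = max(0, int(round(start / hop)))
        hi = min(n_frames, int(round(end / hop)))
        for i in range(lo, hi):
            labels[i] = 1
    return labels
```

The inverse mapping (merging runs of positive frames back into events) is what the event-based metrics above are then scored against.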
- **experiment.ipynb**: A Jupyter Notebook with a simple use-case example.
  - Demonstrates how to load data, preprocess it, train a model, and evaluate its performance.

For more details, refer to our accompanying research paper.

## License

This project is licensed under the MIT License. See the LICENSE file for more details.