CAPYLEE committed on
Commit 7106d09 · verified · 1 Parent(s): 6dbb42f

Update README.md

Files changed (1)
  1. README.md +37 -41
README.md CHANGED
@@ -1,52 +1,48 @@
- # Infant Cry Detection using Causal Temporal Representation
-
- This project focuses on detecting infant cries using a novel **causal temporal representation** framework. Our approach incorporates causal reasoning into the data-generating process (DGP) to improve the interpretability and reliability of cry detection systems. This repository provides the necessary resources to explore, train, and evaluate supervised models for this task, along with mathematical assumptions and metrics tailored for event-based evaluation.

  ## Features
- - **Data Generating Process**: Based on mathematical causal assumptions, our DGP defines how audio features and annotations are causally connected.
- - **Supervised Models**: State-of-the-art supervised learning methods, including Bidirectional LSTM, Transformer, and MobileNet V2.
- - **Event-Based Metrics**: Evaluation metrics tailored for time-sensitive detection tasks, including event-based F1-score and IOU.
- - **Interactive Example**: A Jupyter Notebook with step-by-step demonstrations.

  ![Causal Graph](https://huggingface.co/CAPYLEE/CRSTC/blob/main/casual_graph.png)

- ## Repository Structure
-
- ```plaintext
- .
- ├── data/             # Audio data in .wav format
- ├── labels/           # Annotation files corresponding to audio data (.TextGrid)
- ├── metrics/          # Event-based evaluation metrics
- ├── models/           # Pre-trained supervised models
- ├── src/              # Core codebase
- ├── experiment.ipynb  # Usage demonstration
- └── README.md         # Project description
- ```
-
- ### Directory Details
-
- - **data/**: Contains raw audio files in `.wav` format.
-   - Each audio file represents an infant cry recording.
-
- - **labels/**: Stores annotation files in `.TextGrid` format.
-   - Each `.TextGrid` file corresponds to an audio file and provides ground truth segmentations for cry events.
-
- - **metrics/**: Houses the implementation of event-based metrics for evaluating the performance of models.
-   - Metrics include event-based F1-score and IOU, designed to measure temporal accuracy effectively.
-
- - **models/**: Contains pre-trained supervised models for infant cry detection.
-   - Models include:
-     - Bidirectional LSTM
-     - Transformer
-     - MobileNet V2
-
- - **src/**: Core implementation of the infant cry detection framework.
-   - Includes modules for data preprocessing, feature extraction, model training, and evaluation.
-
- - **experiment.ipynb**: A Jupyter Notebook with a simple use case example.
-   - Demonstrates how to load data, preprocess it, train a model, and evaluate its performance.
-
- For more details, refer to our accompanying research paper.
-
- ## License
- This project is licensed under the MIT License. See the LICENSE file for more details.

+ ---
+ language: en
+ tags:
+ - audio-classification
+ - causal-representation
+ - infant-cry-detection
+ license: mit
+ datasets:
+ - custom-audio-dataset
+ metrics:
+ - event-based-f1
+ - iou
+ - accuracy
+ ---
+
+ # Infant Cry Detection Using Causal Temporal Representation
+
+ This model detects infant cries using a novel **causal temporal representation** framework. By integrating causal reasoning into the data-generating process (DGP), the model aims to enhance the interpretability and reliability of cry detection systems.

  ## Features
+ - **Causal Data Generating Process**: Incorporates mathematical causal assumptions to define the relationship between audio features and annotations.
+ - **Supervised Models**: Includes pre-trained state-of-the-art models:
+   - Bidirectional LSTM
+   - Transformer
+   - MobileNet V2
+ - **Event-Based Metrics**: Tailored for time-sensitive detection tasks (see the sketch after this list):
+   - Event-based F1-score
+   - Intersection over Union (IOU)
+ - **Interactive Example**: Jupyter Notebook with step-by-step usage demonstrations.
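+
+ The event-based scores above operate on whole cry events (time intervals) rather than individual frames. The self-contained sketch below shows one way an interval IoU and a greedy event-level F1 can be computed; the matching rule, IoU threshold, and interval format here are illustrative assumptions, and the metrics shipped with this project may differ in detail.
+
+ ```python
+ from typing import List, Tuple
+
+ Interval = Tuple[float, float]  # (start_s, end_s) of one cry event
+
+ def interval_iou(a: Interval, b: Interval) -> float:
+     """Intersection-over-union of two time intervals."""
+     inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
+     union = (a[1] - a[0]) + (b[1] - b[0]) - inter
+     return inter / union if union > 0 else 0.0
+
+ def event_f1(preds: List[Interval], refs: List[Interval], iou_thr: float = 0.5) -> float:
+     """Greedy one-to-one matching: a prediction is a true positive when it
+     overlaps a not-yet-matched reference event with IoU >= iou_thr."""
+     matched, tp = set(), 0
+     for p in preds:
+         best, best_iou = None, 0.0
+         for i, r in enumerate(refs):
+             iou = interval_iou(p, r)
+             if i not in matched and iou > best_iou:
+                 best, best_iou = i, iou
+         if best is not None and best_iou >= iou_thr:
+             matched.add(best)
+             tp += 1
+     precision = tp / len(preds) if preds else 0.0
+     recall = tp / len(refs) if refs else 0.0
+     return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+
+ print(event_f1([(0.2, 1.1), (3.0, 3.4)], [(0.0, 1.0), (2.0, 2.5)]))  # 0.5: one of two events matches
+ ```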
 
  ![Causal Graph](https://huggingface.co/CAPYLEE/CRSTC/blob/main/casual_graph.png)

+ ---
+
+ ## How to Use
+
+ You can load the model directly from Hugging Face:
+
+ ```python
+ from transformers import AutoModel
+
+ # Load model
+ model = AutoModel.from_pretrained("your-username/infant-cry-detection")
+
+ # Example usage
+ audio_features = ...  # Preprocessed audio features
+ outputs = model(audio_features)
+ ```
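+
+ The `audio_features` placeholder above is intentionally left open. As a rough illustration only, the snippet below builds log-mel features with `torchaudio` before calling the model; the file path, sampling details, and feature shape are assumptions, and the project's own preprocessing code (see `src/` and `experiment.ipynb`) is the reference.
+
+ ```python
+ import torch
+ import torchaudio
+
+ # Illustrative preprocessing: load a recording and compute log-mel features.
+ waveform, sample_rate = torchaudio.load("example_cry.wav")  # path is hypothetical
+ mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
+ audio_features = torch.log(mel + 1e-6)  # shape: (channels, n_mels, frames)
+
+ outputs = model(audio_features)  # reuses the model loaded above
+ ```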