---
license: mit
datasets:
- awsaf49/sonics
language:
- en
metrics:
- f1
pipeline_tag: audio-classification
tags:
- deepfake
- audio_classification
- fake_song_detection
- music
- song
---

<div align="center">
  <img src="https://i.postimg.cc/3Jx3yZ5b/real-vs-fake-sonics-w-logo.jpg" width="250">
</div>

<div align="center">
  <h1>SONICS: Synthetic Or Not - Identifying Counterfeit Songs</h1>
  <h3><span style="color:red;"><b>ICLR 2025 [Poster]</b></span></h3>
</div>

## Overview

The recent surge in AI-generated songs presents exciting possibilities and challenges. These innovations necessitate the ability to distinguish between human-composed and synthetic songs to safeguard artistic integrity and protect human musical artistry. Existing research and datasets on fake song detection focus only on singing voice deepfake detection (SVDD), where the vocals are AI-generated but the instrumental music is sourced from real songs. However, these approaches are inadequate for detecting contemporary end-to-end artificial songs where all components (vocals, music, lyrics, and style) could be AI-generated. Additionally, existing datasets lack music-lyrics diversity, long-duration songs, and open-access fake songs. To address these gaps, we introduce **SONICS**, a novel dataset for end-to-end **Synthetic Song Detection (SSD)**, comprising over **97k songs (4,751 hours)**, with over **49k synthetic songs** from popular platforms like **Suno and Udio**. Furthermore, we highlight the importance of modeling long-range temporal dependencies in songs for effective authenticity detection, an aspect entirely overlooked in existing methods. To utilize long-range patterns, we introduce **SpecTTTra**, a novel architecture that significantly improves time and memory efficiency over conventional CNN and Transformer-based models. In particular, for long audio samples, our top-performing variant **outperforms ViT by 8% F1 score while being 38% faster and using 26% less memory**. Additionally, in comparison with ConvNeXt, our model achieves **1% gain in F1 score with a 20% boost in speed and 67% reduction in memory usage**.


## Resources

- 📄 [**Paper**](https://openreview.net/forum?id=PY7KSh29Z8)
- 🎵 [**Dataset**](https://huggingface.co/datasets/awsaf49/sonics)
- 🔬 [**ArXiv**](https://arxiv.org/abs/2408.14080)
- 💻 [**GitHub**](https://github.com/awsaf49/sonics)

## Model Variants

<style>
.hf-button {
  display: inline-flex;
  align-items: center;
  gap: 6px;
  padding: 6px 12px;
  font-size: 14px;
  font-weight: bold;
  color: white;
  border-radius: 6px;
  text-decoration: none;
}
.hf-button img {
  height: 18px;
}
</style>

| Model Name | HF Link | Variant | Duration | f_clip | t_clip | F1 | Sensitivity | Specificity | Speed (A/S) | FLOPs (G) | Mem. (GB) | # Act. (M) | # Param. (M) |
|--------------------------------|---------|---------------|----------|--------|--------|-----|-------------|-------------|-------------|-----------|-----------|------------|-------------|
| `sonics-spectttra-alpha-5s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-alpha-5s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-α | 5s | 1 | 3 | 0.78 | 0.69 | 0.94 | 148 | 2.9 | 0.5 | 6 | 17 |
| `sonics-spectttra-beta-5s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-beta-5s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-β | 5s | 3 | 5 | 0.78 | 0.69 | 0.94 | 152 | 1.1 | 0.2 | 5 | 17 |
| `sonics-spectttra-gamma-5s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-gamma-5s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-γ | 5s | 5 | 7 | 0.76 | 0.66 | 0.92 | 154 | 0.7 | 0.1 | 2 | 17 |
| `sonics-spectttra-alpha-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-alpha-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-α | 120s | 1 | 3 | 0.97 | 0.96 | 0.99 | 47 | 23.7 | 3.9 | 50 | 19 |
| `sonics-spectttra-beta-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-beta-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-β | 120s | 3 | 5 | 0.92 | 0.86 | 0.99 | 80 | 14.0 | 2.3 | 29 | 17 |
| `sonics-spectttra-gamma-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-gamma-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-γ | 120s | 5 | 7 | 0.97 | 0.96 | 0.99 | 97 | 10.1 | 1.6 | 138 | 22 |

## Model Architecture

- **Base Model:** SpecTTTra (Spectro-Temporal Tokens Transformer)
- **Embedding Dimension:** 384
- **Number of Heads:** 6
- **Number of Layers:** 12
- **MLP Ratio:** 2.67

## Audio Processing

- **Sample Rate:** 16kHz
- **FFT Size:** 2048
- **Hop Length:** 512
- **Mel Bands:** 128
- **Frequency Range:** 20Hz - 8kHz
- **Normalization:** Mean-std normalization

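From these settings, the spectrogram size and an approximate spectro-temporal token count per variant can be estimated. A minimal pure-Python sketch; the token formula (F/f_clip spectral clips plus T/t_clip temporal clips, one token each) is an assumption based on the slicing scheme described in the paper, not library code:

```python
# Estimate mel-spectrogram frames and SpecTTTra token counts
# from the audio-processing settings above.

SAMPLE_RATE = 16_000  # Hz
HOP_LENGTH = 512      # samples per spectrogram frame
N_MELS = 128          # spectral bins (F)

def spec_frames(duration_s: float) -> int:
    """Approximate number of spectrogram time frames (T) for a clip."""
    return int(duration_s * SAMPLE_RATE / HOP_LENGTH)

def token_count(duration_s: float, f_clip: int, t_clip: int) -> int:
    """Assumed token count: spectral clips (F/f_clip) + temporal clips (T/t_clip)."""
    return N_MELS // f_clip + spec_frames(duration_s) // t_clip

# 120s audio -> 3750 frames; alpha (f_clip=1, t_clip=3) keeps far more
# tokens than gamma (f_clip=5, t_clip=7), matching its higher FLOPs.
print(spec_frames(120))        # 3750
print(token_count(120, 1, 3))  # 1378 (128 spectral + 1250 temporal)
print(token_count(120, 5, 7))  # 560
```

This makes the table's efficiency trade-off concrete: larger `f_clip`/`t_clip` values shrink the token sequence, which is why the γ variants report lower FLOPs and memory.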
## Usage

Install the package from GitHub:

```shell
pip install git+https://github.com/awsaf49/sonics.git
```

Then load a pretrained model:

```python
from sonics import HFAudioClassifier

model = HFAudioClassifier.from_pretrained("awsaf49/sonics-spectttra-beta-120s")
```
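Each variant expects a fixed-duration input (e.g. 120s at 16kHz for the model above), so longer songs need to be split into windows and shorter ones padded before inference. A minimal pure-Python sketch of that preprocessing; the chunking strategy is an illustrative assumption, not the library's own API:

```python
# Split a mono waveform into fixed-length 120s windows at 16 kHz,
# zero-padding the final window. Assumption: the model consumes raw
# 16 kHz mono audio of exactly 120s per clip.

SAMPLE_RATE = 16_000
CLIP_SECONDS = 120
CLIP_SAMPLES = SAMPLE_RATE * CLIP_SECONDS  # 1,920,000 samples

def chunk_waveform(samples: list) -> list:
    """Return a list of 120s windows; the last one is zero-padded."""
    chunks = []
    for start in range(0, len(samples), CLIP_SAMPLES):
        window = samples[start:start + CLIP_SAMPLES]
        if len(window) < CLIP_SAMPLES:
            window = window + [0.0] * (CLIP_SAMPLES - len(window))
        chunks.append(window)
    return chunks

# A 150s song yields two windows: one full, one zero-padded.
song = [0.1] * (150 * SAMPLE_RATE)
windows = chunk_waveform(song)
print(len(windows))  # 2
```

Each window can then be batched and passed to the model; per-window scores for one song can be aggregated (e.g. averaged) into a single real-vs-fake prediction.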