Upload README.md with huggingface_hub
README.md
CHANGED
@@ -11,6 +11,19 @@ model_name: CoNeTTE
 task_categories:
 - audio-captioning
 ---
+---
+language: en
+license: mit
+tags:
+- audio
+- captioning
+- text
+- audio-captioning
+- automated-audio-captioning
+model_name: CoNeTTE
+task_categories:
+- audio-captioning
+---
 
 <!-- <div align="center"> -->
 
@@ -29,11 +42,11 @@ task_categories:
 <img src='https://readthedocs.org/projects/aac-metrics/badge/?version=stable&style=for-the-badge' alt='Documentation Status' />
 </a> -->
 
-CoNeTTE is an audio captioning system, which generate a short textual description of the sound events in any audio file.
+CoNeTTE is an audio captioning system that generates a short textual description of the sound events in any audio file.
 
 <!-- </div> -->
 
-
+The architecture and training are explained in the corresponding [paper](https://arxiv.org/pdf/2309.00454.pdf). CoNeTTE was developed by me ([Étienne Labbé](https://labbeti.github.io/)) during my PhD.
 
 ## Installation
 ```bash
@@ -98,12 +111,13 @@ conette-predict --audio "/your/path/to/audio.wav"
 This model checkpoint has been trained for the Clotho dataset, but it can also reach a good performance on AudioCaps with the "audiocaps" task.
 
 ## Limitations
-The model
+- The model expects audio sampled at 32 kHz. It automatically resamples input audio files up or down, but this might give worse results, especially for audio with lower sampling rates.
+- The model has been trained on audio lasting from 1 to 30 seconds. It can handle longer audio files, but this might require more memory and give worse results.
 
 ## Citation
 The preprint version of the paper describing CoNeTTE is available on arxiv: https://arxiv.org/pdf/2309.00454.pdf
 
-```
+```bibtex
 @misc{labbé2023conette,
 title = {CoNeTTE: An efficient Audio Captioning system leveraging multiple datasets with Task Embedding},
 author = {Étienne Labbé and Thomas Pellegrini and Julien Pinquier},
@@ -117,7 +131,7 @@ The preprint version of the paper describing CoNeTTE is available on arxiv: http
 ```
 
 ## Additional information
-
+- CoNeTTE stands for **Co**nv**Ne**Xt-**T**ransformer with **T**ask **E**mbedding.
 - Model weights are available on HuggingFace: https://huggingface.co/Labbeti/conette
 - The encoder part of the architecture is based on a ConvNeXt model for audio classification, available here: https://huggingface.co/topel/ConvNeXt-Tiny-AT. More precisely, the encoder weights used are named "convnext_tiny_465mAP_BL_AC_70kit.pth", available on Zenodo: https://zenodo.org/record/8020843.
 
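For the "audiocaps" task mentioned in the diff above (the checkpoint is trained for Clotho but can also caption in AudioCaps style), here is a minimal Python sketch of what inference with a task selection could look like. It assumes the `conette` package exposes HuggingFace-style `CoNeTTEConfig`/`CoNeTTEModel` classes with `from_pretrained`, a `task` keyword, and a `"cands"` output key; the exact API should be checked against the package documentation.

```python
# Hedged sketch: class names, the task keyword and the output key are assumptions
# about the conette Python API, not confirmed by the diff above.
from conette import CoNeTTEConfig, CoNeTTEModel

config = CoNeTTEConfig.from_pretrained("Labbeti/conette")
model = CoNeTTEModel.from_pretrained("Labbeti/conette", config=config)

# The checkpoint is trained on Clotho; task="audiocaps" switches the task
# embedding to produce AudioCaps-style captions.
outputs = model("/your/path/to/audio.wav", task="audiocaps")
print(outputs["cands"][0])  # assumed key holding the generated caption
```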
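The 32 kHz constraint listed under Limitations can also be handled explicitly before calling the model, rather than relying on its internal resampling. A minimal sketch using torchaudio (an assumption; torchaudio is not mentioned in the diff) could look like this:

```python
# Hedged sketch: resample an audio file to the 32 kHz rate the model expects,
# instead of relying on the model's automatic resampling. Assumes torchaudio
# is installed; the file paths are placeholders.
import torchaudio
import torchaudio.functional as F

TARGET_SR = 32_000

waveform, sr = torchaudio.load("/your/path/to/audio.wav")  # (channels, samples)
if sr != TARGET_SR:
    waveform = F.resample(waveform, orig_freq=sr, new_freq=TARGET_SR)

torchaudio.save("/your/path/to/audio_32k.wav", waveform, TARGET_SR)
```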