---
base_model: pyannote/embedding
datasets:
- voxceleb
extra_gated_fields:
  Company/university: text
  I plan to use this model for (task, type of audio data, etc): text
  Website: text
extra_gated_prompt: The collected information will help acquire a better knowledge
  of the pyannote.audio user base and help its maintainers apply for grants to improve
  it further. If you are an academic researcher, please cite the relevant papers in
  your own publications using the model. If you work for a company, please consider
  contributing back to pyannote.audio development (e.g. through unrestricted gifts).
  We also provide scientific consulting services around speaker diarization and machine
  listening.
inference: false
license: mit
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-recognition
- speaker-verification
- speaker-identification
- speaker-embedding
- onnx
---
This is the ONNX export of [pyannote/embedding](https://huggingface.co/pyannote/embedding).
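
Below is a minimal usage sketch with `onnxruntime`, not taken from the original card. It assumes the export is shipped as a single `model.onnx` file, that the network accepts a float32 batch of mono waveforms shaped `(batch, channel, samples)` at 16 kHz (as the original `pyannote/embedding` model does), and that it returns one embedding vector per waveform; the `repo_id` is a placeholder for this repository's actual identifier, and a token may be required because the repository is gated.

```python
# Sketch: run the ONNX export with onnxruntime.
# Assumptions (verify against this repository): the file is named "model.onnx",
# the input is a float32 mono waveform batch shaped (batch, channel, samples)
# sampled at 16 kHz, and the first output is one embedding per waveform.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# "pyannote/embedding-onnx" is a placeholder: use this repository's actual id,
# and pass token=... if the repository is gated for your account.
onnx_path = hf_hub_download(repo_id="pyannote/embedding-onnx", filename="model.onnx")

session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# 3 seconds of dummy audio at 16 kHz; replace with a real waveform.
waveform = np.random.randn(1, 1, 3 * 16000).astype(np.float32)

outputs = session.run(None, {input_name: waveform})
embedding = outputs[0]
print(embedding.shape)  # e.g. (1, embedding_dim)

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embeddings, as typically used for speaker verification."""
    a, b = a.flatten(), b.flatten()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A smaller cosine distance between two embeddings indicates that the two waveforms are more likely to come from the same speaker.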