Fedir Zadniprovskyi committed
Commit ddc67e6 · unverified · 1 Parent(s): 35eafc3

Update README.md

Files changed (1): README.md (+4 −1)
README.md CHANGED
@@ -5,6 +5,9 @@ Features:
 - Easily deployable using Docker.
 - **Configurable through environment variables (see [config.py](./src/faster_whisper_server/config.py))**.
 - OpenAI API compatible.
+- Streaming support (transcription is sent via SSE as the audio is transcribed, so you don't need to wait for the whole file to be transcribed before receiving results).
+- Live transcription support (audio is sent over a WebSocket as it's generated).
+- Dynamic model loading/offloading: specify the model you want in the request, and it will be loaded automatically, then unloaded after a period of inactivity.
 
 Please create an issue if you find a bug, have a question, or a feature suggestion.
 
@@ -13,7 +16,7 @@ See [OpenAI API reference](https://platform.openai.com/docs/api-reference/audio)
 - Audio file transcription via the `POST /v1/audio/transcriptions` endpoint.
 - Unlike OpenAI's API, `faster-whisper-server` also supports streaming transcriptions (and translations). This is useful when you are processing large audio files and would rather receive the transcription in chunks as it is processed than wait for the whole file to be transcribed. It works much like the way chat messages are streamed when chatting with LLMs.
 - Audio file translation via the `POST /v1/audio/translations` endpoint.
-- (WIP) Live audio transcription via the `WS /v1/audio/transcriptions` endpoint.
+- Live audio transcription via the `WS /v1/audio/transcriptions` endpoint.
 - The LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for live transcription.
 - Only transcription of single-channel, 16000 sample rate, raw, 16-bit little-endian audio is supported.
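
The streaming feature added in this commit delivers transcription chunks as Server-Sent Events. As a rough sketch of what a client does with such a response, here is a minimal, generic parser for the `data:` payloads of an SSE stream (this is an illustration of the SSE framing, not code from the project; the project's actual event payload format is not shown here):

```python
def parse_sse(raw: str):
    """Yield the data payload of each event in a raw SSE stream.

    Events are separated by blank lines; each event may carry one or
    more "data:" lines, which are joined with newlines per the SSE spec.
    """
    for block in raw.split("\n\n"):
        data_lines = [
            line[len("data:"):].lstrip()
            for line in block.split("\n")
            if line.startswith("data:")
        ]
        if data_lines:
            yield "\n".join(data_lines)

# Each "data:" payload arrives as its own chunk, so a client can show
# partial transcriptions without waiting for the full file.
chunks = list(parse_sse("data: hello\n\ndata: world\n\n"))
```

A real client would feed this from the chunked HTTP response body rather than a prebuilt string.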
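
The live endpoint, per the README, accepts only single-channel, 16000 sample rate, raw, 16-bit little-endian audio. A small sketch of producing bytes in that wire format from float samples (the function name and the clamp-to-[-1, 1] choice are illustrative, not part of the project):

```python
import struct

SAMPLE_RATE = 16_000  # Hz, as required by the live transcription endpoint

def floats_to_pcm16le(samples):
    """Convert float samples in [-1.0, 1.0] to raw 16-bit little-endian PCM."""
    clamped = (max(-1.0, min(1.0, s)) for s in samples)
    # "<h" = little-endian signed 16-bit integer, one per mono sample.
    return b"".join(struct.pack("<h", int(s * 32767)) for s in clamped)

# One second of silence, ready to send over the websocket: 16000 samples
# at 2 bytes each.
silence = floats_to_pcm16le([0.0] * SAMPLE_RATE)
```

Audio captured at another sample rate or channel count would need resampling and downmixing before conversion.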