Add listed support and example code for Transformers.js

#9
opened by Xenova (HF staff)
Files changed (1)
  1. README.md +18 -0
README.md CHANGED
@@ -4,6 +4,7 @@ language:
 tags:
 - audio
 - automatic-speech-recognition
+- transformers.js
 widget:
 - example_title: LibriSpeech sample 1
   src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
@@ -244,6 +245,23 @@ Coming soon ...
 
 Coming soon ...
 
+### Transformers.js
+
+```js
+import { pipeline } from '@xenova/transformers';
+
+let transcriber = await pipeline('automatic-speech-recognition', 'distil-whisper/distil-large-v2');
+
+let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
+let output = await transcriber(url);
+// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
+```
+
+See the [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline) for more information.
+
+*Note:* Due to the large model size, we recommend running this model server-side with [Node.js](https://huggingface.co/docs/transformers.js/guides/node-audio-processing) (instead of in-browser).
+
 ## Model Details
 
 Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector