This repository contains the Aspik101/distil-whisper-large-v3-pl model converted to the CTranslate2 format.
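
The conversion itself can be reproduced with CTranslate2's Transformers converter. The sketch below is an assumption rather than the exact recipe used for this repository: the quantization level, output directory name, and copied tokenizer files are placeholders.

```python
# Minimal sketch: convert Aspik101/distil-whisper-large-v3-pl to the CTranslate2 format.
# Quantization and copied files are assumptions, not the documented settings of this repo.
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter(
    "Aspik101/distil-whisper-large-v3-pl",
    copy_files=["tokenizer.json", "preprocessor_config.json"],  # files faster-whisper expects alongside the model
)
converter.convert(
    "distil-whisper-large-v3-pl-ct2",  # output directory (placeholder name)
    quantization="float16",            # assumed; int8_float16 or int8 are also common choices
)
```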

This model was prepared and tested primarily for use with the faster-whisper add-on in Home Assistant.
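
Outside Home Assistant, the converted model can also be loaded directly with the faster-whisper Python package, which downloads CTranslate2 model repositories from the Hugging Face Hub by ID. A minimal sketch; the audio file name, device, and compute type are placeholders:

```python
# Minimal sketch: transcribe Polish audio with faster-whisper.
# "audio.wav", device and compute_type are placeholders, not recommended settings.
from faster_whisper import WhisperModel

model = WhisperModel(
    "WitoldG/distil-whisper-large-v3-pl-ct2",  # fetched from the Hugging Face Hub
    device="cuda",
    compute_type="float16",
)

segments, info = model.transcribe("audio.wav", language="pl")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```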

If you have an NVIDIA Jetson device, you can use the Docker Compose file below to run this model.

```yaml
x-shared-properties: &shared-properties
  runtime: nvidia                 # Use NVIDIA runtime
  init: false                     # Do not use init process
  restart: unless-stopped         # Restart policy
  network_mode: host              # Use host network mode, to auto-detect devices in network
  devices:
    - /dev/snd:/dev/snd           # to share audio devices
    - /dev/bus/usb                # to share usb devices

name: whisper-jetson
version: "3.9"
services:
  whisper:
    image: dustynv/wyoming-whisper:latest-r36.2.0
    <<: *shared-properties
    container_name: whisper
    hostname: whisper
    ports:
      - "10300:10300/tcp"
    volumes:
      - ./whisper/models/:/share/whisper
      - ./whisper/data/:/data
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    environment:
      WHISPER_LANGUAGE: "pl"
      WHISPER_MODEL: "WitoldG/distil-whisper-large-v3-pl-ct2"
```
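
With this file saved as `docker-compose.yml`, `docker compose up -d` starts the container; the Wyoming endpoint it exposes on TCP port 10300 can then be added to Home Assistant via the Wyoming Protocol integration.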

This model can also be used directly in the Home Assistant add-on by setting the Custom model option to `WitoldG/distil-whisper-large-v3-pl-ct2`.
