Update README.md
If not already stored in your system, the SeamlessM4T model will be downloaded automatically when running the script. The output will be saved in `${OUT_DIR}`.

We suggest running the inference using a GPU to speed up the process, but the system can be run on any device (e.g., CPU) supported by SimulEval and HuggingFace.
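Because the checkpoint is fetched through HuggingFace, where it lands on disk can be controlled with the standard `HF_HOME` cache variable, so the automatic download only happens once per machine. A minimal sketch (the cache path is an example, not a default of this repository):

```shell
# HF_HOME controls where HuggingFace libraries cache downloaded checkpoints.
# The path below is an example; any writable directory works.
export HF_HOME=./hf_cache
mkdir -p "${HF_HOME}"
```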
## 💬 Inference using docker
To run SimulSeamless using docker, follow the steps below:
1. Download the docker file by cloning this repository
2. Load the docker image:
```bash
...
```
To set `${TGT_LANG}`, `${FRAME}`, `${LAYER}`, `${BLEU_TOKENIZER}`, `${LATENCY_UNIT}`, `${LIST_OF_AUDIO}`, `${TGT_FILE}`, `${SEG_SIZE}`, and `${OUT_DIR}`, refer to [🤖 Inference using your environment](#🤖-inference-using-your-environment).
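These placeholders can be supplied as environment variables before invoking `simuleval`. A minimal sketch with assumed example values (the language code, segment size, tokenizer, and output path below are illustrations, not defaults from this repository):

```shell
# Example values only; the meaning of each variable is described in
# "Inference using your environment". All concrete values are assumptions.
export TGT_LANG=deu            # target language code
export SEG_SIZE=500            # speech segment size in ms
export LATENCY_UNIT=word       # unit used for the latency metrics
export BLEU_TOKENIZER=13a      # sacreBLEU tokenizer
export OUT_DIR=./output        # directory where the results are written
```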
## 📍Citation