import string
PROMPT_WITH_GLOSSARY = """
You have a glossary of terms with their Korean translations. When translating a sentence, check whether any of its words appear in the glossary and, if so, translate them using the Korean terms provided. Here is the glossary:
- revision: 개정
- method: 메소드
- secrets: 비밀값
- search helper: 검색 헬퍼
- logging level: 로그 레벨
- workflow: 워크플로우
- corner case: 코너 케이스
- tokenization: 토큰화
- architecture: 아키텍처
- attention mask: 어텐션 마스크
- backbone: 백본
- argmax: argmax
- beam search: 빔 서치
- clustering: 군집화
- configuration: 구성
- context: 문맥
- cross entropy: 교차 엔트로피
- cross-attention: 크로스 어텐션
- dictionary: 딕셔너리
- entry: 엔트리
- few shot: 퓨샷
- flatten: flatten
- ground truth: 정답
- head: 헤드
- helper function: 헬퍼 함수
- image captioning: 이미지 캡셔닝
- image patch: 이미지 패치
- inference: 추론
- instance: 인스턴스
- Instantiate: 인스턴스화
- knowledge distillation: 지식 증류
- labels: 레이블
- large language models (LLM): 대규모 언어 모델
- layer: 레이어
- learning rate scheduler: Learning Rate Scheduler
- localization: 로컬리제이션
- log mel-filter bank: 로그 멜 필터 뱅크
- look-up table: 룩업 테이블
- loss function: 손실 함수
- machine learning: 머신 러닝
- mapping: 매핑
- masked language modeling (MLM): 마스크드 언어 모델
- malware: 악성코드
- metric: 지표
- mixed precision: 혼합 정밀도
- modality: 모달리티
- monolingual model: 단일 언어 모델
- multi gpu: 다중 GPU
- multilingual model: 다국어 모델
- parsing: 파싱
- perplexity (PPL): 펄플렉서티(Perplexity)
- pipeline: 파이프라인
- pixel values: 픽셀 값
- pooling: 풀링
- position IDs: 위치 ID
- preprocessing: 전처리
- prompt: 프롬프트
- pythonic: 파이써닉
- query: 쿼리
- question answering: 질의 응답
- raw audio waveform: 원시 오디오 파형
- recurrent neural network (RNN): 순환 신경망
- accelerator: 가속기
- Accelerate: Accelerate
- architecture: 아키텍처
- arguments: 인수
- attention mask: 어텐션 마스크
- augmentation: 증강
- autoencoding models: 오토인코딩 모델
- autoregressive models: 자기회귀 모델
- backward: 역방향
- bounding box: 바운딩 박스
- causal language modeling: 인과적 언어 모델링(causal language modeling)
- channel: 채널
- checkpoint: 체크포인트(checkpoint)
- chunk: 묶음
- computer vision: 컴퓨터 비전
- convolution: 합성곱
- crop: 자르기
- custom: 사용자 정의
- customize: 맞춤 설정하다
- data collator: 데이터 콜레이터
- dataset: 데이터 세트
- decoder input IDs: 디코더 입력 ID
- decoder models: 디코더 모델
- deep learning (DL): 딥러닝
- directory: 디렉터리
- distributed training: 분산 학습
- downstream: 다운스트림
- encoder models: 인코더 모델
- entity: 개체
- epoch: 에폭
- evaluation method: 평가 방법
- feature extraction: 특성 추출
- feature matrix: 특성 행렬(feature matrix)
- fine-tuning: 미세 조정
- finetuned models: 미세 조정 모델
- hidden state: 은닉 상태
- hyperparameter: 하이퍼파라미터
- learning: 학습
- load: 가져오다
- method: 메소드
- optimizer: 옵티마이저
- pad (padding): 패드 (패딩)
- parameter: 매개변수
- pretrained model: 사전 훈련된 모델
- separator (* [SEP]를 부르는 이름): 분할 토큰
- sequence: 시퀀스
- silent error: 조용한 오류
- token: 토큰
- tokenizer: 토크나이저
- training: 훈련
- workflow: 워크플로우
Please revise the translated sentences accordingly using the terms provided in this glossary.
"""
def get_prompt_with_glossary() -> str:
    """Return the glossary prompt; the template has no placeholders, so it comes back unchanged."""
    prompt = string.Template(
        PROMPT_WITH_GLOSSARY
    ).safe_substitute()
    return prompt
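

# Usage sketch (illustrative, not part of the original module): the glossary prompt
# is typically sent as a system message ahead of the sentence to translate. The
# message format below assumes a generic chat-style LLM API and is only an example.
def build_translation_messages(english_sentence: str) -> list[dict[str, str]]:
    """Pair the glossary prompt with a source sentence as chat messages."""
    return [
        {"role": "system", "content": get_prompt_with_glossary()},
        {
            "role": "user",
            "content": f"Translate the following sentence into Korean:\n{english_sentence}",
        },
    ]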