Part of the Malaysian Seq2Seq collection: models trained on 17B tokens (81GB of cleaned text), able to understand standard Malay, local Malay, local Mandarin, Manglish, and local Tamil.
t5-3x-super-tiny-standard-bahasa-cased

Pretrained T5 3x-super-tiny standard language model for Malay.
The model was pretrained on multiple tasks; the list of supported task prefixes is given below. The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
You can use this model by installing torch or tensorflow together with the Hugging Face transformers library, then initializing it directly like this:
from transformers import T5Tokenizer, T5Model

# Load the encoder-decoder backbone and its tokenizer
model = T5Model.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
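T5Model returns raw encoder-decoder hidden states and has no language-modeling head, so it suits feature extraction rather than generation. Below is a minimal sketch of pulling contextual embeddings from the encoder; the example sentence is illustrative:

import torch
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
model = T5Model.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')

# Tokenize a Malay sentence and run only the encoder
inputs = tokenizer('Saya suka makan nasi lemak.', return_tensors='pt')
with torch.no_grad():
    encoder_outputs = model.encoder(**inputs)
# Shape: (batch, sequence_length, hidden_size)
print(encoder_outputs.last_hidden_state.shape)

For text generation tasks, load the model with a language-modeling head via T5ForConditionalGeneration instead: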
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')

# 'soalan:' is the question-answering prefix (see the task prefix list below)
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors='pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
Output:
'Mahathir Mohamad'
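Depending on your transformers version, the raw decode may also include special tokens such as <pad> and </s>; passing skip_special_tokens=True strips them:

print(tokenizer.decode(outputs[0], skip_special_tokens=True))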
Supported task prefixes (a usage sketch follows the list):
- soalan: {string}, trained using Natural QA.
- ringkasan: {string}, for abstractive summarization.
- tajuk: {string}, for abstractive title generation.
- parafrasa: {string}, for abstractive paraphrasing.
- terjemah Inggeris ke Melayu: {string}, for EN-MS translation.
- terjemah Melayu ke Inggeris: {string}, for MS-EN translation.
- grafik pengetahuan: {string}, for MS text to EN Knowledge Graph triples format.
- ayat1: {string1} ayat2: {string2}, for semantic similarity.
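Each prefix is used the same way as the question-answering example above: prepend it to the input text and call generate. A minimal sketch for EN-MS translation, reusing the tokenizer and model loaded earlier; the input sentence and max_length value are illustrative:

# 'terjemah Inggeris ke Melayu:' is the EN-MS translation prefix
input_ids = tokenizer.encode('terjemah Inggeris ke Melayu: I love to eat nasi lemak.', return_tensors='pt')
outputs = model.generate(input_ids, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))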