---
language:
- fr
- en
license: apache-2.0
datasets:
- iwslt14
metrics:
- bleu
library_name: fairseq
pipeline_tag: translation
---

# Biomedical Domain French to English Machine Translation
This French-to-English translation model is a Transformer trained on an in-domain corpus web-crawled from Wikipedia.
It was built during our experiments on domain-adapted NMT models for specialized domains.
Evaluation is carried out on the standard Medline 2019-20 development and test sets.

* source group: French
* target group: English

* model: transformer
* test set: Medline-20
* pre-processing: Moses tokenizer (see the sketch after this list)
* dataset details: [DLNMT](https://huggingface.co/datasets/SLPG/Biomedical_EN_FA_Corpus)

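The Moses pre-processing can be reproduced with the `sacremoses` package; this is a minimal sketch under the assumption that the model expects Moses-tokenized input (the sentences and the choice of `sacremoses` are illustrative, not part of this repository):

```
# pip install sacremoses
from sacremoses import MosesTokenizer, MosesDetokenizer

mt = MosesTokenizer(lang='fr')    # tokenize the French source
md = MosesDetokenizer(lang='en')  # detokenize the English output

src = mt.tokenize('La protéine est exprimée dans le foie.', return_str=True)
print(src)  # punctuation is split off as separate tokens

hyp = md.detokenize('The protein is expressed in the liver .'.split())
print(hyp)  # reattaches punctuation in the English hypothesis
```
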
For a more in-depth exploration of our work, please refer to our **[paper](https://aclanthology.org/2023.wmt-1.26.pdf)**.

## Benchmarks

| Test set  | BLEU  |
|-----------|-------|
| Medline20 | 21.11 |

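The score above can be recomputed with `sacrebleu`; a minimal sketch, assuming one-sentence-per-line hypothesis and reference files (the file names are placeholders):

```
# pip install sacrebleu
import sacrebleu

# hypotheses.en / references.en are placeholder file names
hyps = [line.strip() for line in open('hypotheses.en', encoding='utf-8')]
refs = [line.strip() for line in open('references.en', encoding='utf-8')]

bleu = sacrebleu.corpus_bleu(hyps, [refs])  # a single reference stream
print(bleu.score)  # corpus-level BLEU
```
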
## How to use the model
* The model can be downloaded via git clone:
```
git clone https://huggingface.co/SLPG/Biomedical_MT_EN-FR
```
* You can use the Fairseq library to load the model for translation:
```
from fairseq.models.transformer import TransformerModel
```

### Load the model
```
# 'path/to/model' is the cloned directory; Fairseq looks for 'model.pt' in it by default
model = TransformerModel.from_pretrained('path/to/model')
```

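If the checkpoint, binarized data, or BPE files in the cloned repository have other names, they can be passed explicitly. In this sketch every file name is a placeholder and subword-nmt BPE is an assumption; check the repository contents for the actual files:

```
# All names below are placeholders; adjust them to the files shipped with the model.
model = TransformerModel.from_pretrained(
    'path/to/model',
    checkpoint_file='checkpoint_best.pt',  # placeholder checkpoint name
    data_name_or_path='data-bin',          # placeholder dir holding the Fairseq dictionaries
    tokenizer='moses',                     # matches the Moses pre-processing above
    bpe='subword_nmt',                     # assumption: subword-nmt BPE
    bpe_codes='bpecodes',                  # placeholder BPE codes file
)
```
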
### Set the model to evaluation mode
```
model.eval()
```

### Perform inference
```
input_text = 'Saisir du texte'  # French source sentence ("Enter some text")

output_text = model.translate(input_text)
print(output_text)  # the English translation
```

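`translate` also accepts a list of sentences along with standard generation options such as the beam size, which is convenient for batch inference (the French sentences are illustrative):

```
# Batch translation with an explicit beam size
sentences = [
    'La protéine est exprimée dans le foie.',
    'Les patients ont reçu une dose unique.',
]
translations = model.translate(sentences, beam=5)
for src, hyp in zip(sentences, translations):
    print(src, '->', hyp)
```
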
## Citation

**If you use our model, kindly cite our [paper](https://aclanthology.org/2023.wmt-1.26.pdf)**:
```
@inproceedings{firdous-rauf-2023-biomedical,
    title = "Biomedical Parallel Sentence Retrieval Using Large Language Models",
    author = "Firdous, Sheema and Rauf, Sadaf Abdul",
    editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof",
    booktitle = "Proceedings of the Eighth Conference on Machine Translation",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.wmt-1.26",
    pages = "263--270",
    abstract = "We have explored the effect of in domain knowledge during parallel sentence filtering from in domain corpora. Models built with sentences mined from in domain corpora without domain knowledge performed poorly, whereas model performance improved by more than 2.3 BLEU points on average with further domain centric filtering. We have used Large Language Models for selecting similar and domain aligned sentences. Our experiments show the importance of inclusion of domain knowledge in sentence selection methodologies even if the initial comparable corpora are in domain.",
}
```