---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: source
    dtype: string
  - name: target
    sequence: string
  - name: hypothesis
    dtype: string
  - name: reference
    dtype: string
  splits:
  - name: train
    num_bytes: 59125062
    num_examples: 183582
  - name: dev
    num_bytes: 7397816
    num_examples: 22948
  - name: test
    num_bytes: 7414683
    num_examples: 22948
  download_size: 50953604
  dataset_size: 73937561
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
- ja
pretty_name: SimplifyingMT
---
## SimplifyingMT

## Dataset Description
- Repository: [https://github.com/nttcslab-nlp/SimplifyingMT_ACL24](https://github.com/nttcslab-nlp/SimplifyingMT_ACL24)
- Paper: to appear
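
The YAML header above declares a single `default` config with `train`, `dev`, and `test` splits and four fields per example (`source`, `target`, `hypothesis`, `reference`). A minimal loading sketch with the 🤗 `datasets` library might look like the following; the Hub repository id is not stated in this card, so the identifier below is a placeholder, and the commented-out fallback uses the parquet paths declared in the header.

```python
from datasets import load_dataset

# Placeholder id: the card does not state the Hub repository name.
ds = load_dataset("ORG_OR_USER/SimplifyingMT")

# Equivalent local loading from a clone of this repository, using the
# parquet paths from the YAML header above:
# ds = load_dataset(
#     "parquet",
#     data_files={
#         "train": "data/train-*",
#         "dev": "data/dev-*",
#         "test": "data/test-*",
#     },
# )

row = ds["train"][0]
print(row["source"])      # source sentence (string)
print(row["target"])      # sequence of target strings
print(row["hypothesis"])  # hypothesis translation (string)
print(row["reference"])   # reference translation (string)
```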

## Paper

Oshika et al., Simplifying Translations for Children: Iterative Simplification Considering Age of Acquisition with LLMs, Findings of ACL 2024

## Abstract

In recent years, neural machine translation (NMT) has been widely used in everyday life.
However, current NMT systems lack a mechanism to adjust the difficulty level of translations to match the user's language level.
Additionally, due to bias in the NMT training data, translations of even simple source sentences are often produced with complex words.
In particular, this could pose a problem for children, who may not be able to understand the meaning of the translations correctly. 
In this study, we propose a method that replaces words with a high Age of Acquisition (AoA) in translations with simpler words to match the translations to the user's level.
We achieve this by using large language models (LLMs), providing a triple of a source sentence, a translation, and a target word to be replaced.
We create a benchmark dataset using back-translation on Simple English Wikipedia.
The experimental results obtained from the dataset show that our method effectively replaces high-AoA words with lower-AoA words and, moreover, can iteratively replace most of the high-AoA words while still maintaining high BLEU and COMET scores.
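
The abstract describes an iterative loop: find the highest-AoA word in the current translation, give the LLM the triple of source sentence, translation, and target word, and repeat while high-AoA words remain. The sketch below illustrates that loop only; the `aoa_of` and `llm_replace` helpers are hypothetical stand-ins for an AoA lexicon and an LLM call (neither is part of this dataset), and the threshold and iteration cap are assumed values, not the paper's.

```python
from typing import Callable

AOA_THRESHOLD = 10.0  # assumed cutoff; the paper's exact threshold is not given here
MAX_ITERATIONS = 10   # safety cap on the number of replacement rounds


def simplify_translation(
    source: str,
    translation: str,
    aoa_of: Callable[[str], float],               # hypothetical: word -> AoA score
    llm_replace: Callable[[str, str, str], str],  # hypothetical: (source, translation, word) -> new translation
) -> str:
    """Iteratively replace the highest-AoA word in the translation with a simpler one."""
    for _ in range(MAX_ITERATIONS):
        words = translation.split()
        hard_words = [w for w in words if aoa_of(w) > AOA_THRESHOLD]
        if not hard_words:
            break  # every remaining word is at or below the AoA cutoff
        target_word = max(hard_words, key=aoa_of)
        # Provide the (source, translation, target word) triple to the LLM,
        # as described in the abstract, and take its rewritten translation.
        translation = llm_replace(source, translation, target_word)
    return translation
```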

## License
Simple English Wikipedia is distributed under the CC BY-SA 4.0 license.  
This dataset follows suit and is also distributed under the CC BY-SA 4.0 license.