{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:07.728843Z"
},
"title": "Transformer-based Approach for Predicting Chemical Compound Structures",
"authors": [
{
"first": "Yutaro",
"middle": [],
"last": "Omote",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ehime University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kyoumoto",
"middle": [],
"last": "Matsushita",
"suffix": "",
"affiliation": {
"laboratory": "Fujitsu Laboratories, Ltd",
"institution": "",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tomoya",
"middle": [],
"last": "Iwakura",
"suffix": "",
"affiliation": {
"laboratory": "Fujitsu Laboratories, Ltd",
"institution": "",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Doshisha University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Takashi",
"middle": [],
"last": "Ninomiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ehime University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "By predicting chemical compound structures from their names, we can better comprehend chemical compounds written in text and identify the same chemical compound given different notations for database creation. Previous methods have predicted the chemical compound structures from their names and represented them by Simplified Molecular Input Line Entry System (SMILES) strings. However, these methods mainly apply handcrafted rules, and cannot predict the structures of chemical compound names not covered by the rules. Instead of handcrafted rules, we propose Transformer-based models that predict SMILES strings from chemical compound names. We improve the conventional Transformer-based model by introducing two features: (1) a loss function that constrains the number of atoms of each element in the structure, and (2) a multi-task learning approach that predicts both SMILES strings and InChI strings (another string representation of chemical compound structures). In evaluation experiments, our methods achieved higher Fmeasures than previous rule-based approaches (Open Parser for Systematic IUPAC Nomenclature and two commercially used products), and the conventional Transformer-based model. We release the dataset used in this paper as a benchmark for the future research 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "By predicting chemical compound structures from their names, we can better comprehend chemical compounds written in text and identify the same chemical compound given different notations for database creation. Previous methods have predicted the chemical compound structures from their names and represented them by Simplified Molecular Input Line Entry System (SMILES) strings. However, these methods mainly apply handcrafted rules, and cannot predict the structures of chemical compound names not covered by the rules. Instead of handcrafted rules, we propose Transformer-based models that predict SMILES strings from chemical compound names. We improve the conventional Transformer-based model by introducing two features: (1) a loss function that constrains the number of atoms of each element in the structure, and (2) a multi-task learning approach that predicts both SMILES strings and InChI strings (another string representation of chemical compound structures). In evaluation experiments, our methods achieved higher Fmeasures than previous rule-based approaches (Open Parser for Systematic IUPAC Nomenclature and two commercially used products), and the conventional Transformer-based model. We release the dataset used in this paper as a benchmark for the future research 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge of chemical substances is necessary for developing new materials and drugs, and for synthesizing products from new materials. To utilize such knowledge, researchers have created databases containing the physical property values of chemical substances and the interrelationships among chemical substances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is thought that several billions of chemical compounds exist (Lahana, 1999; Hoffmann and Gastreich, 2019) , but only a portion of these are entered into chemical databases. Even PubChem 2 , one of the largest databases of chemical compounds, includes the information of only approximately 100 million chemical compounds. Moreover, databases for chemical domains are manually maintained, which consumes much time and cost. One of the time consuming processes is the integration of the same chemical compounds with different notations. For instance, a chemical structure can be derived from partial structures which are given notational variants, or the notation can fluctuate for a given chemical compound (Watanabe et al., 2019) . Therefore, a system that automatically predicts a chemical compound structure from its chemical compound names would improve the database creation procedure.",
"cite_spans": [
{
"start": 64,
"end": 78,
"text": "(Lahana, 1999;",
"ref_id": "BIBREF7"
},
{
"start": 79,
"end": 108,
"text": "Hoffmann and Gastreich, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 708,
"end": 731,
"text": "(Watanabe et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Structures are most commonly predicted from their notations by rule-based conversion methods (Lowe et al., 2011) . Although rule-based conversion can accurately predict the structures of chemical compounds based on systematic nomenclatures such as the International Union of Pure and Applied Chemistry (IUPAC) 3 nomenclature, it often fails the structure prediction of chemical compound names that violate these nomenclatures (e.g., Synonyms 4 ).",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "(Lowe et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To improve the low prediction performance of compounds with non-IUPAC names, we propose neural network-based models that predict chemical compound structures represented as Simplified Molecular Input Line Entry System (SMILES) (Weininger, 1988) strings from chemical compound names categorized as Synonyms 5 . In this work, we use the Transformer-based sequence-Name Type Name IUPAC 2-acetyloxybenzoic acid DATABASE ID (CAS registry number) 50-78-2 ABBREVIATION ASA COMMON aspirin to-sequence neural network model (Vaswani et al., 2017) for machine translation, which achieves a state-of-the-art performance in various tasks among the sequence-to-sequence neural network models such as recurrent neural network-based models.",
"cite_spans": [
{
"start": 227,
"end": 244,
"text": "(Weininger, 1988)",
"ref_id": "BIBREF14"
},
{
"start": 514,
"end": 536,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To improve the conventional Transformer-based model, we introduce the following two chemicalstructure oriented features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. A loss function considering the constraints on the number of atoms of each element in the chemical structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. A multi-task learning for predicting both SMILES strings and IUPAC International Chemical Identifier (InChI) (Heller et al., 2015) strings, which are representations for denoting chemical compound structures as strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For our experiments, we created a dataset from PubChem for predicting chemical compound structures represented by SMILES strings from Synonyms. The experimental results demonstrate the Transformer-based conversion methods achieve higher F-measures than the existing rule-based methods. In addition, our two proposals (i.e., constraining the number of atoms of each element and multi-task learning of both SMILES strings and InChI strings) improve the performance of the conventional Transformer-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In PubChem, the text names of chemical compounds are represented by three main types of notational categories: IUPAC, DATABASE ID, and Synonyms. IUPAC is a systematic nomenclature for chemical compound names. DATABASE ID is the unique identifier of a chemical compound in a database. An example is the Chemical Abstracts Service (CAS) 6 registry number. -4,8,11H,5,10H2,(H,12,13 )/t8-/m0/s1 Table 1 shows various \"aspirin\" representations.",
"cite_spans": [
{
"start": 354,
"end": 378,
"text": "-4,8,11H,5,10H2,(H,12,13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Chemical Compound Names",
"sec_num": "2.1"
},
{
"text": "(Figure 1 content) SMILES: N[C@@H](Cc1ccc(O)cc1)C(=O)O; InChI: InChI=1S/C9H11NO3/c10-8(9(12)13)5-6-1-3-7(11)4-2-6/h1-4,8,11H,5,10H2,(H,12,13)/t8-/m0/s1 (L-tyrosine).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chemical Compound Names",
"sec_num": "2.1"
},
{
"text": "The IUPAC nomenclature provides a systematic naming under standardized rules, which are easily and accurately converted by rule-based conversion methods (Lowe et al., 2011) ; (Heller et al., 2015) . Provided they are registered in the database, DATABASE IDs are easily converted to their corresponding chemical compounds using dictionary-lookup methods. However, neither rulebased nor dictionary-based approach can convert chemical compound names that are not covered by the rules or dictionaries. Unlike IUPAC and DATABASE ID notations, the naming patterns of Synonyms are complex and widely variable. In many cases, the chemical compound names appearing in documents cannot be converted by rule-based or dictionary-based approaches. Consequently, the prediction performance of chemical compound names is worse in Synonyms than in IUPAC, as shown in section 6.1. In our preliminary experiments, the highest F-measure obtained with an existing tool exceeded 0.96 on IUPAC data, but was reduced to 0.75 on Synonyms data. ",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "(Lowe et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 175,
"end": 196,
"text": "(Heller et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chemical Compound Names",
"sec_num": "2.1"
},
{
"text": "For multi-task learning, we represented chemical compound structures as SMILES strings and InChI strings. These two representations are major notations of chemical compound structures. We use SMILES strings as the target representation because they are simpler than InChI strings but were sufficiently representative for our purpose (i.e., creating a chemical compound database). The SMILES (Weininger, 1988 ) notation system was designed for modern chemical information processing. Based on the principles of molecular graph theory, SMILES allows rigorous structure specification using a very small and natural grammar. SMILES strings are composed of atoms and symbols representing their bonds, branches, rings, and other structural features, assembled into a linear expression of the two-dimensional structure of a molecule. An example of a SMILES string is shown in Figure 1 . In this work, we used Canonical SMILES because it uniquely determines the correspondence between chemical structures and SMILES strings.",
"cite_spans": [
{
"start": 391,
"end": 407,
"text": "(Weininger, 1988",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 869,
"end": 877,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Representation of Chemical Compound Structures",
"sec_num": "2.2"
},
{
"text": "In the InChI (Heller et al., 2015) representation, the information of a chemical compound structure is represented by five layers. In Figure 1 , the layers are separated by \"/\" symbols. Each layer adds detailed information to the following layer. Because these layers are interrelated, InChI strings are more complex than SMILES strings.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Representation of Chemical Compound Structures",
"sec_num": "2.2"
},
{
"text": "This section presents our proposed methods, namely, our tokenizer training method and sequence-to-sequence models. Let X and T be a set of chemical compound names and a set of SMILES strings, respectively. We define a training dataset consisting of n samples as D Figure 2 overviews the Transformer-based prediction of SMILES strings from chemical compound names, where <s> is a special symbol denoting the start and end of a sequence. Chemical compound names, SMILES, and InChI are long strings without explicit boundaries (such as white spaces in English text). Therefore, to convert chemical compound names to SMILES strings, we propose (a) training of a tokenizer and (b) a Transformer-based approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 272,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "3"
},
{
"text": "Chemical compound names can be tokenized by the Open Parser for Systematic IUPAC Nomenclature (OPSIN) (Lowe et al., 2011) tokenizer, a rule-based parser that generates SMILES and InChI strings from chemical compound names (mainly, from IU-PAC names). However, some chemical compound names, especially Synonyms, cannot be tokenized by rule-based tokenizers such as OPSIN. In particular, the OPSIN tokenizer is limited to chemical compound names covered by its dictionary and rules; meanwhile (as mentioned above) chemical compound names lack explicit word-boundary markers. To overcome these restrictions, we propose a method that trains tokenizers for Synonyms, SMILES, and InChI representations. Note that InChI is used in a multi-task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},
{
"text": "To eliminate the unknown tokens, our tokenizer learning method is unsupervised and covers a large set of chemical compound names. The tokenization is performed by byte pair encoding (BPE) (Sennrich et al., 2016) 7 . The BPE-based tokenizer was learned by fastBPE 8 . First, the chemical compound names obtained by the OPSIN tokenizer were segmented because fastBPE requires segmented input text. By virtue of the newly obtained BPE dictionary, the BPE-based tokenizer can tokenize chemical compound names that cannot be handled by the OPSIN tokenizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},
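{
"text": "As a minimal sketch of this pipeline (not the authors' exact code; opsin_tokenize is a hypothetical stand-in for the OPSIN tokenizer, and all file names are illustrative):\n\ndef opsin_tokenize(name: str) -> list[str]:\n    # Hypothetical stand-in for the OPSIN tokenizer; fall back to characters.\n    return list(name)\n\n# 1) Write whitespace-segmented names, since fastBPE learns codes from\n#    pre-segmented input text.\nwith open(\"names.tok\", \"w\") as f:\n    for name in [\"2-acetyloxybenzoic acid\", \"aspirin\"]:\n        f.write(\" \".join(opsin_tokenize(name)) + \"\\n\")\n\n# 2) Learn 500 merge operations (section 4.2) with the fastBPE CLI:\n#      ./fast learnbpe 500 names.tok > codes\n\n# 3) Apply the learned codes; this tokenizes even names that the OPSIN\n#    tokenizer cannot handle.\nimport fastBPE  # https://github.com/glample/fastBPE\nbpe = fastBPE.fastBPE(\"codes\")\nprint(bpe.apply([\" \".join(opsin_tokenize(\"aspirin\"))]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},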
{
"text": "When tokenizing the SMILES strings, each element (e.g., \"C\", \"O\", \"Cl\") identified by regular expressions was regarded as one token. The remaining symbols not covered by regular expressions were divided into single characters, each regarded as one token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},
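{
"text": "The following regex-based SMILES tokenizer is an illustrative sketch in this spirit (the paper does not give its exact regular expressions): multi-character element symbols and bracket atoms are kept as single tokens, and everything else falls back to single characters.\n\nimport re\n\nSMILES_TOKEN = re.compile(r\"\\[[^\\]]*\\]|Br|Cl|Si|@@|%\\d\\d|[A-Za-z0-9()=#+\\-.:/\\\\@~*$]\")\n\ndef tokenize_smiles(smiles: str) -> list[str]:\n    tokens = SMILES_TOKEN.findall(smiles)\n    # The tokens must reassemble exactly into the input string.\n    assert \"\".join(tokens) == smiles, \"untokenizable SMILES string\"\n    return tokens\n\nprint(tokenize_smiles(\"CC(=O)Oc1ccccc1C(=O)O\"))  # aspirin\n# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1', 'C', '(', '=', 'O', ')', 'O']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},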
{
"text": "For tokenizing InChI strings, the model was learned on SentencePiece (Kudo and Richardson, 2018 ), a unigram-based unsupervised training method for word segmentation. Note that InChI strings cannot be tokenized by BPE because the segmentations of InChI strings are not preliminarily given.",
"cite_spans": [
{
"start": 69,
"end": 95,
"text": "(Kudo and Richardson, 2018",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},
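{
"text": "As a sketch (file names and the training corpus are illustrative), the unigram tokenizer for InChI strings can be trained and applied with the SentencePiece Python API, using the vocabulary size of 1,000 from section 4.2:\n\nimport sentencepiece as spm\n\n# Train a unigram model directly on raw InChI strings (one per line);\n# unlike BPE, no pre-segmented input is needed.\nspm.SentencePieceTrainer.train(\n    input=\"inchi_strings.txt\",\n    model_prefix=\"inchi_unigram\",\n    vocab_size=1000,\n    model_type=\"unigram\",\n)\n\nsp = spm.SentencePieceProcessor(model_file=\"inchi_unigram.model\")\n# Aspirin's InChI, segmented into subword pieces.\nprint(sp.encode(\"InChI=1S/C9H8O4/c1-6(10)13-8-5-3-2-4-7(8)9(11)12/h2-5H,1H3,(H,11,12)\", out_type=str))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenizer",
"sec_num": "3.1"
},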
{
"text": "The Transformer model consists of stacked encoder and decoder layers. Based on self-attention, it attends to tokens in the same sequence, i.e., a single input sequence or a single output sequence. The encoder maps an input sequence to a sequence of vector representations. From this vector representations, the decoder generates an output sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based Prediction of SMILES Strings from Chemical Compound Names",
"sec_num": "3.2"
},
{
"text": "The Transformer-based model predicts SMILES strings from chemical compound names, so its input is a chemical compound name and its output is a SMILES string. During the learning process, the following objective function is minimized:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based Prediction of SMILES Strings from Chemical Compound Names",
"sec_num": "3.2"
},
{
"text": "L smiles = \u2212 log P (T |X; \u03b8 enc , \u03b8 smiles ), (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based Prediction of SMILES Strings from Chemical Compound Names",
"sec_num": "3.2"
},
{
"text": "where \u03b8 enc and \u03b8 smiles are the parameter sets of the compound name encoder and SMILES decoder, respectively, and X = x 1 , x 2 , . . . , x n is the word sequence of a chemical compound name segmented by the BPE model. T = t 1 , t 2 , . . . , t m is the = | \"C\", \"O\", \u22ef , \" = \" \u2209 \"C\" : 2, \"O\" : 1 Figure 3 : Calculating the constraints on the number of atoms of each element sequence of elements and symbols in the correct SMILES string of X.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer-based Prediction of SMILES Strings from Chemical Compound Names",
"sec_num": "3.2"
},
{
"text": "To correctly predict the chemical structure from a chemical compound name, the number of atoms of each element included in the chemical structure must be fixed. In this subsection, we propose a softmax-based loss function that constrains the number of atoms of each element, that is, we minimize the difference between the numbers of atoms of each element in the predicted and correct SMILES strings. The differences are measured by their squared errors. The squared errors are computed using the Gumbel softmax (Jang et al., 2016) function, which obtains the probability distribution of the number of atoms of each element in a predicted SMILES string. Let \u03c0 i = (\u03c0 i1 , \u03c0 i2 , . . . , \u03c0 i|V| ) be the probability distribution of the i-th output token from the Transformer model. Then, y i = (y i1 , y i2 , . . . , y i|V| ) for the i-th output token with Gumbel softmax is calculated as follows:",
"cite_spans": [
{
"start": 512,
"end": 531,
"text": "(Jang et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y ij = exp ((log(\u03c0 ij ) + g ij )/\u03c4 ) |V| k=1 exp ((log(\u03c0 ik ) + g ik )/\u03c4 ) ,",
"eq_num": "(2)"
}
],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "g ij = \u2212 log(\u2212 log(u ij )), u ij \u223c Uniform(0, 1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "where V represents the vocabulary set of SMILES, and \u03c4 is a hyperparameter of Gumbel softmax. The distribution y i approximates an one-hot vector as \u03c4 decreases, and a uniform distribution as \u03c4 increases. In this work, \u03c4 was set to 0.1. Using Equation 2, the loss function under the proposed constraints is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L atom = 1 |A| a\u2208A (N a (T ) \u2212 y pred idx(a) ) 2 ,",
"eq_num": "(3)"
}
],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "y pred = y 1 + y 2 + \u2022 \u2022 \u2022 + y m = (y pred 1 , y pred 2 , . . . , y pred |V| ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "where A is a set of elements, N a (T ) is a function that returns the number of atoms of element a in SMILES string T , and idx(a) is a function that returns the index of element a in V. Note that A contains only elemental symbols, and the other features such as symbols representing bonds are absent. More formally, \"C\", \"O\" \u2208 A, \"=\", \"#\" / \u2208 A, and V \u2283 A. Each dimension of y pred is an estimation of the frequency of the corresponding token of the vocabulary V in the predicted SMILES. The proposed constraint calculation uses only the estimation of the elements in V. The frequencies of elements not included in the correct SMILES are set to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "As an example, Figure 3 shows how the number of atoms of each element is constrained when the correct SMILES string is \"CC=O\". As \"C\" and \"O\" are elements and \"=\" is a subsidiary symbol representing a double bond, the proposed constraint function treats the number of atoms of each element (\"C\" and \"O\") as the error to be minimized, and disregards the \"=\" symbol.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "The objective function under the proposed constraints is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L smiles + \u03bb atom L atom ,",
"eq_num": "(4)"
}
],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
{
"text": "where \u03bb atom is a hyperparameter that controls the degree of considering L atom .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},
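{
"text": "The following PyTorch fragment is a minimal sketch of Eqs. 2-4 (the paper does not name a deep learning framework; atom_count_loss, element_idx, and target_counts are our own illustrative names): element_idx maps each element symbol in A to its index in the vocabulary V, and target_counts holds N_a(T) for the correct SMILES string T.\n\nimport torch\nimport torch.nn.functional as F\n\ndef atom_count_loss(logits, element_idx, target_counts, tau=0.1):\n    # Gumbel-softmax relaxation of each output-token distribution (Eq. 2).\n    y = F.gumbel_softmax(logits, tau=tau, hard=False)  # shape (m, |V|)\n    # Summing over output positions estimates token frequencies, i.e., y^pred.\n    y_pred = y.sum(dim=0)  # shape (|V|,)\n    # Squared error between true and estimated atom counts, averaged over A (Eq. 3).\n    idx = torch.tensor([element_idx[a] for a in element_idx])\n    counts = torch.tensor([float(target_counts.get(a, 0)) for a in element_idx])\n    return ((counts - y_pred[idx]) ** 2).mean()\n\n# Combined objective of Eq. 4 with lambda_atom = 0.7 (section 4.2); for the\n# correct SMILES \"CC=O\" of Figure 3, target_counts would be {\"C\": 2, \"O\": 1}.\n# loss = smiles_nll + 0.7 * atom_count_loss(logits, element_idx, target_counts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with a Constraint on the Number of Atoms",
"sec_num": "3.3"
},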
{
"text": "The same chemical structure is differently represented in a SMILES string and an InChI string. Assuming that the models for predicting SMILES and InChI strings compensate each other, we propose a multi-task learning method that shares the encoder of the name-to-SMILES and name-to-InChI conversion models, and trains both models at the same time. Let I be the set of InChI strings. We define a training dataset consisting of n samples asD = (X 1 , T 1 , I 1 ), ..., , (X n , T n , I n ) , where X i \u2208 X , T i \u2208 T , and I i \u2208 I for 1 \u2264 i \u2264 n. The objective is to learn a functionf fromD.f (X i ) predicts both T i and I i . Specifically, the proposed multi-task learning minimizes the following objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning for Predicting both SMILES Strings and InChI Strings",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L smiles + \u03bb inchi L inchi ,",
"eq_num": "(5)"
}
],
"section": "Multi-task Learning for Predicting both SMILES Strings and InChI Strings",
"sec_num": "3.4"
},
{
"text": "L inchi = \u2212 log P (I|X; \u03b8 enc , \u03b8 inchi ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning for Predicting both SMILES Strings and InChI Strings",
"sec_num": "3.4"
},
{
"text": "where \u03b8 inchi and \u03b8 enc are parameter sets for the InChI decoder and shared encoder, respectively, and \u03bb inchi is a hyperparameter that controls the degree of considering L inchi . L smiles is calculated by Eq. 1. The method is overviewed in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Multi-task Learning for Predicting both SMILES Strings and InChI Strings",
"sec_num": "3.4"
},
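{
"text": "A minimal PyTorch sketch of the shared-encoder architecture (class and variable names are ours; token embeddings, positional encodings, masks, and output projections are omitted for brevity):\n\nimport torch.nn as nn\n\nclass MultiTaskNameToStructure(nn.Module):\n    def __init__(self, d_model=512, nhead=8, num_layers=6):\n        super().__init__()\n        # Shared compound-name encoder (theta_enc).\n        self.encoder = nn.TransformerEncoder(\n            nn.TransformerEncoderLayer(d_model, nhead), num_layers)\n        # Separate decoders for SMILES (theta_smiles) and InChI (theta_inchi).\n        self.smiles_decoder = nn.TransformerDecoder(\n            nn.TransformerDecoderLayer(d_model, nhead), num_layers)\n        self.inchi_decoder = nn.TransformerDecoder(\n            nn.TransformerDecoderLayer(d_model, nhead), num_layers)\n\n    def forward(self, name_emb, smiles_emb, inchi_emb):\n        memory = self.encoder(name_emb)  # one encoding feeds both decoders\n        return (self.smiles_decoder(smiles_emb, memory),\n                self.inchi_decoder(inchi_emb, memory))\n\n# Training-step sketch of Eq. 5 with lambda_inchi = 0.3 (section 4.2):\n# loss = nll(smiles_out, smiles_gold) + 0.3 * nll(inchi_out, inchi_gold)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning for Predicting both SMILES Strings and InChI Strings",
"sec_num": "3.4"
},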
{
"text": "In all experiments, the data comprised a chemical compound name and a correct SMILES string. Using the dump data of PubChem 9 (97M compound records), the chemical compound names were converted to Synonyms associated with each CID 10 , and the correct SMILES strings were converted from isomeric SMILES strings 11 to canon- ical SMILES strings using RDKit 12 . Note that in PubChem, the Synonyms includes the IUPAC names, common names, and IDs of the compounds in chemical compound databases. Here, we used the isomeric SMILES strings because they least overlap with their corresponding CIDs. In the multi-task learning, the InChI strings are also associated with CIDs. From the dump data, 10,000 CIDs and 100,000 CIDs were randomly selected as the development and test datasets, respectively, and only the two chemical compound names with the longest edit distance were assigned to each CID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
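{
"text": "A minimal sketch of this per-CID selection (the paper does not specify the edit-distance implementation; levenshtein and farthest_pair are our own helpers):\n\nfrom itertools import combinations\n\ndef levenshtein(a: str, b: str) -> int:\n    # Standard dynamic-programming edit distance.\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        cur = [i]\n        for j, cb in enumerate(b, 1):\n            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))\n        prev = cur\n    return prev[-1]\n\ndef farthest_pair(names: list[str]) -> tuple[str, str]:\n    # Keep the two synonyms of a CID with the longest edit distance.\n    return max(combinations(names, 2), key=lambda p: levenshtein(*p))\n\nprint(farthest_pair([\"aspirin\", \"ASA\", \"2-acetyloxybenzoic acid\"]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},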
{
"text": "To create Synonyms in the development and test data, chemical compound names like IDs in the chemical compound databases were removed using manually created regular expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "In the development and test datasets, duplicate chemical compound names with different CIDs were removed 13 . From the development and test datasets, we removed 820 and 8,241 duplicates, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "As the training dataset, we selected chemical compound names that were categorized as Synonyms that could be tokenized by the OPSIN tokenizer. The size of each dataset is listed in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "The hyperparameters of the Transformer model were set as follows: number of stacks in the encoder and decoder layers = 6, number of heads 12 https://github.com/rdkit/rdkit 13 The same chemical compound name may have more than one CID. = 8, embedding dimension = 512, and dropout probability = 0.1. The loss functions L smiles and L inchi were computed using a label-smoothing cross entropy with the smoothing parameter set to 0.1. The learning rate was linearly increased to 0.0005 over the first 4,000 steps. In later steps, it was decreased proportionally to the inverse square root of the step number. The optimizer was an Adam (Kingma and Ba, 2015) optimizer with \u03b2 1 = 0.9, \u03b2 2 = 0.98, and = 10 \u22128 . The model parameters were updated 300,000 times. The hyperparameters \u03bb atom and \u03bb inchi for controlling the degree of constraint consideration were set to 0.7 and 0.3, respectively. The number of merge operations for the BPE-based tokenizer of chemical compound names was set to 500. The vocabulary size for the tokenizer of InChI strings was set to 1,000. We tuned the hyperparameters for our constraints and subword on the development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "4.2"
},
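{
"text": "As a small sketch of this schedule (the function name is ours):\n\ndef learning_rate(step: int, peak: float = 5e-4, warmup: int = 4000) -> float:\n    # Linear warmup to the peak rate over the first 4,000 steps, then decay\n    # proportional to the inverse square root of the step number.\n    if step < warmup:\n        return peak * step / warmup\n    return peak * (warmup / step) ** 0.5\n\n# learning_rate(2000) == 2.5e-4; learning_rate(16000) == 2.5e-4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "4.2"
},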
{
"text": "To present the results of our Transformer-based models, we averaged the last 10 checkpoints (saved at 1,000-step intervals) of the Transformer models. We used beam search with a beam size of 4 and length penalty \u03b1 = 0.6 (Vaswani et al., 2017) . The maximum output length of an inference was set to 200.",
"cite_spans": [
{
"start": 220,
"end": 242,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "4.2"
},
{
"text": "The results are shown in Table 3 . Here, tool A and tool B are two commercially available tools, atomnum indicates the method based on the number of atoms described in section 3.3, and inchigen denotes the multitask learning method Figure 5 : Histogram of Jaccard similarities between incorrect structures generated by inchigen with BPE and their correct structures described in section 3.4. The notations BPE and OPSIN-TK indicate the use of the BPE-based and OPSIN tokenizers, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 232,
"end": 240,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction Performance",
"sec_num": "5.1"
},
{
"text": "As confirmed in Table 3 , the proposed methods attained higher prediction performance the existing rule-based methods and the conventional Transformer-based model. inchigen with BPE showed 0.056, 0.062, and 0.095 points higher Fmeasure than OPSIN, tool A, and tool B, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Prediction Performance",
"sec_num": "5.1"
},
{
"text": "The F-measure was further improved by combining the two tokenizers (see the results of OPSIN-TK+BPE in Table 3 ). In the OPSIN-TK+BPE method, the Transformer-based method with BPE predicted the structures from chemical compound names that could be tokenized by the OPSIN tokenizer. The highest F-measure and precision (0.829 and 0.886, respectively) were achieved by inchigen with OPSIN-TK+BPE.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Prediction Performance",
"sec_num": "5.1"
},
{
"text": "In the Transformer-based models, the OPSIN tokenizer obtained higher precision than the BPEbased tokenizers because approximately 11.5% (1,293 / 11,194) of the chemical compounds in the test set could not be tokenized by OPSIN. Consequently, the precision was improved by the reduced number of outputs. In contrast, the recall was lower than in the BPE-based tokenizers.",
"cite_spans": [
{
"start": 136,
"end": 152,
"text": "(1,293 / 11,194)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Performance",
"sec_num": "5.1"
},
{
"text": "These results clarify the impact of tokenizer outputs on the recall, precision, and F-measure scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Performance",
"sec_num": "5.1"
},
{
"text": "Most of the predictions in the Transformer-based approach were grammatically correct SMILES strings. In this context, \"grammatically correct\" means that the chemical structure can be visualized from the predicted SMILES string using RDKit, and does not require the correct SMILES string of a chemical compound name. In particular, inchigen with BPE achieved grammatically correct predictions for 99 % of the test data, 10.6-17.4 % higher than OPSIN, tool A, and tool B. To evaluate the usefulness of the Transformer-based approach, we also analyzed the proportion of incorrect structure predictions that were grammatically correct SMILES strings but did not match the correct SMILES strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
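{
"text": "A minimal sketch of this grammaticality check with RDKit (the paper does not give the exact validation code):\n\nfrom rdkit import Chem\n\ndef is_grammatical(smiles: str) -> bool:\n    # A predicted SMILES string counts as grammatically correct if RDKit\n    # can parse it into a molecule object (and hence draw its structure).\n    return Chem.MolFromSmiles(smiles) is not None\n\nprint(is_grammatical(\"CC(=O)Oc1ccccc1C(=O)O\"))  # True (aspirin)\nprint(is_grammatical(\"CC(=O\"))                  # False (unclosed branch)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},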
{
"text": "To this end, we measured the Jaccard similarity (Tanimoto similarity) 14 between each structure that was incorrectly predicted by inchigen with BPE and the correct structure. The Jaccard similarity, a common technique for measuring chemical compound similarities, is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "J(X, Y ) = v X \u2022 v Y |v X + v Y | \u2212 v X \u2022 v Y ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "where the v X and v Y are binary chemical fingerprints of chemical compounds X and Y, respectively, represented by binary vectors. |v| is the L1 norm of v,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "and v X \u2022 v Y is the inner product of v X and v Y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "Here, a chemical fingerprint expresses a chemical compound structure as a calculable vector. A famous type of fingerprint is a series of binary digits (bits) that represent the presence or absence of particular partial structures in the chemical compound. For example, the Molecular Access System key (Durant et al., 2002) , which is used as the fingerprints in the present evaluation, comprises 166 partial structures of chemical compounds. Figure 5 is a histogram of the Jaccard similarity scores obtained in this analysis. We find that most of the incorrect SMILES strings generated by inchigen with BPE possessed high Jaccard similarities to the correct SMILES strings. The average Jaccard similarity was 0.753. An incorrect structure generated by inchigen with BPE is compared with its correct structure in Figure 6 . The two structures differed only by whether ethylsulfanylbutane or methanethiol was bonded in the partial structures enclosed by the red ellipses. In other words, the two structures are very similar (Jaccard similarity = 0.76).",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "(Durant et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 442,
"end": 451,
"text": "Figure 5",
"ref_id": null
},
{
"start": 813,
"end": 821,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
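{
"text": "A hedged sketch of this similarity computation with RDKit's MACCS keys (the helper name jaccard is ours):\n\nfrom rdkit import Chem\nfrom rdkit.Chem import MACCSkeys\nfrom rdkit.DataStructs import TanimotoSimilarity\n\ndef jaccard(smiles_x: str, smiles_y: str) -> float:\n    # MACCS-key fingerprints (166 structural keys); the Tanimoto coefficient\n    # of the two bit vectors equals their Jaccard similarity.\n    fp_x = MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(smiles_x))\n    fp_y = MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(smiles_y))\n    return TanimotoSimilarity(fp_x, fp_y)\n\n# e.g., aspirin vs. salicylic acid:\nprint(jaccard(\"CC(=O)Oc1ccccc1C(=O)O\", \"OC(=O)c1ccccc1O\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},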
{
"text": "From this result, we observe that even when the proposed method generates an incorrect structure, Stepwise operations on this tree are continued until the structure has been reconstructed from the name. The construction is performed on substructures associated with the terms. As mentioned earlier, many of chemical compound names described in papers and patents do not comply with IUPAC names or other systematic nomenclatures, so are difficult to reconstruct using rule-based methods. In our preliminary experiments using OPSIN and commercially available tools, the F-measures of predicting the IUPAC names in the dataset ranged from 0.878 to 0.960. However, on the Synonyms dataset, the F-measures fell to 0.719-0.758.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "Recently, SMILES strings have been applied to chemical reaction prediction (Nam and Kim, 2016; Schwaller et al., 2019) . The method of Nam and Kim (2016) predicts SMILES strings representing products from SMILES strings representing reactants and reagents. This method employs a sequence-to-sequence model with an attention mechanism based on a recurrent neural network (Bahdanau et al., 2015) . Schwaller et al. (2019) achieved higher accuracy than Nam and Kim (2016) 's model by applying the conventional Transformer model (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Nam and Kim, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 95,
"end": 118,
"text": "Schwaller et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 135,
"end": 153,
"text": "Nam and Kim (2016)",
"ref_id": "BIBREF9"
},
{
"start": 370,
"end": 393,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 396,
"end": 419,
"text": "Schwaller et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 450,
"end": 468,
"text": "Nam and Kim (2016)",
"ref_id": "BIBREF9"
},
{
"start": 525,
"end": 547,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning methods using SMILES",
"sec_num": "6.2"
},
{
"text": "Similarly to our study, their models adapt SMILES strings to sequence-to-sequence models, but our target task (predicting chemical structures from their chemical compound names) differs from theirs. To improve the accuracy of our target task, we will improve the update speed and quality of our chemical compounds databases. We also intend to solve other chemistry problems, including chemical reactions, by predictive machine learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning methods using SMILES",
"sec_num": "6.2"
},
{
"text": "This paper introduced our Transformer-based prediction methods, which convert chemical compound names to SMILES strings trained with the constraint of the number of atoms of each element in the SMILES string. We also proposed a multitask learning approach that simultaneously learns the conversions to SMILES strings and InChI strings. In an experimental comparison evaluation, our proposed method achieved higher F-measures than the existing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In future work, we intend to explore various tokenization methods, and further improve the prediction performance. We also hope to apply the proposed loss function to multi-task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://aiweb.cs.ehime-u.ac.jp/ pred-chem-struct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pubchem.ncbi.nlm.nih.gov/ 3 https://iupac.org 4 PubChem's definition of chemical compound names other than IUPAC names 5 Our Synonyms excludes DATABASE IDs from the original definition of Synonyms because DATABASE IDs can be efficiently recognized by rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In preliminary experiments, BPE achieved a higher Fmeasure than SentencePiece(Kudo and Richardson, 2018). Therefore, it was used for tokenizing the chemical compound names.8 https://github.com/glample/fastBPE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Jaccard similarity, also called the Tanimoto similarity, measures the similarities between pairs of chemical compounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research results were achieved by the RIKEN AIP-FUJITSU Collaboration Center, Japan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reoptimization of mdl keys for use in drug discovery",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Durant",
"suffix": ""
},
{
"first": "Burton",
"middle": [
"A"
],
"last": "Leland",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"R"
],
"last": "Henry",
"suffix": ""
},
{
"first": "James",
"middle": [
"G"
],
"last": "Nourse",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Chemical Information and Computer Sciences",
"volume": "42",
"issue": "6",
"pages": "1273--1280",
"other_ids": {
"DOI": [
"10.1021/ci010132r"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph L. Durant, Burton A. Leland, Douglas R. Henry, and James G. Nourse. 2002. Reoptimization of mdl keys for use in drug discovery. Journal of Chemical Information and Computer Sciences, 42(6):1273- 1280.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inchi, the iupac international chemical identifier",
"authors": [
{
"first": "Stephen",
"middle": [
"R"
],
"last": "Heller",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "McNaught",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Pletnev",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Dmitrii",
"middle": [],
"last": "Tchekhovskoi",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of cheminformatics",
"volume": "7",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen R Heller, Alan McNaught, Igor Pletnev, Stephen Stein, and Dmitrii Tchekhovskoi. 2015. Inchi, the iupac international chemical identifier. Journal of cheminformatics, 7(1):23.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The next level in chemical space navigation: going far beyond enumerable compound libraries",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Gastreich",
"suffix": ""
}
],
"year": 2019,
"venue": "Drug Discovery Today",
"volume": "24",
"issue": "5",
"pages": "1148--1156",
"other_ids": {
"DOI": [
"10.1016/j.drudis.2019.02.013"
]
},
"num": null,
"urls": [],
"raw_text": "Torsten Hoffmann and Marcus Gastreich. 2019. The next level in chemical space navigation: going far beyond enumerable compound libraries. Drug Dis- covery Today, 24(5):1148 -1156.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Categorical reparameterization with gumbel-softmax",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Shixiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2016. Cat- egorical reparameterization with gumbel-softmax. ArXiv, abs/1611.01144.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "How many leads from hts? Drug Discovery Today",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Lahana",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "4",
"issue": "",
"pages": "447--448",
"other_ids": {
"DOI": [
"10.1016/S1359-6446(99)01393-8"
]
},
"num": null,
"urls": [],
"raw_text": "Roger Lahana. 1999. How many leads from hts? Drug Discovery Today, 4(10):447 -448.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Chemical name to structure: Opsin, an open source solution",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Lowe",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Corbett",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Murray-Rust",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Glen",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Chemical Information and Modeling",
"volume": "51",
"issue": "3",
"pages": "739--753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Lowe, Peter T. Corbett, Peter Murray-Rust, and Robert C. Glen. 2011. Chemical name to struc- ture: Opsin, an open source solution. Journal of Chemical Information and Modeling, 51(3):739- 753.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linking the neural machine translation and the prediction of organic chemistry reactions",
"authors": [
{
"first": "Juno",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Jurae",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juno Nam and Jurae Kim. 2016. Linking the neural ma- chine translation and the prediction of organic chem- istry reactions.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Schwaller",
"suffix": ""
},
{
"first": "Teodoro",
"middle": [],
"last": "Laino",
"suffix": ""
},
{
"first": "Th\u00e9ophile",
"middle": [],
"last": "Gaudin",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bolgar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"A"
],
"last": "Hunter",
"suffix": ""
},
{
"first": "Costas",
"middle": [],
"last": "Bekas",
"suffix": ""
},
{
"first": "Alpha",
"middle": [
"A"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "ACS central science",
"volume": "5",
"issue": "9",
"pages": "1572--1583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Schwaller, Teodoro Laino, Th\u00e9ophile Gaudin, Peter Bolgar, Christopher A Hunter, Costas Bekas, and Alpha A Lee. 2019. Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction. ACS central science, 5(9):1572-1583.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, L ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multitask learning for chemical named entity recognition with chemical compound paraphrasing",
"authors": [
{
"first": "Taiki",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Ninomiya",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Makino",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Iwakura",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6243--6248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, and Tomoya Iwakura. 2019. Multi- task learning for chemical named entity recognition with chemical compound paraphrasing. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 6243-6248.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weininger",
"suffix": ""
}
],
"year": 1988,
"venue": "Journal of Chemical Information and Computer Sciences",
"volume": "28",
"issue": "1",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weininger. 1988. Smiles, a chemical language and information system. 1. introduction to method- ology and encoding rules. Journal of Chemical In- formation and Computer Sciences, 28(1):31-36.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Chemical structure of L-tyrosine (top), and its SMILES (middle) and InChI (bottom) representations naming category in PubChem includes ABBREVI-ATION and COMMON. As an example,",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Overview of Transformer-based prediction of SMILES strings from chemical compound names",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Overview of multi-task learning for predicting both SMILES strings and InChI strings",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Example of a chemical structure mistakenly for \"fmoc-l-buthionine\". The red-edged ellipses enclose the partial structures that differ between the two chemical structures. the outcome does not deviate greatly from the correct structure.",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "Examples of \"aspirin\" representations. In this table, ABBREVIATION and COMMON are Synonyms.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "ftp://ftp.ncbi.nlm.nih.gov/pubchem/",
"num": null,
"content": "<table><tr><td>method</td><td/><td colspan=\"3\">recall precision F-measure</td></tr><tr><td>Rule-based</td><td>OPSIN</td><td>0.693</td><td>0.836</td><td>0.758</td></tr><tr><td/><td>tool A</td><td>0.711</td><td>0.797</td><td>0.752</td></tr><tr><td/><td>tool B</td><td>0.653</td><td>0.800</td><td>0.719</td></tr><tr><td>Transformer-based</td><td colspan=\"2\">transformer 0.793</td><td>0.806</td><td>0.799</td></tr><tr><td>(BPE)</td><td>atomnum</td><td>0.798</td><td>0.808</td><td>0.803</td></tr><tr><td/><td>inchigen</td><td>0.810</td><td>0.819</td><td>0.814</td></tr><tr><td>Transformer-based</td><td colspan=\"2\">transformer 0.763</td><td>0.873</td><td>0.814</td></tr><tr><td colspan=\"2\">(OPSIN-TK + BPE) atomnum</td><td>0.768</td><td>0.876</td><td>0.818</td></tr><tr><td/><td>inchigen</td><td>0.779</td><td>0.886</td><td>0.829</td></tr><tr><td>Transformer-based</td><td colspan=\"2\">transformer 0.755</td><td>0.868</td><td>0.808</td></tr><tr><td>(OPSIN-TK)</td><td>atomnum</td><td>0.757</td><td>0.867</td><td>0.808</td></tr><tr><td/><td>inchigen</td><td>0.754</td><td>0.869</td><td>0.807</td></tr><tr><td/><td/><td colspan=\"3\">10 PubChem's compound identifier for a unique chemical</td></tr><tr><td/><td/><td>structure</td><td/><td/></tr><tr><td/><td/><td colspan=\"3\">11 SMILES strings written with isotopic and chiral specifi-</td></tr><tr><td/><td/><td>cations</td><td/><td/></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "Evaluation results of each converter for Synonyms. Transformer-based ones are our proposed methods. We evaluated the Transformer-based ones with different three tokenizers, BPE, OPSIN-TK+BPE, and OPSIN-TK.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}