{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:27.843214Z"
},
"title": "End-to-End Extraction of Structured Information from Business Documents with Pointer-Generator Networks",
"authors": [
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Sage",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIRIS",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Alex",
"middle": [],
"last": "Aussem",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIRIS",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Eglin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIRIS",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Haytham",
"middle": [],
"last": "Elghazel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIRIS",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "J\u00e9r\u00e9my",
"middle": [],
"last": "Espinas",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The predominant approaches for extracting key information from documents resort to classifiers predicting the information type of each word. However, the word level ground truth used for learning is expensive to obtain since it is not naturally produced by the extraction task. In this paper, we discuss a new method for training extraction models directly from the textual value of information. The extracted information of a document is represented as a sequence of tokens in the XML language. We learn to output this representation with a pointer-generator network that alternately copies the document words carrying information and generates the XML tags delimiting the types of information. The ability of our end-to-end method to retrieve structured information is assessed on a large set of business documents. We show that it performs competitively with a standard word classifier without requiring costly word level supervision.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The predominant approaches for extracting key information from documents resort to classifiers predicting the information type of each word. However, the word level ground truth used for learning is expensive to obtain since it is not naturally produced by the extraction task. In this paper, we discuss a new method for training extraction models directly from the textual value of information. The extracted information of a document is represented as a sequence of tokens in the XML language. We learn to output this representation with a pointer-generator network that alternately copies the document words carrying information and generates the XML tags delimiting the types of information. The ability of our end-to-end method to retrieve structured information is assessed on a large set of business documents. We show that it performs competitively with a standard word classifier without requiring costly word level supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Companies and public administrations are daily confronted with an amount of incoming documents from which they want to extract key information as efficiently as possible. They often face known types of documents such as invoices or purchase orders, thus knowing what information types to extract. However, layouts are highly variable across document issuers as there are no widely adopted specifications constraining the positioning and textual representation of the information within documents. This makes information extraction a challenging task to automate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to the incremental approaches based on layout identification (d'Andecy et al., 2018; Dhakal et al., 2019) , a number of recent works have proposed deep neural models to extract information in documents with yet unseen layouts. Following Palm et al. (2017) , most of these layoutfree approaches resort to classifiers that predict the information type of each document word. Yet, the information extraction task does not offer word level ground truth but rather the normalized textual values of each information type (Grali\u0144ski et al., 2020) . The word labels can thus be obtained by matching these textual values with the document words but this process is either time-consuming if manually performed or prone to errors if algorithmically performed. Indeed, extracted information may not appear verbatim in the document as its textual values are normalized. For example, the value \"2020-03-30\" for the document date field may be derived from the group of words \"Mar 30, 2020\". This forces the development of domain specific parsers to retrieve the matching words. Also, multiple document words can share the textual value of a extracted field while being semantically distinct, hence imposing additional heuristics for disambiguation. Otherwise, a street number may be wrongly interpreted as a product quantity, inducing noise in the word labels.",
"cite_spans": [
{
"start": 73,
"end": 96,
"text": "(d'Andecy et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 97,
"end": 117,
"text": "Dhakal et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 249,
"end": 267,
"text": "Palm et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 527,
"end": 551,
"text": "(Grali\u0144ski et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, Palm et al. (2019) is the only related model that directly learns from naturally produced extraction results. However, the authors only tackle the recognition of independent and non-recurring fields such as the document date and omit the extraction of structured entities. Such entities are structures composed of multiple field values. Within documents, structured information is often contained in tables. For example, a product entity is usually described in a table row with its field values, such as price and quantity, being in different columns. Our work is intended to remedy this lack by proposing end-to-end methods for processing structured information. As a first step towards full end-to-end extraction, we focus in this paper on the recognition of fields whose values always appears verbatim in the document, thus eliminating the need for normalization operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As illustrated in Figure 1 , extracted structured information can be represented in a markup lan- In this example, we retrieve the ordered products which are contained in the main table of the document. Two fields are recognized for each product entity: the ID number and the quantity.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "guage that describes both its content and its structure. Among many others, we choose the XML language 1 for its simplicity. We define as many XML tag pairs as the number of entity and field types to extract. A pair of opening and closing field tags delimits a list of words constituting a field instance of the corresponding type. Following successful applications of sequenceto-sequence models in many NLP tasks (Otter et al., 2020), we employ a recurrent encoder-decoder architecture for outputting such XML representations. Conditioned on the sequence of words from the document, the decoder emits one token at each time step: either a XML tag or a word belonging to a field value. Since field values are often specific to a document or a issuer, extracted information cannot be generated from a fixed size vocabulary of words. Rather, we make use of pointing abilities of neural models (Vinyals et al., 2015) to copy words of the document that carry relevant information. Specifically, we adapt the Pointer-Generator Network (PGN) developed by See et al. (2017) for text summarization to our extraction needs. We evaluate the resulting model for extracting ordered products from purchase orders. We demonstrate that this end-to-end model performs competitively with a word classifier based model while avoiding 1 https://en.wikipedia.org/wiki/XML to create supervision at the word level.",
"cite_spans": [
{
"start": 891,
"end": 913,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 1049,
"end": 1066,
"text": "See et al. (2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
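{
"text": "To make the target representation concrete, the following minimal sketch (not the authors' code; the tag names <product>, <id_number> and <quantity> and the root tag <products> are hypothetical) builds the token sequence that the decoder is trained to emit for one document, alternating XML tags generated from the vocabulary with words copied verbatim from the document:\n\ndef build_target_sequence(products):\n    # products: list of entities, each mapping a field type to the list of words\n    # copied from the document, e.g. [{'id_number': ['R-1141'], 'quantity': ['2']}]\n    tokens = ['<products>']\n    for product in products:\n        tokens.append('<product>')\n        for field in ('id_number', 'quantity'):\n            tokens.append('<' + field + '>')\n            tokens.extend(product[field])  # words carrying the field value\n            tokens.append('</' + field + '>')\n        tokens.append('</product>')\n    tokens.append('</products>')\n    return tokens\n\n# build_target_sequence([{'id_number': ['R-1141'], 'quantity': ['2']}]) yields\n# ['<products>', '<product>', '<id_number>', 'R-1141', '</id_number>', '<quantity>', '2', '</quantity>', '</product>', '</products>']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},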
{
"text": "As mentioned before, most methods for information extraction in documents take the word labels for granted and rather focus on improving the encoding of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "Holt and Chisholm (2018) combine heuristic filtering for identifying word candidates and a gradient boosting decision tree for independently scoring them. The strength of their model mainly lies on the wide range of engineered features describing syntactic, semantic, positional and visual content of each word as well as its local context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "When extracting the main fields of invoices and purchase orders, Palm et al. 2017and Sage et al. (2019) both employ recurrent connections across the document to reinforce correlations between the class predictions of words. They show empirically that Recurrent Neural Networks (RNN) surpass classifiers whose prediction dependence is only due to local context knowledge introduced in the word representations. For this purpose, they arrange the words within a document as a unidimensional sequence and pass the word representations into a bidirectional LTSM (BLSTM) network for field classification. Similar to the state-of-the-art in Named Entity Recognition (Yadav and Bethard, 2018) , Jiang et al. (2019) also add a Conditional Random Field (CRF) on top of the BLSTM to refine predictions while extracting information from Chinese contracts.",
"cite_spans": [
{
"start": 85,
"end": 103,
"text": "Sage et al. (2019)",
"ref_id": "BIBREF29"
},
{
"start": 660,
"end": 685,
"text": "(Yadav and Bethard, 2018)",
"ref_id": "BIBREF33"
},
{
"start": 688,
"end": 707,
"text": "Jiang et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "Yet, unlike plain text, word spacing and alignments in both horizontal and vertical directions convey substantial clues for extracting information of documents. By imposing a spurious unidimensional word order, these architectures significantly favor transmission of context in one direction at the expense of the other. Lately, methods that explicitly consider the two dimensional structure of documents have emerged with two different approaches. Lohani et al. (2018) , Liu et al. (2019) and Hole\u010dek et al. (2019) represent documents by graphs, with each node corresponding to a word or a group of words and edges either connecting all the nodes or only spatially near neighbors. Convolutional or recurrent mechanisms are then applied to the graph for predicting the field type of each node.",
"cite_spans": [
{
"start": 449,
"end": 469,
"text": "Lohani et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 472,
"end": 489,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 494,
"end": 515,
"text": "Hole\u010dek et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "Some authors rather represent a document page as a regular two dimensional grid by downscaling the document image. Each pixel of the grid contains at most one token -either a character or a word -and its associated representation. Then, they employ fully convolutional neural networks to model the document, either with dilated convolutions Palm et al., 2019) or encoder-decoder architectures performing alternately usual and transposed convolutions (Katti et al., 2018; Denk and Reisswig, 2019; Dang and Thanh, 2019) . Finally, all these works except Palm et al. (2019) output a segmentation mask representing the probabilities that each token contained in a pixel of the grid belong to the field types to extract. Katti et al. 2018and Denk and Reisswig (2019) additionally tackle tabular data extraction by predicting the coordinates of the table rows bounding boxes to identify the invoiced products.",
"cite_spans": [
{
"start": 341,
"end": 359,
"text": "Palm et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 450,
"end": 470,
"text": "(Katti et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 471,
"end": 495,
"text": "Denk and Reisswig, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 496,
"end": 517,
"text": "Dang and Thanh, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "Instead of directly classifying each word of the document, Palm et al. (2019) output attention scores to measure the relevance of each word given the field type to extract. The relevant words are then copied and fed to learned neural parsers to generate a normalized string corresponding to the expected value of the field. The predicted string is measured by exact match with the ground truth. Evaluated on 7 fields types of invoices, their end-to-end method outperforms a logistic regression based model whose word labels are derived from end-to-end ground truth using heuristics. However, their approach cannot extract structured information such as the invoiced products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "Although there are publicly released datasets for the task of information extraction in documents (Jiang et al., 2019; Grali\u0144ski et al., 2020) , as far as we know, none of them are annotated to recognize structured data.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Jiang et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 119,
"end": 142,
"text": "Grali\u0144ski et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information extraction",
"sec_num": "2.1"
},
{
"text": "A number of works prove that neural encoderdecoder models can produce well-formed and welltyped sequences in a structured language without supplying an explicit grammar of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured language generation",
"sec_num": "2.2"
},
{
"text": "Extending traditional text recognition, some authors transform images of tables (Zhong et al., 2019; Deng et al., 2019) and mathematical formulas (Deng et al., 2017; Wu et al., 2018) into their LaTeX or HTML representations. After applying a convolutional encoder to the input image, they use a forward RNN based decoder to generate tokens in the target language. The decoder is enhanced with an attention mechanism over the final feature maps to help focusing on the image part that is recognized at the current time step.",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Zhong et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 101,
"end": 119,
"text": "Deng et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 146,
"end": 165,
"text": "(Deng et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 166,
"end": 182,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured language generation",
"sec_num": "2.2"
},
{
"text": "Neural encoder-decoder architectures have also been used for semantic parsing which aims at converting natural language utterances to formal meaning representations (Dong and Lapata, 2016; Rabinovich et al., 2017) . The representations may be an executable language such as SQL and Prolog or more abstract representations like abstract syntax trees. Text being the modality of both input and output sequences, Jia and Liang (2016), Zhong et al. (2017) and McCann et al. (2018) include attentionbased copying abilities in their neural model to efficiently produce the rare or out-of-vocabulary words.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "(Dong and Lapata, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 189,
"end": 213,
"text": "Rabinovich et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 432,
"end": 451,
"text": "Zhong et al. (2017)",
"ref_id": "BIBREF35"
},
{
"start": 456,
"end": 476,
"text": "McCann et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured language generation",
"sec_num": "2.2"
},
{
"text": "We assume that the text of a document is already transcribed before extracting its information. For scanned documents, we employ a commercial Optical Character Recognition (OCR) engine for retrieving the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "The method we propose for extracting structured information from a document is depicted in Figure 1 . For each decoder time step, a generation probability p gen \u2208 [0, 1] is calculated, which weights the probability of generating XML tags from the vocabulary versus copying words from the document carrying information. The vocabulary distribution and the attention distribution are weighted and summed to obtain the final distribution. For the illustrated time step, the model mainly points to the word R-1141, i.e. the ID number of the first product.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "articles. The attention-based pointing mechanism allows to accurately reproduce factual information of articles by copying words that are not in the generator's vocabulary, e.g. rare proper nouns. Similarly, we take advantage of its pointing ability to copy the words from the document which carry relevant information while allowing the generator to produce the XML tags which structure the extracted information. In the following subsections, we describe in details our model and highlight key differences with the original PGN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Each word w i of the document is represented by a vector denoted r i . In complement to the word level embeddings used by See et al. (2017), we enrich representations with additional textual features to cope with the open vocabulary observed within the corpus of documents. First, we follow the C2W model of Ling et al. (2015) to form a textual representation q c i at the character level. To that end, we apply a BLSTM layer over the dense embed-dings associated to the characters of the word and concatenate the last hidden state in both directions. We also add the number n i of characters in the word and case features, i.e. the percentage \u03b1 i of its characters in upper case and a binary factor \u03b2 i indicating if it has a title form. We concatenate all these features to form the textual component r t i of the word representation:",
"cite_spans": [
{
"start": 308,
"end": 326,
"text": "Ling et al. (2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r t i = [q w i , q c i , n i , \u03b1 i , \u03b2 i ]",
"eq_num": "(1)"
}
],
"section": "Word representation",
"sec_num": "3.1"
},
{
"text": "where q w i is its word level embedding. To take into account the document layout, we also compute spatial features r s i of the word. These encompass the coordinates of the top-left and bottom-right edges of the word bounding box, normalized by the height and width of the page. We concatenate the spatial r s i and textual r t i components to build the word representation r i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word representation",
"sec_num": "3.1"
},
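{
"text": "The handcrafted part of this representation can be sketched as follows (a minimal illustration of the features described above; the learned embeddings q_i^w and q_i^c are assumed to be produced elsewhere by the embedding layer and the character-level BLSTM):\n\ndef handcrafted_word_features(word, bbox, page_width, page_height):\n    # textual case features: character count n_i, upper-case ratio alpha_i, title-case flag beta_i\n    n = len(word)\n    alpha = sum(c.isupper() for c in word) / max(n, 1)\n    beta = float(word.istitle())\n    # spatial features: top-left and bottom-right corners of the word bounding box,\n    # normalized by the page width and height\n    x0, y0, x1, y1 = bbox\n    spatial = [x0 / page_width, y0 / page_height, x1 / page_width, y1 / page_height]\n    return [n, alpha, beta], spatial\n\n# The full representation r_i concatenates [q_i^w, q_i^c, n_i, alpha_i, beta_i] with the spatial features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word representation",
"sec_num": "3.1"
},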
{
"text": "The words of the document are organized as a unidimensional sequence of length N by reading them in a top-left to bottom-right order. The word representations {r i } i=1..N are then fed to a two-layer BLSTM to obtain contextualized representations through the encoder hidden states {h i } i=1..N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
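{
"text": "A minimal Keras sketch of this encoder is given below (128 units per layer, as reported in Section 5 for the PGN; it is an illustration under those assumptions, not the authors' implementation):\n\nimport tensorflow as tf\n\ndef build_encoder(hidden_size=128):\n    # two stacked bidirectional LSTM layers; each direction has hidden_size / 2 cells\n    # so that the concatenated outputs are hidden_size-dimensional\n    return tf.keras.Sequential([\n        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_size // 2, return_sequences=True)),\n        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_size // 2, return_sequences=True)),\n    ])\n\n# h = build_encoder()(r)  # r: (batch, N, d_word) word representations -> h: (batch, N, hidden_size)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},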
{
"text": "Decoding is performed by a two-layer forward LSTM, producing a hidden state s t at each time step t. An attention mechanism is added on top of the decoder to compute the attention distribution a t over the document words and the context vector See et al. (2017) use the alignment function of Bahdanau et al. 2015, we employ the general form of Luong et al. (2015) as this is computationally less expensive while showing similar performances:",
"cite_spans": [
{
"start": 244,
"end": 261,
"text": "See et al. (2017)",
"ref_id": "BIBREF30"
},
{
"start": 344,
"end": 363,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "h * t = N i=1 a t i h i . While",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e t i = s T t W a h i (2) a t = softmax(e t )",
"eq_num": "(3)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
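{
"text": "A NumPy sketch of equations (2) and (3); the shapes are assumptions (s_t of size d_dec, encoder states H of shape (N, d_enc), W_a of shape (d_dec, d_enc)):\n\nimport numpy as np\n\ndef luong_general_attention(s_t, H, W_a):\n    # e_i^t = s_t^T W_a h_i for every document word i (equation 2)\n    e = H @ (W_a.T @ s_t)\n    # a^t = softmax(e^t) (equation 3), shifted by the max for numerical stability\n    a = np.exp(e - e.max())\n    a /= a.sum()\n    # context vector h_t^*: attention-weighted sum of the encoder states\n    h_star = a @ H\n    return a, h_star",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},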
{
"text": "where W a is a matrix of learnable parameters. We simplify the computing of the vocabulary distribution P vocab as the generator is only in charge of producing the XML tags and thus has a vocabulary of limited size. We apply a unique dense layer instead of two and do not involve the context vector h * t in the expression of P vocab :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P vocab = softmax(V s t + b)",
"eq_num": "(4)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "where V and b are learnable parameters The generation probability p gen \u2208 [0, 1] for choosing between generating XML tags versus copying words from the document is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "p gen = \u03c3(w T h h * t + w T s s t + w T x x t + b ptr ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "where x t is the decoder input, vectors w h , w s , w x and scalar b ptr are learnable parameters and \u03c3 is the sigmoid function. Then, p gen weights the sum of the attention and vocabulary distributions to obtain the final distribution P (w) over the extended vocabulary, i.e. the union of all XML tags and unique textual values from the document words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w) = p gen P vocab (w)+(1\u2212p gen ) i:w i =w a t i",
"eq_num": "(6)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "Note that if a textual value appears multiple times in the document, the attention weights of all the corresponding words are summed for calculating its probability of being copied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
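{
"text": "Equations (5) and (6) can be sketched as follows (plain NumPy/Python; the learnable vectors w_h, w_s, w_x and scalar b_ptr are passed in, and the dictionary-based mixture is only an illustration of how attention weights of identical textual values are summed):\n\nimport numpy as np\n\ndef generation_probability(h_star, s_t, x_t, w_h, w_s, w_x, b_ptr):\n    # equation (5): p_gen = sigmoid(w_h^T h_t^* + w_s^T s_t + w_x^T x_t + b_ptr)\n    z = w_h @ h_star + w_s @ s_t + w_x @ x_t + b_ptr\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef final_distribution(p_gen, P_vocab, tag_vocab, attention, doc_words):\n    # equation (6): mix the tag vocabulary distribution with the copy distribution\n    P = {tag: p_gen * p for tag, p in zip(tag_vocab, P_vocab)}\n    for a_i, word in zip(attention, doc_words):\n        # attention weights of all occurrences of the same textual value are summed\n        P[word] = P.get(word, 0.0) + (1.0 - p_gen) * a_i\n    return P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},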
{
"text": "During training, the decoder input x t is the previous token of the ground truth sequence, while in inference mode, the previous token emitted by the decoder is used. An input token is either represented by a dense embedding if the token is a XML tag or by the textual feature set r t i of the corresponding words {w i } if the token is copied from the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "To help the model keeping track of words already copied, we concatenate the previous context vector h * t\u22121 with the input representation x t before applying the first decoder LSTM layer (Luong et al., 2015) . We also employ the coverage mechanism proposed in See et al. (2017) in order to reduce repetitions in the generated sequences. The idea is to combine the attention distributions of the previous time steps in the coverage vector c t = t\u22121 t =1 a t to compute the current attention distribution. We adapt their mechanism to our alignment function, thus changing the equation 2 to:",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 260,
"end": 277,
"text": "See et al. (2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e t i = s T t (W a h i + c t i w c )",
"eq_num": "(7)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
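{
"text": "A sketch of the coverage-augmented alignment of equation (7), with the same assumed shapes as in the attention sketch above (the coverage vector is the running sum of past attention distributions and w_c has size d_dec):\n\nimport numpy as np\n\ndef coverage_attention_scores(s_t, H, W_a, coverage, w_c):\n    # e_i^t = s_t^T (W_a h_i + c_i^t w_c) for every document word i (equation 7)\n    return (H @ W_a.T + np.outer(coverage, w_c)) @ s_t\n\n# After each decoding step, the coverage vector is updated with the new attention: coverage += a_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},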
{
"text": "where w c is a vector of adjustable parameters. The training loss is the combination of the negative log-likelihood of the target tokens {w * t } t=1..T and the coverage loss which penalizes the model for repeatedly paying attention to the same words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "loss t = \u2212 log P (w * t ) + \u03bb N i=1 min(a t i , c t i ) (8) loss = 1 T T t=1 loss t (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
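{
"text": "A sketch of the training objective of equations (8) and (9), assuming the per-step probabilities of the target tokens, attention distributions and coverage vectors have already been collected during decoding:\n\nimport numpy as np\n\ndef sequence_loss(target_probs, attentions, coverages, lam=0.1):\n    # equation (8): negative log-likelihood plus coverage penalty at each step;\n    # equation (9): average over the T decoding steps (lam = 0.1 as in Section 5)\n    step_losses = []\n    for p, a, c in zip(target_probs, attentions, coverages):\n        step_losses.append(-np.log(p + 1e-12) + lam * np.minimum(a, c).sum())\n    return sum(step_losses) / len(step_losses)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},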
{
"text": "where \u03bb is a scalar hyperparameter. When the decoding stage is performed, the resulting string is parsed according to the XML syntax to retrieve all the predicted entities and fields of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "We train and evaluate our extraction model on a dataset of real world business documents which unfortunately cannot be publicly released. It consists of 219,476 purchase orders emanated by 17,664 issuers between April 2017 and May 2018. The dataset is multilingual and multicultural even if the documents mainly originate from the U.S. The number of purchase orders per issuer is at least 3 and at most 31, ensuring diversity of document layouts. Training, validation and test sets have distinct issuers to assess the ability of the model to generalize to unseen layouts. They have been constructed by randomly picking 70 %, 10 % and 20 % of the issuers, respectively. More detailed statistics of the dataset are given in the Table 1 . 3.52 Tokens in output sequence (Avg.) 32.24 Words per ID number instance (Avg.)",
"cite_spans": [
{
"start": 767,
"end": 773,
"text": "(Avg.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 726,
"end": 733,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "This dataset comes from a larger corpus of documents with their information extraction results that have been validated by end users of a commercial document automation software. Among all the types of information, we focus on the extraction of the ordered product entities which have two mandatory fields: ID number and quantity. From this corpus, we select the purchase orders whose location in the document is supplied for all its field instances. The knowledge of location comes from a layout-based incremental extraction system and ensures that we perfectly construct the labels for training a word classifier. Since a field instance can be composed of multiple words, we adopt the IOB (Inside, Outside, Beginning) tagging scheme of Ramshaw and Marcus (1999) for defining the field type of each document word.",
"cite_spans": [
{
"start": 738,
"end": 763,
"text": "Ramshaw and Marcus (1999)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Our end-to-end model is compared on this dataset with a baseline extraction method based on a word classifier. This baseline encodes the document as the end-to-end model does, i.e. with the same operations for constructing the word representations r i and the encoder outputs h i . On top of the encoder, a dense layer with softmax activation is added with 5 output units. 4 of these units refer to the beginning and continuation of an instance for ID number and quantity fields. The remaining unit is dedicated to the Outside class, i.e. for the document words carrying information that we do not want to extract. The words with a predicted probability above 0.5 for one of the 4 field units are associ-ated with the corresponding class, otherwise we attribute the Outside class. Field instances are then constructed by merging words with beginning and continuing classes of the same field type. Finally, each quantity instance is paired with an ID number instance to form the product entities. To do so, the Hungarian algorithm (Kuhn, 1955) solves a linear sum assignment problem with the vertical distance on the document as the matching cost between two field instances. For our task, this pairing strategy is flawless if the field instances are perfectly extracted by the word classifier.",
"cite_spans": [
{
"start": 1030,
"end": 1042,
"text": "(Kuhn, 1955)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
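{
"text": "The pairing step of the baseline can be sketched with SciPy's linear sum assignment solver (field instances are assumed to carry a vertical coordinate 'y', e.g. the top of their bounding box; this illustrates the strategy rather than reproducing the exact implementation):\n\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef pair_product_fields(id_instances, qty_instances):\n    # cost of pairing an ID number instance with a quantity instance:\n    # their vertical distance on the document page\n    cost = np.abs(np.subtract.outer([i['y'] for i in id_instances],\n                                    [q['y'] for q in qty_instances]))\n    rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimal assignment\n    return [(id_instances[r], qty_instances[c]) for r, c in zip(rows, cols)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},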
{
"text": "The model hyperparameters are chosen according to the micro averaged gain on the validation set. The end-to-end model and baseline share the same hyperparameter values, except the number of BLSTM cells in each encoder layer that is fixed to 128 and 256 respectively, to ensure similar numbers of trainable parameters. The input character and word vocabularies are derived from the training set. We consider all observed characters while we follow the word vocabulary construction of Sage et al. (2019) designed for business documents. This results in vocabularies of respectively 5,592 and 25,677 elements. Their embedding has a size of 16 and 32 and are trained from scratch. The BLSTM layer iterating over characters of document words has 32 cells. For all BLSTM layers, each direction has n/2 LSTM cells and their output are concatenated to form n-dimensional vectors. The decoder layers have a size of 128 and are initialized by the last states of the encoding BLSTM layers. At inference time, we decode with a beam search of width 3 and we set the maximum length of the output sequence to the number of words in the document. This results in 1,400,908 and 1,515,733 trainable parameters for the PGN and the word classifier.",
"cite_spans": [
{
"start": 483,
"end": 501,
"text": "Sage et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To deal with exploding gradients, we apply gradient norm clipping (Pascanu et al., 2013 ) with a clipping threshold of 5. The loss is minimized with the Adam optimizer, its learning rate is fixed to 0.001 the first 2 epochs and then exponentially decreases by a factor of 0.8. We stop the training when the micro gain on the validation set has not improved in the last 3 epochs. As suggested in See et al. (2017) , the coverage loss is added to the minimized loss only at the end of training, for one additional epoch. We weight its contribution by setting \u03bb = 0.1 as the original value of 1 makes the negative log-likelihood loss increase. The batch size is 8 if the model fits on GPU RAM, 4 other-wise.",
"cite_spans": [
{
"start": 66,
"end": 87,
"text": "(Pascanu et al., 2013",
"ref_id": "BIBREF25"
},
{
"start": 395,
"end": 412,
"text": "See et al. (2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The experiments are carried out on a single NVIDIA TITAN X GPU. Model training takes from 3 to 10 days for 10 to 15 epochs. Due to the computational burden, the hyperparameters values have not been optimized thoroughly. Besides, we are not able to train the models on documents with more than 1800 words, which amounts to about 4 % of the training set being put aside. Yet, we evaluate the models on all documents of the validation and test sets. The implementation is based on the seq2seq subpackage of TensorFlow Addons (Luong et al., 2017) .",
"cite_spans": [
{
"start": 522,
"end": 542,
"text": "(Luong et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluate the models by measuring how much work is saved by using them rather than manually doing the extraction. For this purpose, we first assign the predicted products of a document to the ground truth entities, then we count the number of deletions, insertions and modifications to match the ground truth field instances from the predicted instances that have been assigned. The modification counter is incremented by one when a predicted field value and its target do not exactly match. For a given field, we estimate the manual post-processing gain with the following edit distance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual post-processing cost",
"sec_num": "6.1"
},
{
"text": "1 \u2212 # deletions + # insertions + # modifications N (10) where N is the number of ground truth instances in the document for this field. Micro averaged gain is calculated by summing the error counters of ID number and quantity fields and applying equation 10. We select the assignment between predicted and target entities that maximizes the micro gain of the document. To assess the post-processing gains across a set of documents, we sum the counters of each document before using equation 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual post-processing cost",
"sec_num": "6.1"
},
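{
"text": "A sketch of the post-processing gain of equation (10) for one field of one document:\n\ndef post_processing_gain(n_deletions, n_insertions, n_modifications, n_ground_truth):\n    # fraction of manual extraction work saved by the model (equation 10)\n    return 1.0 - (n_deletions + n_insertions + n_modifications) / n_ground_truth\n\n# Doing the extraction entirely by hand corresponds to n_insertions = N and\n# n_deletions = n_modifications = 0, i.e. a gain of 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual post-processing cost",
"sec_num": "6.1"
},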
{
"text": "Our evaluation methodology is closely related to Katti et al. (2018) . However, they compute metrics independently for each field while we take into account the structure of entities in our evaluation.",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "Katti et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Manual post-processing cost",
"sec_num": "6.1"
},
{
"text": "We report in Table 2 the results of both extraction models on the test set. We retain the best epoch of each model according to the validation micro gain. All post-processing gains have positive values, meaning that it is more efficient to correct potential errors of models than manually perform the extraction from scratch (in this case, # insertions = N and # deletions = # modifications = 0). We note that the performances of the word classifier and PGN are quite similar. Even if its field level gains are a little behind, the PGN slightly surpasses the word classifier for recognizing whole documents. Both models significantly reduce human efforts as the end users do not have any corrections to make for more than 2 out of 3 documents. Besides, the PGN produces sequences that are well-formed according to the XML syntax for more than 99.5 % of the test documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Manual post-processing cost",
"sec_num": "6.1"
},
{
"text": "The comparison with the baseline confirms that the PGN has learned to produce relevant attention distributions in order to copy words carrying useful information. In particular, when the expected field value appears multiple times in the document, the PGN is able to localize the occurrence that is semantically correct, as illustrated in the document displayed in Figure 3 . As shown, the PGN focuses its attention on the word 1 in the table row of the product that is currently recognized. On the contrary, the model ignores the occurrences of 1 which are contained in the rest of the product table and in the address blocks. This behaviour is noteworthy since the model is not explicitly taught to perform this disambiguation.",
"cite_spans": [],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Visual inspection of the attention mechanism",
"sec_num": "6.2"
},
{
"text": "The main difficulty faced by both models is ambiguity in the ground truth as our dataset has been annotated by users from many distinct companies. Some documents contain multiple valid values for a field of a unique product. For example, there may be the references from both recipient and issuer for the ID number. The field value which is retained as ground truth then depends on further processing of the extracted information, e.g. integration into a Enterprise Resource Planning (ERP) system. This seriously prevents any extraction model from reaching the upper bound of post-processing gain metrics which is 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Besides, the ID number field does not always have a dedicated physical column and rather appears within the description column, without keywords clearly introducing the field instances such as in Figure 1 . Also, its instances are constituted on average of more words than the quantity, making less likely the exact match between predicted and target instances. These additional complications explain the gap of model performances between the two fields.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Unlike the word classifier based approach, the PGN tends to repeat itself by duplicating some field instances and skipping others. This is especially observed for documents having a large number of products, therefore large output sequences. To mea-sure the impact of these repetitions on metrics, we split the test set into 3 subsets according to the number of products contained in the document: no more than 3, between 4 and 14 and at least 15 entities. The last subset gathers documents with output sequences of at least 122 tokens. We recompute the metrics for each subset and report the micro averaged gains in Table 3 . The performances are stable for the word classifier whatever the number of entities in the document. The PGN is on par with the word classifier for the documents with a small number of products which constitute the vast majority of the dataset. However, its extraction performance greatly declines for large output sequences, indicating that the PGN is more affected by repetitions than the baseline. It is unclear why the coverage mechanism is not as successful on our task as it is for abstractive summarization (See et al., 2017) . We also tried to use the temporal attention from Paulus et al. (2018) to avoid copying the same words multiple times but this was unsuccessful too.",
"cite_spans": [
{
"start": 1141,
"end": 1159,
"text": "(See et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 1211,
"end": 1231,
"text": "Paulus et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 617,
"end": 624,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We discussed a novel method based on pointergenerator networks for extracting structured information from documents. We showed that learning directly from the textual value of information is a viable alternative to the costly word level supervision commonly used in information extraction. In this work, we focused on purchase orders but the approach could be used to extract any structured entity as long as its information type is known at training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Future work should aim to: i) reduce repetitions in the output sequences, ii) add parsing abilities into our encoder-decoder in order to transform the values of copied words. This will allow to process fields that need to be normalized when being extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "The work presented in this paper was supported by Esker. We thank them for providing the dataset on which experiments were performed and for insightful discussions about these researches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Field extraction by hybrid incremental and a-priori structural templates",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Vincent Poulain D'andecy",
"suffix": ""
},
{
"first": "Mar\u00e7al",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rusi\u00f1ol",
"suffix": ""
}
],
"year": 2018,
"venue": "13th IAPR International Workshop on Document Analysis Systems (DAS)",
"volume": "",
"issue": "",
"pages": "251--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Poulain d'Andecy, Emmanuel Hartmann, and Mar\u00e7al Rusi\u00f1ol. 2018. Field extraction by hybrid incremental and a-priori structural templates. In 2018 13th IAPR International Workshop on Docu- ment Analysis Systems (DAS), pages 251-256. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "End-to-end information extraction by characterlevel embedding and multi-stage attentional u-net",
"authors": [
{
"first": "Tuan",
"middle": [
"Anh"
],
"last": "",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Dat",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the British Machine Vision Conference (BMVC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Anh Nguyen Dang and Dat Nguyen Thanh. 2019. End-to-end information extraction by character- level embedding and multi-stage attentional u-net. In Proceedings of the British Machine Vision Con- ference (BMVC).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Image-to-markup generation with coarse-to-fine attention",
"authors": [
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Anssi",
"middle": [],
"last": "Kanervisto",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "980--989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuntian Deng, Anssi Kanervisto, Jeffrey Ling, and Alexander M Rush. 2017. Image-to-markup gener- ation with coarse-to-fine attention. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 980-989. JMLR. org.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Challenges in end-to-end neural scientific table recognition",
"authors": [
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "894--901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuntian Deng, David Rosenberg, and Gideon Mann. 2019. Challenges in end-to-end neural scientific ta- ble recognition. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 894-901. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "{BERT}grid: Contextualized embedding for 2d document representation and understanding",
"authors": [
{
"first": "I",
"middle": [],
"last": "Timo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Denk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Reisswig",
"suffix": ""
}
],
"year": 2019,
"venue": "Workshop on Document Intelligence at NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timo I. Denk and Christian Reisswig. 2019. {BERT}grid: Contextualized embedding for 2d document representation and understanding. In Workshop on Document Intelligence at NeurIPS 2019.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "One-shot template matching for automatic document data capture",
"authors": [
{
"first": "Pranjal",
"middle": [],
"last": "Dhakal",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Munikar",
"suffix": ""
},
{
"first": "Bikram",
"middle": [],
"last": "Dahal",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial Intelligence for Transforming Business and Society",
"volume": "1",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranjal Dhakal, Manish Munikar, and Bikram Da- hal. 2019. One-shot template matching for auto- matic document data capture. In 2019 Artificial Intelligence for Transforming Business and Society (AITB), volume 1, pages 1-6. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language to logical form with neural attention",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "33--43",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 33-43, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Kleister: A novel task for information extraction involving long documents with complex layout",
"authors": [
{
"first": "Filip",
"middle": [],
"last": "Grali\u0144ski",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Stanis\u0142awek",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Wr\u00f3blewska",
"suffix": ""
},
{
"first": "Dawid",
"middle": [],
"last": "Lipi\u0144ski",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Kaliska",
"suffix": ""
},
{
"first": "Paulina",
"middle": [],
"last": "Rosalska",
"suffix": ""
},
{
"first": "Bartosz",
"middle": [],
"last": "Topolski",
"suffix": ""
},
{
"first": "Przemys\u0142aw",
"middle": [],
"last": "Biecek",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02356"
]
},
"num": null,
"urls": [],
"raw_text": "Filip Grali\u0144ski, Tomasz Stanis\u0142awek, Anna Wr\u00f3blewska, Dawid Lipi\u0144ski, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, and Prze- mys\u0142aw Biecek. 2020. Kleister: A novel task for information extraction involving long doc- uments with complex layout. arXiv preprint arXiv:2003.02356.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Line-items and table understanding in structured documents",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Hole\u010dek",
"suffix": ""
},
{
"first": "Anton\u00edn",
"middle": [],
"last": "Hoskovec",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Baudi\u0161",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.12577"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Hole\u010dek, Anton\u00edn Hoskovec, Petr Baudi\u0161, and Pavel Klinger. 2019. Line-items and table under- standing in structured documents. arXiv preprint arXiv:1904.12577.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extracting structured data from invoices",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Holt",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chisholm",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Australasian Language Technology Association",
"volume": "",
"issue": "",
"pages": "53--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Holt and Andrew Chisholm. 2018. Extracting structured data from invoices. In Proceedings of the Australasian Language Technology Association Workshop 2018, pages 53-59.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Icdar2019 competition on scanned receipt ocr and information extraction",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianhua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Dimosthenis",
"middle": [],
"last": "Karatzas",
"suffix": ""
},
{
"first": "Shijian",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "1516--1520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Huang, Kai Chen, Jianhua He, Xiang Bai, Di- mosthenis Karatzas, Shijian Lu, and CV Jawahar. 2019. Icdar2019 competition on scanned receipt ocr and information extraction. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1516-1520. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data recombination for neural semantic parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "12--22",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 12-22, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Integrating coordinates with context for information extraction in document images",
"authors": [
{
"first": "Zhaohui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yunrui",
"middle": [],
"last": "Lian",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weidong",
"middle": [],
"last": "Qiu",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "363--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaohui Jiang, Zheng Huang, Yunrui Lian, Jie Guo, and Weidong Qiu. 2019. Integrating coordinates with context for information extraction in document images. In 2019 International Conference on Docu- ment Analysis and Recognition (ICDAR), pages 363- 368. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chargrid: Towards understanding 2d documents",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Anoop R Katti",
"suffix": ""
},
{
"first": "Cordula",
"middle": [],
"last": "Reisswig",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Guder",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Brarda",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "H\u00f6hne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baptiste Faddoul",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4459--4469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop R Katti, Christian Reisswig, Cordula Guder, Se- bastian Brarda, Steffen Bickel, Johannes H\u00f6hne, and Jean Baptiste Faddoul. 2018. Chargrid: Towards un- derstanding 2d documents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4459-4469.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The hungarian method for the assignment problem",
"authors": [
{
"first": "",
"middle": [],
"last": "Harold W Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "Naval research logistics quarterly",
"volume": "2",
"issue": "",
"pages": "83--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quar- terly, 2(1-2):83-97.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Finding function in form: Compositional character models for open vocabulary word representation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Fermandez",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Lu\u00eds",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1520--1530",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W Black, Isabel Tran- coso, Ram\u00f3n Fermandez, Silvio Amir, Lu\u00eds Marujo, and Tiago Lu\u00eds. 2015. Finding function in form: Compositional character models for open vocabu- lary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1520-1530, Lisbon, Portu- gal. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Graph convolution for multimodal information extraction from visually rich documents",
"authors": [
{
"first": "Xiaojing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huasha",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "32--39",
"other_ids": {
"DOI": [
"10.18653/v1/N19-2005"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal in- formation extraction from visually rich documents. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 32-39, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An invoice reading system using a graph convolutional network",
"authors": [
{
"first": "Devashish",
"middle": [],
"last": "Lohani",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bela\u00efd",
"suffix": ""
},
{
"first": "Yolande",
"middle": [],
"last": "Bela\u00efd",
"suffix": ""
}
],
"year": 2018,
"venue": "Asian Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "144--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devashish Lohani, A Bela\u00efd, and Yolande Bela\u00efd. 2018. An invoice reading system using a graph convolu- tional network. In Asian Conference on Computer Vision, pages 144-158. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural machine translation (seq2seq) tutorial",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The natural language decathlon",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "Multitask learning as question answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.08730"
]
},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A survey of the usages of deep learning for natural language processing",
"authors": [
{
"first": "Daniel",
"middle": [
"W"
],
"last": "Otter",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"R"
],
"last": "Medina",
"suffix": ""
},
{
"first": "Jugal",
"middle": [
"K"
],
"last": "Kalita",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel W Otter, Julian R Medina, and Jugal K Kalita. 2020. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attend, copy, parse end-to-end information extraction from documents",
"authors": [
{
"first": "Rasmus",
"middle": [],
"last": "Berg Palm",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Laws",
"suffix": ""
},
{
"first": "Ole",
"middle": [],
"last": "Winther",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "329--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rasmus Berg Palm, Florian Laws, and Ole Winther. 2019. Attend, copy, parse end-to-end information extraction from documents. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 329-336. IEEE.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Cloudscan-a configuration-free invoice analysis system using recurrent neural networks",
"authors": [
{
"first": "Rasmus",
"middle": [],
"last": "Berg Palm",
"suffix": ""
},
{
"first": "Ole",
"middle": [],
"last": "Winther",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Laws",
"suffix": ""
}
],
"year": 2017,
"venue": "14th IAPR International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "406--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rasmus Berg Palm, Ole Winther, and Florian Laws. 2017. Cloudscan-a configuration-free invoice analy- sis system using recurrent neural networks. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), pages 406-413. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On the difficulty of training recurrent neural networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310-1318.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Abstract syntax networks for code generation and semantic parsing",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1139--1149",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139- 1149, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural language processing using very large corpora",
"volume": "",
"issue": "",
"pages": "157--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large cor- pora, pages 157-176. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Recurrent Neural Network Approach for Table Field Extraction in Business Documents",
"authors": [
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Sage",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Aussem",
"suffix": ""
},
{
"first": "Haytham",
"middle": [],
"last": "Elghazel",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Eglin",
"suffix": ""
},
{
"first": "J\u00e9r\u00e9my",
"middle": [],
"last": "Espinas",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2019.00211"
]
},
"num": null,
"urls": [],
"raw_text": "Cl\u00e9ment Sage, Alex Aussem, Haytham Elghazel, V\u00e9ronique Eglin, and J\u00e9r\u00e9my Espinas. 2019. Re- current Neural Network Approach for Table Field Extraction in Business Documents. In International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural in- formation processing systems, pages 2692-2700.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Image-to-markup generation via paired adversarial learning",
"authors": [
{
"first": "Jin-Wen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Yan-Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu-Yao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Cheng-Lin",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "18--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Wen Wu, Fei Yin, Yan-Ming Zhang, Xu-Yao Zhang, and Cheng-Lin Liu. 2018. Image-to-markup generation via paired adversarial learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 18-34. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A survey on recent advances in named entity recognition from deep learning models",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2145--2158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Yadav and Steven Bethard. 2018. A survey on re- cent advances in named entity recognition from deep learning models. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2145-2158.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Cutie: Learning to understand documents with convolutional universal text information extractor",
"authors": [
{
"first": "Xiaohui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhuo",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaoguang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.12363"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaohui Zhao, Zhuo Wu, and Xiaoguang Wang. 2019. Cutie: Learning to understand documents with convolutional universal text information extractor. arXiv preprint arXiv:1903.12363.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Seq2sql: Generating structured queries from natural language using reinforcement learning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.00103"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Image-based table recognition: data, model, and evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Elaheh",
"middle": [],
"last": "Shafieibavani",
"suffix": ""
},
{
"first": "Antonio Jimeno",
"middle": [],
"last": "Yepes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Zhong, Elaheh ShafieiBavani, and Antonio Ji- meno Yepes. 2019. Image-based table recogni- tion: data, model, and evaluation. arXiv preprint arXiv:1911.10683.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "A purchase order (a) and the XML representation of its extracted information (b).",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 2. The model is derived from the PGN of See et al. (2017) proposed for summarization of news",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Illustration of the pointer-generator network for extracting structured information of the document in",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "A sample document with filled bounding boxes around words whose colors depend on their attention weights. For sake of readability, we only highlight the top 15 words. We show attention values for the 6 th time step of the pointer-generator network, after having outputted the tokens <Product>, <IDNumber>, THX-63972D, </IDNumber> and <Quantity>. The model rightly points to the word 1 to extract the quantity value of the first product.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Training documents</td><td>154,450</td></tr><tr><td>Validation documents</td><td>22,261</td></tr><tr><td>Test documents</td><td>42,765</td></tr><tr><td>Words per document (Avg.)</td><td>411</td></tr><tr><td>Pages per document (Avg.)</td><td>1.52</td></tr><tr><td>Product entities per document (Avg.)</td><td/></tr></table>",
"text": "Statistics of our dataset.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td>ID number</td><td>Quantity</td><td>Micro avg.</td><td>% Perfect</td></tr><tr><td>Word classifier</td><td>0.754</td><td>0.855</td><td>0.804</td><td>67.4</td></tr><tr><td>PGN</td><td>0.711</td><td>0.832</td><td>0.771</td><td>68.2</td></tr></table>",
"text": "Post-processing gains when extracting the products from the test documents. % Perfect column indicates the percentage of documents perfectly processed by each model.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td/><td>N \u2264 3</td><td>3 &lt; N &lt; 15</td><td>N \u2265 15</td></tr><tr><td>Documents</td><td>33,332</td><td>7,820</td><td>1,613</td></tr><tr><td>Product entities</td><td>46,893</td><td>53,771</td><td>44,094</td></tr><tr><td>Word classifier</td><td>0.804</td><td>0.807</td><td>0.801</td></tr><tr><td>PGN</td><td>0.820</td><td>0.791</td><td>0.696</td></tr><tr><td>Without coverage</td><td>0.799</td><td>0.817</td><td>0.671</td></tr></table>",
"text": "Micro averaged gains over the test set conditioned on the number N of products in the document.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}