jordiclive committed
Commit 7faa9d5 (1 parent: 2e42f53)

Update README.md

Files changed (1)
  1. README.md +96 -316
README.md CHANGED
@@ -13,169 +13,108 @@ tags:
  - long-document
  - long-form
  datasets:
- - kmfoda/booksum
  metrics:
  - rouge
  widget:
- - text: large earthquakes along a given fault segment do not occur at random intervals
- because it takes time to accumulate the strain energy for the rupture. The rates
- at which tectonic plates move and accumulate strain at their boundaries are approximately
- uniform. Therefore, in first approximation, one may expect that large ruptures
- of the same fault segment will occur at approximately constant time intervals.
- If subsequent main shocks have different amounts of slip across the fault, then
- the recurrence time may vary, and the basic idea of periodic mainshocks must be
- modified. For great plate boundary ruptures the length and slip often vary by
- a factor of 2. Along the southern segment of the San Andreas fault the recurrence
- interval is 145 years with variations of several decades. The smaller the standard
- deviation of the average recurrence interval, the more specific could be the long
- term prediction of a future mainshock.
  example_title: earthquakes
- - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
- are fed into a neural network that predicts values in the reconstructed domain.
- Then, this domain is mapped to the sensor domain where sensor measurements are
- available as supervision. Class and Section Problems Addressed Generalization
- (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
- Representations (Section 3) Computation & memory efficiency, representation capacity,
- editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
- 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
- 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
- in the neural field toolbox each addresses problems that arise in learning, inference,
- and control. (Section 3). We can supervise reconstruction via differentiable forward
- maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
- Section 4) With appropriate network architecture choices, we can overcome neural
- network spectral biases (blurriness) and efficiently compute derivatives and integrals
- (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
- and to achieve editable representations (Section 6). Collectively, these classes
- constitute a ''toolbox'' of techniques to help solve problems with neural fields
- There are three components in a conditional neural field: (1) An encoder or inference
- function € that outputs the conditioning latent variable 2 given an observation
- 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
- a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
- parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
- most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
- the inverse conditional probability to find the most probable 0 given Z: arg-
- max P(Olz). We discuss different encoding schemes with different optimality guarantees
- (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
- mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
- a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
- prior over the sur- face in its reconstruction domain to generalize to the partial
- observations. A neural network expresses a prior via the function space of its
- architecture and parameters 0, and generalization is influenced by the inductive
- bias of this function space (Section 5).'
  example_title: scientific paper
- - text: ' the big variety of data coming from diverse sources is one of the key properties
- of the big data phenomenon. It is, therefore, beneficial to understand how data
- is generated in various environments and scenarios, before looking at what should
- be done with this data and how to design the best possible architecture to accomplish
- this The evolution of IT architectures, described in Chapter 2, means that the
- data is no longer processed by a few big monolith systems, but rather by a group
- of services In parallel to the processing layer, the underlying data storage has
- also changed and became more distributed This, in turn, required a significant
- paradigm shift as the traditional approach to transactions (ACID) could no longer
- be supported. On top of this, cloud computing is becoming a major approach with
- the benefits of reducing costs and providing on-demand scalability but at the
- same time introducing concerns about privacy, data ownership, etc In the meantime
- the Internet continues its exponential growth: Every day both structured and unstructured
- data is published and available for processing: To achieve competitive advantage
- companies have to relate their corporate resources to external services, e.g.
- financial markets, weather forecasts, social media, etc While several of the sites
- provide some sort of API to access the data in a more orderly fashion; countless
- sources require advanced web mining and Natural Language Processing (NLP) processing
- techniques: Advances in science push researchers to construct new instruments
- for observing the universe O conducting experiments to understand even better
- the laws of physics and other domains. Every year humans have at their disposal
- new telescopes, space probes, particle accelerators, etc These instruments generate
- huge streams of data, which need to be stored and analyzed. The constant drive
- for efficiency in the industry motivates the introduction of new automation techniques
- and process optimization: This could not be done without analyzing the precise
- data that describe these processes. As more and more human tasks are automated,
- machines provide rich data sets, which can be analyzed in real-time to drive efficiency
- to new levels. Finally, it is now evident that the growth of the Internet of Things
- is becoming a major source of data. More and more of the devices are equipped
- with significant computational power and can generate a continuous data stream
- from their sensors. In the subsequent sections of this chapter, we will look at
- the domains described above to see what they generate in terms of data sets. We
- will compare the volumes but will also look at what is characteristic and important
- from their respective points of view. 3.1 The Internet is undoubtedly the largest
- database ever created by humans. While several well described; cleaned, and structured
- data sets have been made available through this medium, most of the resources
- are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
- several examples in the areas such as opinion mining, social media analysis, e-governance,
- etc, clearly show the potential lying in these resources. Those who can successfully
- mine and interpret the Internet data can gain unique insight and competitive advantage
- in their business An important area of data analytics on the edge of corporate
- IT and the Internet is Web Analytics.'
  example_title: data science textbook
- - text: 'Transformer-based models have shown to be very useful for many NLP tasks.
- However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
- & memory complexity (where nn is sequence length). Hence, it''s computationally
- very expensive to apply transformer-based models on long sequences n > 512n>512.
- Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
- try to remedy this problem by approximating the full attention matrix. You can
- checkout 🤗''s recent blog post in case you are unfamiliar with these models.
-
- BigBird (introduced in paper) is one of such recent models to address this issue.
- BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
- attention) and can handle sequences up to a length of 4096 at a much lower computational
- cost compared to BERT. It has achieved SOTA on various tasks involving very long
- sequences such as long documents summarization, question-answering with long contexts.
-
- BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
- post is to give the reader an in-depth understanding of big bird implementation
- & ease one''s life in using BigBird with 🤗Transformers. But, before going into
- more depth, it is important to remember that the BigBird''s attention is an approximation
- of BERT''s full attention and therefore does not strive to be better than BERT''s
- full attention, but rather to be more efficient. It simply allows to apply transformer-based
- models to much longer sequences since BERT''s quadratic memory requirement quickly
- becomes unbearable. Simply put, if we would have compute & ∞ time, BERT''s attention
- would be preferred over block sparse attention (which we are going to discuss
- in this post).
-
- If you wonder why we need more compute when working with longer sequences, this
- blog post is just right for you!
-
- Some of the main questions one might have when working with standard BERT-like
- attention include:
-
- Do all tokens really have to attend to all other tokens? Why not compute attention
- only over important tokens? How to decide what tokens are important? How to attend
- to just a few tokens in a very efficient way? In this blog post, we will try to
- answer those questions.
-
- What tokens should be attended to? We will give a practical example of how attention
- works by considering the sentence ''BigBird is now available in HuggingFace for
- extractive question answering''. In BERT-like attention, every word would simply
- attend to all other tokens.
-
- Let''s think about a sensible choice of key tokens that a queried token actually
- only should attend to by writing some pseudo-code. Will will assume that the token
- available is queried and build a sensible list of key tokens to attend to.
-
- >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
- ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
- ''question'', ''answering'']
-
- >>> # further let''s assume, we''re trying to understand the representation of
- ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
- empty `set` and fill up the tokens of our interest as we proceed in this section.
- >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
- to attend Nearby tokens should be important because, in a sentence (sequence of
- words), the current word is highly dependent on neighboring past & future tokens.
- This intuition is the idea behind the concept of sliding attention.'
  example_title: bigbird blog intro
- - text: 'The majority of available text summarization datasets include short-form
- source documents that lack long-range causal and temporal dependencies, and often
- contain strong layout and stylistic biases. While relevant, such datasets will
- offer limited challenges for future generations of text summarization systems.
- We address these issues by introducing BookSum, a collection of datasets for long-form
- narrative summarization. Our dataset covers source documents from the literature
- domain, such as novels, plays and stories, and includes highly abstractive, human
- written summaries on three levels of granularity of increasing difficulty: paragraph-,
- chapter-, and book-level. The domain and structure of our dataset poses a unique
- set of challenges for summarization systems, which include: processing very long
- documents, non-trivial causal and temporal dependencies, and rich discourse structures.
- To facilitate future work, we trained and evaluated multiple extractive and abstractive
- summarization models as baselines for our dataset.'
  example_title: BookSum Abstract
  inference:
  parameters:
@@ -187,165 +126,6 @@ inference:
  length_penalty: 0.3
  encoder_no_repeat_ngram_size: 3
  num_beams: 4
- model-index:
- - name: pszemraj/led-large-book-summary
- results:
- - task:
- type: summarization
- name: Summarization
- dataset:
- name: kmfoda/booksum
- type: kmfoda/booksum
- config: kmfoda--booksum
- split: test
- metrics:
- - type: rouge
- value: 31.7308
- name: ROUGE-1
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjJmZjMxYTY0OGU3MzNjNmIzNmYyODNlNDg2ZGRhZDAzNTMwMDM5YWMxODc1OTc1ZWE3MzM2OTg1ODFhZDBkNCIsInZlcnNpb24iOjF9.B8BCKgySYVZW910_1zP0LfCpQYJbAe6loyWut76JlgZb2kV1_x9ybqtNESX0ka-lNqhYyXUNDpuS-7pTmsJVDg
- - type: rouge
- value: 5.3311
- name: ROUGE-2
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzViMmY4ODFjYTc5ODk5MmRhMDQ3ZDRiYWQwMDg0OTk3ZTA4NDAxYTNiNDgyMmI4NDA3ZDMwYWViOTBkODBjNyIsInZlcnNpb24iOjF9.MOhJLDcgvv93mVFL1igIgIiTAH3b2Xa4gmBObq7RF44Mmu8Kxtd1KP7rOlDVFOrtrsooGPGsyE1GMCQ2kqeMDg
- - type: rouge
- value: 16.1465
- name: ROUGE-L
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzNjMzEwMTliZGE3ZmQ4M2UxMDAyMTY3YzJjZmMyMDYyN2YyNDM0N2VhNzI1MDc1YTg4MTRjMmEzNjVkNTk1NCIsInZlcnNpb24iOjF9.XLJ-DVKiYLlbw5E5rWADKbzUzf5fNHhlTCWPCC5dU4NI9Yeh76aR7TPt36ZzLDwTBknnR8KHqlaF8F8YAvBUAg
- - type: rouge
- value: 29.0883
- name: ROUGE-LSUM
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTcwNzEwMmE5NjQxZTkzYmQyZDZmNzllYzYyNGI5OTMyNWMwNjdiM2I2YmM5YjdmY2E5OWQ3OTk3ZDA1MTc3YyIsInZlcnNpb24iOjF9.d6rFxjCB6RJNI_pn2DNNSjuZe4rdvj0RatkaTJRp5lP0F_AFfU5Zn9zRWzZJV7V-xMauIc4UhfdoLp9r_-CABA
- - type: loss
- value: 4.815707206726074
- name: loss
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTMwMTgxMmJkODY3MjkzOWJhMzJhOTIxMWVkODhjZmM0MWUzMWQ1N2JkZjRhOTQxNmU1YWVjYzQ0MDNlZWI3OSIsInZlcnNpb24iOjF9.mkBQHYhYFfDV6F4klXGJ1dSsF-pbCs-6F9zcw6IYznwmXUjtk7m5J4Zt4JAju5LKz4YizvEcUCl_L0WddnfvDA
- - type: gen_len
- value: 154.9036
- name: gen_len
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTc0ZmM1ZDM4MDE0MzY3MDM3OWJhNDkzZjJkZDdkMjU5M2JmMDJjYTIxODA1OTllNmY5ZWQzZDlmNWFiYzk4NiIsInZlcnNpb24iOjF9.VQ_O_xSTz870tnM08PJXQOwg9OsNNwI_HVX4S7AuW57_FzGGyRaWSuGE5SWzRS4Tur9YP0QxV4VV0Yoaoi3IAA
- - task:
- type: summarization
- name: Summarization
- dataset:
- name: samsum
- type: samsum
- config: samsum
- split: test
- metrics:
- - type: rouge
- value: 33.4484
- name: ROUGE-1
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk4Yjg1YTc4YmY0MzBiZDU4ZjFhNzI4MjZkMWU1MzBlOWNlMjQ5ODMzY2YzYzRhYjJkMGUzNmI3ZjdkMzIzZSIsInZlcnNpb24iOjF9.AqS8A1OUiM0IZFBEGirv5F3Novk8lSUYSfPc3bYWLA6t-W7wgup3qA207eGbE5j9CkDWZ7QrSG1U6Z9A0sOqAA
- - type: rouge
- value: 10.4249
- name: ROUGE-2
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U4NjUyNTFmOGM5OTlhZDMyMTlmM2E4OWI2NGFiMDAyMGJjMzRjNWNlMGEyYWFmNTE5ZWMxM2I0ZGZmNWNmOCIsInZlcnNpb24iOjF9.SgJcHJ4qoRWXFvFiwv1PUutWktvsxQNynVPEv-GtBgxd6WI7o561ONyco5U-5tcyE_1SbSCJzz-L-R-q3cvoDA
- - type: rouge
- value: 24.5802
- name: ROUGE-L
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQ5MDI5MzdiNGE5NDM0MmU5OThmZTBkNjkxMzg5N2IxNGVlODdhZTZhNjg3NzFjYWEyMzA3MTQxNjMyMjRkOCIsInZlcnNpb24iOjF9.Bg5dHqCcJjmxa-xGWNR5lD9g3quX7lKkH0pjiTd2xE5WiPoLLN2c0mYa2GovdW7__WnYwhhHC7es03jmvyZbCw
- - type: rouge
- value: 29.8226
- name: ROUGE-LSUM
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFhOTEwNGM1MmZkNDk2ZjQ1Y2MyNjM3MGI5MGY3MWVkM2I0MjU2NWFiYmEwMjE4MTJlZWIwOGQ2MjQ3YjgzYSIsInZlcnNpb24iOjF9.W_aQKs10oXQdKEczJBGM3iiwJgb-VaXTpyA3sGof5WbhHf9vITAQA-xvynh5LgKtXQ1zjx737hnHgjEsu_Y0Cw
- - type: loss
- value: 4.176078796386719
- name: loss
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2JhODQ5YTZkNDZkZGYyNGU2MzkxMWU5MTEwMGM2YmVjZTA5YzI5NTMxMDNhYjhlOTAxMzFiMDYwYmM0MjEzZCIsInZlcnNpb24iOjF9.OvZrPBOR5jhkoTGBgsInkH7j3_xpacXHDoT7UIXEnyXzadfBO-O-K6fjalLNZw8wSkbjHIFcL_6S_qTTxPsNAQ
- - type: gen_len
- value: 65.4005
- name: gen_len
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2NhYjc3ZjQzNDEwYmMzOTM0ODkyZTJhZWNhNzZhYmEyZTYxMzA2YTYzMWFjOTA5ZjlhYWMzODg3NzY1ZTUwYSIsInZlcnNpb24iOjF9.vk9bgmtQFeRwdY3VXjtrJr_5wUCIeoAkI3kO0cHxhxmJo6RvUnyXiut72FuB-mlLZvqgiNkaZ-u_bh0Z3DjuCw
- - task:
- type: summarization
- name: Summarization
- dataset:
- name: billsum
- type: billsum
- config: default
- split: test
- metrics:
- - type: rouge
- value: 40.5843
- name: ROUGE-1
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTVjMDkyMWZjYTQ0NzgzNGUxZjNiMTg3NjU1MWJlNTQ2MWQ1NjE1MDk1OTU4ZjJiNGQ5ODg3Y2VlMWUyMzllNyIsInZlcnNpb24iOjF9.OhqBcVIuHk7fzmdrsWMvUe1bLeVMZVstZUoZpP7C1vR-3aIDl7r6eBmPrt5w-KcNq5p4teNPBsq7oKzbd5ZgDQ
- - type: rouge
- value: 17.3401
- name: ROUGE-2
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGQxYmQzMmE0OTcyNTM5NmMwNjIxNzYxZDcwMDFkYzJkOWY4YWY3NTdhZGRhZDdlMDAxNzcwODQ5OGM3Mzc1MCIsInZlcnNpb24iOjF9.Pksn25EEqvmx757N7Swrd4yXc_xU7-AMN9yNe8lrbBa-l1LoI_2PUASvnjML4f705cfuyMAfb0FkFp5WfER2AA
- - type: rouge
- value: 25.1256
- name: ROUGE-L
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhjYzI5MDBiMjk2NTY3MDNmZTdiOGYwMTRlYjIwZjAwMjdlNTAyYzdhYTJlODQ4MjYzYmQ3MjRlYTA2YzhhZSIsInZlcnNpb24iOjF9.1jPepsweS2bzIqDverQzzhmhFGch7gpoEGFGqQ8zW7K10aUKWFX8lt-uZAmTa1Z5ZhzyXGBzc3dReFPhWRRJBg
- - type: rouge
- value: 34.6619
- name: ROUGE-LSUM
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2VkZDIxNWJjOTA0NzFjOTIwOTdjYjc1M2EyNDVjZjY2ZjY3MjIxNDk3YTc5YWExNzAwN2FhOTc1NjVhYjBkYiIsInZlcnNpb24iOjF9.8opqHSUckPohoSF9jfPTpXDz2AtDwvdMqOdIXx2kE1tkOcbLPbOBfcc8RhRR98y8S26yC6EYFhFnf03CV2ejAQ
- - type: loss
- value: 4.792657375335693
- name: loss
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTY5ZTRkMGU3OGVkODMzMDU5OWE1NTM5YjA4NDliZDlmNzc2NzZjNjFmNTA3M2EwY2NmN2E0MWJmZjQ5ZDliMiIsInZlcnNpb24iOjF9.KCKdk8xt2NWcMmYKV3-9eVEsFm9MqGllSMu9QCFJFIQlnyNXllHKdBLouoaGQz8IRYXvZKH8_TLDPIQx-31jAg
- - type: gen_len
- value: 163.9394
- name: gen_len
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdkZDYyZGUzYmFkZmI2NjUwYmQ0MzZjMmIyZjI1YTFiMzM4OThiZjBiMzljOTVkZTgwMjA0NTE5OGM2YmFjMiIsInZlcnNpb24iOjF9.XyMZLUdkUIF32KTJMuv_bJswQCx_Tfg4Fx823cURUixSeoIKps8_a634AreZ3Z8kb7bfE_sFGh3rM9KWsMxlDw
- - task:
- type: summarization
- name: Summarization
- dataset:
- name: multi_news
- type: multi_news
- config: default
- split: test
- metrics:
- - type: rouge
- value: 39.0834
- name: ROUGE-1
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYzMmVlMDM4MTNkMTI4MjAyMTU2YTg1ZWQwNTI1MmJlNGUwZmE1NTRmYTljZTQwY2RlMjcxOTgyZGMyYTc0ZiIsInZlcnNpb24iOjF9.6yuSr7UmsFatwqQ-mEO4gmsEtWI05kGB5Ib2pnl05H1OiPT2uUwmqdUytUw8KTx9u1jv9q0cTF1cL-n2kPEJAA
- - type: rouge
- value: 11.4043
- name: ROUGE-2
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWI5N2U2ZWI1ODM2MWUwOTIzYTAzNmRhNDA2OWEzZWRjMGEzMjBmY2EwN2YyYzU1NWE0YjIyZDE3MWE0MmMxZCIsInZlcnNpb24iOjF9.wonuxbBl25TzEaHUH_E816nHJ1OSXKfkaq7eJzbLpsfeGwcDklxUSxZxRO7VBiBMaY3Qttf9ywmEIPp40HnpBA
- - type: rouge
- value: 19.1813
- name: ROUGE-L
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjU1NDZhN2NkMzZiZGJkODE4NDZiYjViOTZkNGMyNDlkNjBlZmFjYzU1N2IzMjFjYjY1MDU1Zjk2MzA0M2U4NyIsInZlcnNpb24iOjF9.bTCRzv3J9NiCh4aV23tAWGTvrdQCv_RS40zGwC4AJXtGS40cY7tJHYwBf9U9_rCetDBxqfjJpdaUbCAOglxLAA
- - type: rouge
- value: 35.1581
- name: ROUGE-LSUM
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDNhNTUyZjE4NjYxYjIzYThmMDM2YWNhM2QwYzY1ODI2ZTE3NmNjMmVhOTAzZjZlOWQwYzc1NzU2NDNjNzIxMyIsInZlcnNpb24iOjF9.cWlSbEBgrMN5D-fV_yL9geNMyMkIItcVO3wehNJPzFi3E0v1-4q8pnX-UgjLzto8X7JLi6as2V_HtZE4-C-CDw
- - type: loss
- value: 4.654905319213867
- name: loss
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTc5Nzk0ODhiNWUzNTAxNzk2YzZmMjU2NDliY2UzOTYyYTdmZGEyYjI5NDNhOTE0MGUxOTgxMGVjMmNhM2UyMSIsInZlcnNpb24iOjF9.eBBAebcl3AwkrjR6a8BvoSjDfpw8LWTRFjyIFHVzspvoOKVfnO8_NB_UeR_K127OwXyoZ70Z7X_aKJOe-2kTDA
- - type: gen_len
- value: 186.2494
- name: gen_len
- verified: true
- verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI2NjVlYjgwYWJiMjcyMDUzMzEwNDNjZTMxMDM0MjAzMzk1ZmIwY2Q1ZDQ2Y2M5NDBlMDEzYzFkNWEyNzJmNiIsInZlcnNpb24iOjF9.iZ1Iy7FuWL4GH7LS5EylVj5eZRC3L2ZsbYQapAkMNzR_VXPoMGvoM69Hp-kU7gW55tmz2V4Qxhvoz9cM8fciBA
  ---

  # Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization
@@ -472,4 +252,4 @@ The following hyperparameters were used during training:
  - Transformers 4.19.2
  - Pytorch 1.11.0+cu113
  - Datasets 2.2.2
- - Tokenizers 0.12.1
 
  - long-document
  - long-form
  datasets:
+ - jordiclive/scored_summarization_datasets
  metrics:
  - rouge
  widget:
+ - text: >-
+ large earthquakes along a given fault segment do not occur at random
+ intervals because it takes time to accumulate the strain energy for the
+ rupture. The rates at which tectonic plates move and accumulate strain at
+ their boundaries are approximately uniform. Therefore, in first
+ approximation, one may expect that large ruptures of the same fault segment
+ will occur at approximately constant time intervals. If subsequent main
+ shocks have different amounts of slip across the fault, then the recurrence
+ time may vary, and the basic idea of periodic mainshocks must be modified.
+ For great plate boundary ruptures the length and slip often vary by a factor
+ of 2. Along the southern segment of the San Andreas fault the recurrence
+ interval is 145 years with variations of several decades. The smaller the
+ standard deviation of the average recurrence interval, the more specific
+ could be the long term prediction of a future mainshock.
  example_title: earthquakes
+ - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).'
  example_title: scientific paper
+ - text: ' the big variety of data coming from diverse sources is one of the key properties of the big data phenomenon. It is, therefore, beneficial to understand how data is generated in various environments and scenarios, before looking at what should be done with this data and how to design the best possible architecture to accomplish this The evolution of IT architectures, described in Chapter 2, means that the data is no longer processed by a few big monolith systems, but rather by a group of services In parallel to the processing layer, the underlying data storage has also changed and became more distributed This, in turn, required a significant paradigm shift as the traditional approach to transactions (ACID) could no longer be supported. On top of this, cloud computing is becoming a major approach with the benefits of reducing costs and providing on-demand scalability but at the same time introducing concerns about privacy, data ownership, etc In the meantime the Internet continues its exponential growth: Every day both structured and unstructured data is published and available for processing: To achieve competitive advantage companies have to relate their corporate resources to external services, e.g. financial markets, weather forecasts, social media, etc While several of the sites provide some sort of API to access the data in a more orderly fashion; countless sources require advanced web mining and Natural Language Processing (NLP) processing techniques: Advances in science push researchers to construct new instruments for observing the universe O conducting experiments to understand even better the laws of physics and other domains. Every year humans have at their disposal new telescopes, space probes, particle accelerators, etc These instruments generate huge streams of data, which need to be stored and analyzed. The constant drive for efficiency in the industry motivates the introduction of new automation techniques and process optimization: This could not be done without analyzing the precise data that describe these processes. As more and more human tasks are automated, machines provide rich data sets, which can be analyzed in real-time to drive efficiency to new levels. Finally, it is now evident that the growth of the Internet of Things is becoming a major source of data. More and more of the devices are equipped with significant computational power and can generate a continuous data stream from their sensors. In the subsequent sections of this chapter, we will look at the domains described above to see what they generate in terms of data sets. We will compare the volumes but will also look at what is characteristic and important from their respective points of view. 3.1 The Internet is undoubtedly the largest database ever created by humans. While several well described; cleaned, and structured data sets have been made available through this medium, most of the resources are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, several examples in the areas such as opinion mining, social media analysis, e-governance, etc, clearly show the potential lying in these resources. Those who can successfully mine and interpret the Internet data can gain unique insight and competitive advantage in their business An important area of data analytics on the edge of corporate IT and the Internet is Web Analytics.'
  example_title: data science textbook
+ - text: >-
+ Transformer-based models have shown to be very useful for many NLP tasks.
+ However, a major limitation of transformers-based models is its O(n^2)O(n 2)
+ time & memory complexity (where nn is sequence length). Hence, it's
+ computationally very expensive to apply transformer-based models on long
+ sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer,
+ Reformer, Clustered attention try to remedy this problem by approximating
+ the full attention matrix. You can checkout 🤗's recent blog post in case
+ you are unfamiliar with these models.
+
+ BigBird (introduced in paper) is one of such recent models to address this
+ issue. BigBird relies on block sparse attention instead of normal attention
+ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a
+ much lower computational cost compared to BERT. It has achieved SOTA on
+ various tasks involving very long sequences such as long documents
+ summarization, question-answering with long contexts.
+
+ BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of
+ this post is to give the reader an in-depth understanding of big bird
+ implementation & ease one's life in using BigBird with 🤗Transformers. But,
+ before going into more depth, it is important to remember that the BigBird's
+ attention is an approximation of BERT's full attention and therefore does
+ not strive to be better than BERT's full attention, but rather to be more
+ efficient. It simply allows to apply transformer-based models to much longer
+ sequences since BERT's quadratic memory requirement quickly becomes
+ unbearable. Simply put, if we would have compute & ∞ time, BERT's
+ attention would be preferred over block sparse attention (which we are going
+ to discuss in this post).
+
+ If you wonder why we need more compute when working with longer sequences,
+ this blog post is just right for you!
+
+ Some of the main questions one might have when working with standard
+ BERT-like attention include:
+
+ Do all tokens really have to attend to all other tokens? Why not compute
+ attention only over important tokens? How to decide what tokens are
+ important? How to attend to just a few tokens in a very efficient way? In
+ this blog post, we will try to answer those questions.
+
+ What tokens should be attended to? We will give a practical example of how
+ attention works by considering the sentence 'BigBird is now available in
+ HuggingFace for extractive question answering'. In BERT-like attention,
+ every word would simply attend to all other tokens.
+
+ Let's think about a sensible choice of key tokens that a queried token
+ actually only should attend to by writing some pseudo-code. Will will assume
+ that the token available is queried and build a sensible list of key tokens
+ to attend to.
+
+ >>> # let's consider following sentence as an example >>> example =
+ ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for',
+ 'extractive', 'question', 'answering']
+
+ >>> # further let's assume, we're trying to understand the representation of
+ 'available' i.e. >>> query_token = 'available' >>> # We will initialize an
+ empty `set` and fill up the tokens of our interest as we proceed in this
+ section. >>> key_tokens = [] # => currently 'available' token doesn't have
+ anything to attend Nearby tokens should be important because, in a sentence
+ (sequence of words), the current word is highly dependent on neighboring
+ past & future tokens. This intuition is the idea behind the concept of
+ sliding attention.
  example_title: bigbird blog intro
+ - text: >-
+ The majority of available text summarization datasets include short-form
+ source documents that lack long-range causal and temporal dependencies, and
+ often contain strong layout and stylistic biases. While relevant, such
+ datasets will offer limited challenges for future generations of text
+ summarization systems. We address these issues by introducing BookSum, a
+ collection of datasets for long-form narrative summarization. Our dataset
+ covers source documents from the literature domain, such as novels, plays
+ and stories, and includes highly abstractive, human written summaries on
+ three levels of granularity of increasing difficulty: paragraph-, chapter-,
+ and book-level. The domain and structure of our dataset poses a unique set
+ of challenges for summarization systems, which include: processing very long
+ documents, non-trivial causal and temporal dependencies, and rich discourse
+ structures. To facilitate future work, we trained and evaluated multiple
+ extractive and abstractive summarization models as baselines for our
+ dataset.
  example_title: BookSum Abstract
  inference:
  parameters:
 
  length_penalty: 0.3
  encoder_no_repeat_ngram_size: 3
  num_beams: 4
  ---

  # Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization

  - Transformers 4.19.2
  - Pytorch 1.11.0+cu113
  - Datasets 2.2.2
+ - Tokenizers 0.12.1
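
For reference, the generation settings kept in the card's `inference.parameters` block map one-to-one onto `transformers` generation arguments. Below is a minimal, unofficial sketch of a matching `generate` call. The model id is a placeholder carried over from the removed `model-index` (substitute this repository's own checkpoint), the input text is arbitrary, and `max_length` is an assumption, since the diff only shows part of the `parameters:` block.

```python
# Minimal sketch, not this repo's official usage: reproduce the widget's
# generation settings from the card's `inference.parameters` block.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder taken from the removed model-index; use this repo's checkpoint instead.
model_id = "pszemraj/led-large-book-summary"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

long_text = "large earthquakes along a given fault segment do not occur at random intervals ..."

inputs = tokenizer(long_text, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    num_beams=4,                     # from inference.parameters
    length_penalty=0.3,              # from inference.parameters; <1 favors shorter output
    encoder_no_repeat_ngram_size=3,  # from inference.parameters; no verbatim source 3-grams
    max_length=512,                  # assumption: the real value sits in the truncated hunk
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

LED-style checkpoints typically benefit from global attention on at least the first token; that detail is omitted here for brevity, and any settings hidden in the truncated part of the `parameters:` block remain unknown.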