Librarian Bot: Add base_model information to model #3
by librarian-bot · opened

README.md CHANGED
@@ -1,23 +1,104 @@
 ---
-
 language:
 - en
+license: apache-2.0
 tags:
 - summarization
 - pegasus
-license: apache-2.0
 datasets:
 - kmfoda/booksum
 metrics:
 - rouge
 widget:
-- text:
-
-
-
-
-
-
+- text: large earthquakes along a given fault segment do not occur at random intervals
+    because it takes time to accumulate the strain energy for the rupture. The rates
+    at which tectonic plates move and accumulate strain at their boundaries are approximately
+    uniform. Therefore, in first approximation, one may expect that large ruptures
+    of the same fault segment will occur at approximately constant time intervals.
+    If subsequent main shocks have different amounts of slip across the fault, then
+    the recurrence time may vary, and the basic idea of periodic mainshocks must be
+    modified. For great plate boundary ruptures the length and slip often vary by
+    a factor of 2. Along the southern segment of the San Andreas fault the recurrence
+    interval is 145 years with variations of several decades. The smaller the standard
+    deviation of the average recurrence interval, the more specific could be the long
+    term prediction of a future mainshock.
+  example_title: earthquakes
+- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
+    are fed into a neural network that predicts values in the reconstructed domain.
+    Then, this domain is mapped to the sensor domain where sensor measurements are
+    available as supervision. Class and Section Problems Addressed Generalization
+    (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
+    Representations (Section 3) Computation & memory efficiency, representation capacity,
+    editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
+    5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
+    6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
+    in the neural field toolbox each addresses problems that arise in learning, inference,
+    and control. (Section 3). We can supervise reconstruction via differentiable forward
+    maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
+    Section 4) With appropriate network architecture choices, we can overcome neural
+    network spectral biases (blurriness) and efficiently compute derivatives and integrals
+    (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
+    and to achieve editable representations (Section 6). Collectively, these classes
+    constitute a ''toolbox'' of techniques to help solve problems with neural fields
+    There are three components in a conditional neural field: (1) An encoder or inference
+    function € that outputs the conditioning latent variable 2 given an observation
+    0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
+    a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
+    parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
+    most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
+    the inverse conditional probability to find the most probable 0 given Z: arg-
+    max P(Olz). We discuss different encoding schemes with different optimality guarantees
+    (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
+    mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
+    a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
+    prior over the sur- face in its reconstruction domain to generalize to the partial
+    observations. A neural network expresses a prior via the function space of its
+    architecture and parameters 0, and generalization is influenced by the inductive
+    bias of this function space (Section 5).'
+  example_title: scientific paper
+- text: ' the big variety of data coming from diverse sources is one of the key properties
+    of the big data phenomenon. It is, therefore, beneficial to understand how data
+    is generated in various environments and scenarios, before looking at what should
+    be done with this data and how to design the best possible architecture to accomplish
+    this The evolution of IT architectures, described in Chapter 2, means that the
+    data is no longer processed by a few big monolith systems, but rather by a group
+    of services In parallel to the processing layer, the underlying data storage has
+    also changed and became more distributed This, in turn, required a significant
+    paradigm shift as the traditional approach to transactions (ACID) could no longer
+    be supported. On top of this, cloud computing is becoming a major approach with
+    the benefits of reducing costs and providing on-demand scalability but at the
+    same time introducing concerns about privacy, data ownership, etc In the meantime
+    the Internet continues its exponential growth: Every day both structured and unstructured
+    data is published and available for processing: To achieve competitive advantage
+    companies have to relate their corporate resources to external services, e.g.
+    financial markets, weather forecasts, social media, etc While several of the sites
+    provide some sort of API to access the data in a more orderly fashion; countless
+    sources require advanced web mining and Natural Language Processing (NLP) processing
+    techniques: Advances in science push researchers to construct new instruments
+    for observing the universe O conducting experiments to understand even better
+    the laws of physics and other domains. Every year humans have at their disposal
+    new telescopes, space probes, particle accelerators, etc These instruments generate
+    huge streams of data, which need to be stored and analyzed. The constant drive
+    for efficiency in the industry motivates the introduction of new automation techniques
+    and process optimization: This could not be done without analyzing the precise
+    data that describe these processes. As more and more human tasks are automated,
+    machines provide rich data sets, which can be analyzed in real-time to drive efficiency
+    to new levels. Finally, it is now evident that the growth of the Internet of Things
+    is becoming a major source of data. More and more of the devices are equipped
+    with significant computational power and can generate a continuous data stream
+    from their sensors. In the subsequent sections of this chapter, we will look at
+    the domains described above to see what they generate in terms of data sets. We
+    will compare the volumes but will also look at what is characteristic and important
+    from their respective points of view. 3.1 The Internet is undoubtedly the largest
+    database ever created by humans. While several well described; cleaned, and structured
+    data sets have been made available through this medium, most of the resources
+    are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
+    several examples in the areas such as opinion mining, social media analysis, e-governance,
+    etc, clearly show the potential lying in these resources. Those who can successfully
+    mine and interpret the Internet data can gain unique insight and competitive advantage
+    in their business An important area of data analytics on the edge of corporate
+    IT and the Internet is Web Analytics.'
+  example_title: data science textbook
 inference:
   parameters:
     max_length: 64
@@ -26,8 +107,8 @@ inference:
     repetition_penalty: 2.4
     length_penalty: 0.5
     num_beams: 4
-    early_stopping:
-
+    early_stopping: true
+base_model: google/pegasus-large
 ---
 
 
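For context on what this change does at inference time: the `inference.parameters` block is what the Hub's hosted widget passes to text generation, and `base_model: google/pegasus-large` records the checkpoint this model was fine-tuned from. Below is a minimal sketch of the equivalent `transformers` call; the repo id is a hypothetical placeholder (the diff itself does not name the fine-tuned repo this PR targets), and the input is shortened from the first widget example.

```python
# Minimal sketch of the generation settings pinned in the card's YAML above.
# NOTE: "your-namespace/pegasus-booksum" is a hypothetical placeholder for the
# model repo this PR targets; google/pegasus-large (the declared base_model) is
# the checkpoint it was fine-tuned from, not the fine-tuned weights themselves.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "your-namespace/pegasus-booksum"  # placeholder, not named in the diff

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = "large earthquakes along a given fault segment do not occur at random intervals ..."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

# These kwargs mirror the YAML inference.parameters block one-to-one.
summary_ids = model.generate(
    **inputs,
    max_length=64,
    num_beams=4,
    repetition_penalty=2.4,
    length_penalty=0.5,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

`early_stopping=True` halts beam search as soon as `num_beams` finished candidates exist; the diff replaces the previously valueless `early_stopping:` key with this explicit setting.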