_id (string, 36–36) | text (string, 5–665k) | marker (string, 3–6) | marker_offsets (sequence) | label (string, 28–32)
---|---|---|---|---|
0d372263-d5d4-48a8-8930-cfdf68ed5de6 | Lemma 2.5 (Bombieri–Pila [1]})
Let \(\mathcal {C}\) be an absolutely irreducible curve (over the rationals) of degree \(d\ge 2\) and \(N\ge \exp (d^{6})\) .
Then the number of integral points on \(\mathcal {C}\) and inside the square \([0, N] \times [0, N]\) does not exceed \(N^{1/d} \exp ( 12 \sqrt{d \log N \log \log N})\) .
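As a quick numerical sanity check (not part of the cited lemma), one can compare the exact count of integral points on a concrete degree-2 curve with the bound; the parabola \(y=x^2\) and the value of \(N\) below are assumptions chosen for illustration.

```python
from math import exp, log, sqrt, isqrt

# The degree-2 curve y = x^2 is absolutely irreducible over Q. Its integral
# points inside [0, N] x [0, N] are exactly (x, x^2) with 0 <= x <= isqrt(N).
d = 2
N = 10**28  # comfortably above the hypothesis exp(d^6) = exp(64) ~ 6.2e27

count = isqrt(N) + 1
bound = N ** (1 / d) * exp(12 * sqrt(d * log(N) * log(log(N))))

print(f"integral points on y = x^2: {count:.3e}")
print(f"Bombieri-Pila bound:        {bound:.3e}")
assert count <= bound
```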
| [1] | [
[
25,
28
]
] | https://openalex.org/W1975111868 |
06832116-7ceb-4535-b3e0-e6a23463f653 | One of the main advantages of Hamiltonian Truncation is that by diagonalizing the Hamiltonian, one obtains the eigenvalue spectrum as well as the eigenvectors of the theory at strong coupling. Together, these represent an enormous amount of information about the dynamics of the theory, allowing one to compute real-time evolution of any initial state. Actually, one can compute both Lorentzian and Euclidean correlators, and Euclidean correlators of local operators in the vacuum, which are the correlators naturally computed in LFT, are straightforward to obtain from the eigenvector data. For instance, the two-point function of a local operator \({\cal O}\) in the vacuum can be obtained through the evaluation of the spectral function \(\rho _{\cal O}\) for \({\cal O}\) , as a sum over states \(\rho _{\cal O}(s) \sim \sum _j |\langle {\cal O}(0) | j \rangle |^2 \delta (s-s_j)\) . The Euclidean two-point function is then the Euclidean free propagator weighted by the spectral function. But now one can also use the spectral function to compute correlators in the Lorentzian regime as well, or indeed under any complex rotation of the momenta or positions! This procedure works much better than one might expect. In particular, truncation discretizes the spectrum, so the spectral function becomes a sum over unphysical delta functions even when it should be a smooth function. Nevertheless, methods that take advantage of unitarity and analyticity have been developed to resolve this issue [1]}, [2]} and accurately restore the smooth underlying spectral functions. In Fig. REF (right), we show these smoothing methods applied to the Zamolodchikov \(C\) -function (the integral of the \(T_{--}\) spectral function) in 2d QCD with \(N_c=3\) and a very light quark in the fundamental. This theory is strongly coupled in the IR, where it is expected to be dual to the Sine-Gordon model, allowing a precise test of the smoothed \(C\) -function from truncation. Remarkably, even in this Lorentzian regime, Hamiltonian Truncation achieves better than 1% accuracy using a basis of only 77 states.
These methods can likely be understood more thoroughly and improved.
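The cited smoothing methods rely on unitarity and analyticity; as a minimal illustration of the underlying problem only, the sketch below replaces each truncated delta function by a Gaussian of fixed width. The eigenvalues and overlaps are randomly generated stand-ins, not actual truncation data.

```python
import numpy as np

rng = np.random.default_rng(0)
s_j = np.sort(rng.uniform(0.0, 10.0, size=77))  # 77 states, as in the text
w_j = rng.uniform(0.5, 1.5, size=77)            # stand-ins for |<O(0)|j>|^2

def rho_smooth(s, sigma=0.3):
    """Replace each delta(s - s_j) by a normalized Gaussian of width sigma."""
    kernels = np.exp(-((s[:, None] - s_j[None, :]) ** 2) / (2 * sigma**2))
    return (w_j[None, :] * kernels).sum(axis=1) / (sigma * np.sqrt(2 * np.pi))

s_grid = np.linspace(0.0, 10.0, 500)
rho = rho_smooth(s_grid)  # a smooth stand-in for the spectral function
```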
| [1] | [
[
1500,
1503
]
] | https://openalex.org/W3031722725 |
435aa1ea-fa3c-4fdb-8515-8d48c1fcae17 | One strategy for overcoming this problem is to find more efficient truncation subspaces that lead to faster convergence. For instance, in [1]}, the authors studied 2d \(\phi ^4\) theory using DLCQ, and with improved algorithms and computational resources they were able to take the truncation parameter \(K=96\) , corresponding to about 60 million states. Such a large truncation made it possible to reliably establish that the truncation error in the critical coupling \(\lambda _c\) converged to zero like \(O(\frac{1}{\sqrt{K}})\) as \(K\rightarrow \infty \) . By contrast, in the LCT basis, the truncation error converges like \(O(\frac{1}{K})\) ; in fact, to good approximation, the critical couplings in LCT and DLCQ as functions of \(K\) are related by \(\lambda _c^{(\rm LCT)}(K) = \lambda _c^{(\rm DLCQ)}(K^2)\) . In 2d \(\phi ^4\) theory in both DLCQ and LCT, the number of states as a function of \(K\) is the number \(P(K)\) of integer partitions of \(K\) , which grows rapidly with \(K\) like \(P(K) \approx \frac{1}{4 K \sqrt{3}} e^{ \pi \sqrt{\frac{2}{3}K}}\) ; for instance, \(P(20)= 627\) , while \(P(400)=6.7 \times 10^{18}\) .
So, different choices can lead to vastly different rates of convergence as the size of the subspace is increased. Additional strategies can be employed to identify more efficient subspaces that keep only the most important states in the basis; see section VII of [2]} for examples.
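The partition counts quoted above are easy to reproduce; the short script below (an illustration, not code from the cited works) computes \(P(K)\) exactly and compares with the asymptotic formula.

```python
from math import exp, pi, sqrt

def partitions(K):
    """Exact number of integer partitions P(K), via the standard DP recurrence."""
    p = [1] + [0] * K
    for part in range(1, K + 1):
        for n in range(part, K + 1):
            p[n] += p[n - part]
    return p[K]

def hardy_ramanujan(K):
    """Asymptotic P(K) ~ exp(pi * sqrt(2K/3)) / (4 * K * sqrt(3))."""
    return exp(pi * sqrt(2 * K / 3)) / (4 * K * sqrt(3))

print(partitions(20))                 # 627, as quoted
print(f"{partitions(400):.2e}")       # ~6.7e18, as quoted
print(f"{hardy_ramanujan(400):.2e}")  # asymptotic estimate
```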
| [2] | [
[
1423,
1426
]
] | https://openalex.org/W2952812443 |
3b39e339-f35f-47bb-9d86-8e14239d7fc5 | Finally, this paper can also be viewed as a continuation of [1]} where the
case of rough initial conditions and \((H_0,H)\in (1/2,1)^2\) was covered.
| [1] | [
[
60,
63
]
] | https://openalex.org/W2963641456 |
7680072b-9f92-403e-9b79-1a54842a5419 | I implemented this algorithm relying on freely available open-source and
cross-platform software packages only. The source is available online
(https://github.com/eldad-a/ridge-directed-ring-detector).
Most of the heavy lifting is achieved using the Cython language [1]}.
It has a Python-like syntax from which C code is automatically generated and
compiled. This allows the code to be short and easy to read while enjoying
the performance of C.
For example, this implementation exploits the Numpy/Cython strided direct data
access [1]}, [3]} by fully sorting the votes.
In the image pre-processing step the image is smoothed using a Gaussian
convolution and the smoothed image spatial derivatives are calculated using a
5\(\times \) 5 2nd order Sobel operator; these operations are
done using OpenCV's Python bindings [4]}.
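A minimal sketch of this pre-processing step with OpenCV's Python bindings is given below; the file name, kernel size, and sigma are illustrative assumptions, not the parameters of the actual implementation.

```python
import cv2
import numpy as np

# Assumes a grayscale ring image exists at this (hypothetical) path.
img = np.float32(cv2.imread("rings.png", cv2.IMREAD_GRAYSCALE))

smoothed = cv2.GaussianBlur(img, ksize=(5, 5), sigmaX=1.0)

# 2nd-order spatial derivatives via 5x5 Sobel kernels (Hessian entries).
dxx = cv2.Sobel(smoothed, cv2.CV_32F, 2, 0, ksize=5)
dyy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 2, ksize=5)
dxy = cv2.Sobel(smoothed, cv2.CV_32F, 1, 1, ksize=5)

# Least principal curvature k_-: the smaller eigenvalue of the 2x2 Hessian.
k_minus = 0.5 * (dxx + dyy) - np.sqrt(0.25 * (dxx - dyy) ** 2 + dxy**2)
```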
| [4] | [
[
821,
824
]
] | https://openalex.org/W3210232381 |
a0686f25-c075-4d36-9981-f32d63099d64 | examples from the comparative assessment of the algorithm robustness
referred to in the manuscript and described in the Methods section can be found in
Supplementary fig:EDCirclescomparison.
Detailed algorithm
As mentioned in the main text, the standard circle Hough transform is often
avoided not only for its challenging local maximum detection in noisy 3d space
but for its heavy memory requirements as well.
The standard circle Hough transform requires a 3-dimensional array of
accumulators. The coordinates of each array element are the parameters of a
candidate circle. The value of the accumulator at these coordinates indicates
how well this circle is represented in the image.
Code optimisation for high-performance and small memory footprint is achieved
following this scheme:
Image pre-processing step:
the image is smoothed using a Gaussian convolution and the smoothed
image spatial derivatives are calculated using a 5\(\times \) 5
2nd order Sobel operator [1]}. Using
these derivatives, the local least principal curvature \(k_-\) is
estimated as the smaller eigen-value of the Hessian matrix.
One-pass ridge detection and votes collection:
for each pixel in \(k_-\) which is smaller than a pre-defined curvature
threshold (the latter is no greater than zero), the corresponding
\(X_-\) is calculated. If this pixel is found to be a local
minimum along the direction of \(X_-\) , its coordinates are
recorded in the ridge container. At this stage, its votes are collected
as well, that is, the potential circles parameters to which it may
belong.
Sort the votes stack according to the radii:
this allows performing the parameter space incrementing procedure
equi-radius level by level. In order to achieve higher performance, the
votes are further sorted by the row index and then by the column index
exploiting the numpy/cython strided direct data access
[2]}, [3]}. For this reason, each circle parameter triple is
represented as an integer using a bijection (see the sketch after this list).
Circle parameter space population and local maximum detection
via radius-dependent smoothing and normalisation:
This is done using two arrays representing a sub-space of the full
circle parameter space. Each consists of 3 consecutive
equi-radius levels; the first for the raw accumulators sub-space, the
second for the smoothed and normalised one, where local maxima are to be
searched for. There are two votes thresholding steps: an integer
threshold for the raw accumulators and a floating point
threshold, a fraction of \(2\pi \) , for the smoothed and normalised array
elements.
In describing the procedure it is assumed that the \(r-2\) and \(r-1\)
levels have already been populated in both sub-space triples and
the \(r\) levels are blank, i.e. all zeros. As long as the votes drawn
from the votes stack point to the same radius, the corresponding
radius-level is populated by incrementing the indicated accumulator. Recall
that the votes are fully sorted hence all votes pointing to a certain
voxel will come out from the stack in a row. Every time a new circle
parameter triple is encountered, its coordinates are recorded as
modified. In case the previously incremented voxel has surpassed the
1st votes threshold its coordinates are recorded as a
hotspot – a circle candidate.
Once there are no more votes for this \(r\) -level, it is mapped to the
second subspace: for each hotspot voxel a spatial average is
calculated, weighted by a Gaussian function whose width depends linearly
on the radius; the value of the average is then normalised by
\(1/r\) .
After mapping all the hotspots of the current \(r\) -level, a local
maximum is searched for among the hotspots of the \((r-1)\) -level
which pass the 2nd votes threshold. This is done
using a nearest neighbours comparison within a \(3\times 3\times 3\)
voxels box. Array elements that are local maxima and exceed the
threshold are registered as rings.
Once all hotspots have been processed, all modifications to the
\((r-2)\) -level are undone as its data are no longer needed. By this it
is made ready to be regarded as the next \(r\) -level and a cyclic
permutation among the levels takes place. In practice, this
is performed by accessing the equi-radius levels using the modulo
operation – the radius indices are calculated using \(r\pmod {3}\) .
Sub-pixeling via circle fit:
the detected ridge coordinates are subjected to a circle fit via the
non-exclusive classification induced by the results of the directed
circle Hough transform. The coordinates in the ridge container are
clustered based on annuli masks dictated by the detected rings and
sub-pixel accuracy of the rings parameters is achieved.
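The sketch below illustrates the sorting bijection and the modulo-3 level indexing described in the scheme above; the array sizes and votes are assumed toy values, not taken from the actual implementation.

```python
import numpy as np

H, W, R = 480, 640, 64  # assumed image rows, columns, and radius levels

def encode(r, row, col):
    """Pack a (radius, row, col) triple into one integer; sorting these
    integers sorts by radius, then row, then column."""
    return (r * H + row) * W + col

def decode(code):
    code, col = divmod(code, W)
    r, row = divmod(code, H)
    return r, row, col

votes = np.array([encode(3, 10, 20), encode(2, 5, 5), encode(3, 10, 19)])
votes.sort()  # the full sort yields the radius-major processing order

# Only three equi-radius levels are held in memory; level r is addressed
# cyclically via r mod 3, as described above.
acc = np.zeros((3, H, W), dtype=np.int32)
for r, row, col in map(decode, votes):
    acc[r % 3, row, col] += 1
```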
Additional notes
The ridge detection can be used to achieve a compressed
representation of the features in the image. This can be done by
storing a hash table associating ridge coordinates as keys with their
corresponding \(X_-\) as values.
The algorithm is not restricted to directed ridges as it can be
replaced by directed edges in case these are better descriptors of the
features in the image. This is achieved by replacing the Hessian by the
Gradient. In this case, the gradient magnitude replacing \(k_-\) has to
be a local maximum along the gradient direction.
To reduce false detections, the radii range is extended such that the
Hough transform is over the range \(\left[ r_{min}-1,r_{max}+1 \right]\) ,
but local maxima are searched for only within the original
range.
In case additional performance per processing unit is required, one
could use a lower resolution in discretising the circle parameter
space. Measuring the effect of this on the accuracy is left for future
work.
Using several colours, the method should be, in principle, extendible
to even higher particle densities.
Application of the proposed algorithm for particle tracking and
discussion of alternative methods
When tracking small light emitting objects, such as fluorescent particles under
the microscope, the appearance of rings is often a sign of the object going out
of focus. Normally this results in the loss of the tracked
object, which is thereafter considered as a hindering background source.
However, these rings carry information of the 3-dimensional position of the
particle.
This has been used for localising a single light scattering magnetic bead
based on matching the radial intensity profile to an empirical set of reference
images [4]}. An axial range of 10 was demonstrated and a
temporal resolution of 25 was achieved using the knowledge of the
particle's previous position.
In fact, for fluorescent particles the radius of the most visible
ring of each particle precisely indicates its axial position – the radius follows a
simple scaling with the particle distance from the focal plane (see
Supplementary fig:rad2z). A similar approach was recently described in [5]}, where the
measurements were, once again, limited to a single particle in the observation
volume, with an axial range of 3 and temporal resolution of
10.
In comparison with other existing methods for 3d particle tracking, the method
presented here is advantageous when it comes to long measurements, temporal
resolution and concurrency, as well as real-time applications.
The confocal scanning microscope requires scanning the volume of interest.
Therefore it is slower and cannot yet provide instantaneous information of the
whole volume.
Unlike holographic microscopy [6]}, [7]}, the proposed method
does not impose the long and heavy computational demands that are restrictive
for real-time applications or when large datasets are required for statistics.
One could expect the optical method discussed here to produce patterns which
are symmetric about the focal plane. When this applies, it may result in an
ambiguity with respect to whether the particle is above or below focus. Our
optical arrangement (see the Methods section) shows clear diffraction rings
only on one side.
Furthermore, as particles approach focus, the radius of the outer-most ring
becomes too small to resolve. For these reasons the focal plane is placed
outside the volume of interest (as reflected in the Supplementary fig:rad2z).
Optical astigmatism offers a means for discriminating between the two sides of
the optical axis [8]}, [9]}, [10]}. The introduction of a
cylindrical lens results in the deformation of a circular spot into an
ellipsoidal one as a fluorescent particle goes further away from focus, with
the ellipse major axis of a particle above focus aligned perpendicular to that of
one below. In Ref. [8]} the axial range was limited to a couple of microns
above and below focus; in Refs. [9]}, [10]} it was restricted to
less than a micron. Within these ranges the tracer image can be approximated by
an elliptical Gaussian pattern. However, extending the range generates
elliptical rings as well; see Figure 1 in Ref. [8]}. This requires dealing with two
species of patterns, spots and rings. Moreover, deforming circular rings into
elliptical ones, the dimensionality of the parameter space increases, and so
does the technical complexity of the image analysis.
Therefore the advantage of the stronger signal, by working closer to focus on
both its sides, is expected to have a heavy computational cost once the range
is extended such that diffraction rings appear as well.
The method presented here requires working away from focus. Ring
visibility decreases as the fluorescence signal spreads over a larger area,
thus setting the lower bound for the exposure time.
Nevertheless, I have found that the fluorescence signal-to-noise ratio allowed
tracking particles moving chaotically at speeds exceeding
400.
Supplementary figures
<FIGURE><FIGURE><FIGURE> | [3] | [
[
1896,
1899
]
] | https://openalex.org/W3005347330 |
a6f778cf-0d9c-48a0-aa7f-7bc281274d13 | examples from the comparative assessment of the algorithm robustness
referred to in the manuscript and described in the Methods section can be found in
Supplementary fig:EDCirclescomparison.
Detailed algorithm
As mentioned in the main text, the standard circle Hough transform is often
avoided not only for its challenging local maximum detection in noisy 3d space
but for its heavy memory requirements as well.
The standard circle Hough transform requires a 3-dimensional array of
accumulators. The coordinates of each array element are the parameters of a
candidate circle. The value of the accumulator at these coordinates indicates
how well this circle is represented in the image.
Code optimisation for high-performance and small memory footprint is achieved
following this scheme:
Image pre-processing step:
the image is smoothed using a Gaussian convolution and the smoothed
image spatial derivatives are calculated using a 5\(\times \) 5
2nd order Sobel operator [1]}. Using
these derivatives, the local least principal curvature \(k_-\) is
estimated as the smaller eigen-value of the Hessian matrix.
One-pass ridge detection and votes collection:
for each pixel in \(k_-\) which is smaller than a pre-defined curvature
threshold (the latter is no greater than zero), the corresponding
\(X_-\) is calculated. If this pixel is found to be a local
minimum along the direction of \(X_-\) , its coordinates are
recorded in the ridge container. At this stage, its votes are collected
as well, that is, the potential circles parameters to which it may
belong.
Sort the votes stack according to the radii:
this allows performing the parameter space incrementing procedure
equi-radius level by level. In order to achieve higher performance, the
votes are further sorted by the row index and then by the column index
exploiting the numpy/cython strided direct data access
[2]}, [3]}. For this reason each circle parameter triple is
represented as an integer using a bijection.
Circle parameter space population and local maximum detection
via radius-dependent smoothing and normalisation:
This is done using two arrays representing a sub-space of the full
circle parameter space. Each consists of 3 consecutive
equi-radius levels; the first for the raw accumulators sub-space, the
second for the smoothed and normalised one, where local maxima are to be
searched for. There are two votes thresholding steps: an integer
threshold for the raw accumulators and a floating point
threshold, a fraction of \(2\pi \) , for the smoothed and normalised array
elements.
In describing the procedure it is assumed that the \(r-2\) and \(r-1\)
levels have already been populated in both sub-space triples and
the \(r\) levels are blank, i.e. all zeros. As long as the votes drawn
from the votes stack point to the same radius, the corresponding
radius-level is populated by incrementing the indicated accumulator. Recall
that the votes are fully sorted hence all votes pointing to a certain
voxel will come out from the stack in a row. Every time a new circle
parameter triple is encountered, its coordinates are recorded as
modified. In case the previously incremented voxel has surpassed the
1st votes threshold its coordinates are recorded as a
hotspot – a circle candidate.
Once there are no more votes for this \(r\) -level, it is mapped to the
second subspace: for each hotspot voxel a spatial average is
calculated, weighted by a Gaussian function whose width depends linearly
on the radius; the value of the average is then normalised by
\(1/r\) .
After mapping all the hotspots of the current \(r\) -level, a local
maximum is searched for among the hotspots of the \((r-1)\) -level
which pass the 2nd votes threshold. This is done
using a nearest neighbours comparison within a \(3\times 3\times 3\)
voxels box. Array elements that are local maxima and exceed the
threshold are registered as rings.
Once all hotspots have been processed, all modifications to the
\((r-2)\) -level are undone as its data are no longer needed. By this it
is made ready to be regarded as the next \(r\) -level and a cyclic
permutation among the levels takes place. In practice, this
is performed by accessing the equi-radius levels using the modulo
operation – the radius indices are calculated using \(r\pmod {3}\) .
Sub-pixeling via circle fit:
the detected ridge coordinates are subjected to a circle fit via the
non-exclusive classification induced by the results of the directed
circle Hough transform. The coordinates in the ridge container are
clustered based on annuli masks dictated by the detected rings and
sub-pixel accuracy of the rings parameters is achieved.
Additional notes
The ridge detection can be used to achieve a compressed
representation of the features in the image. This can be done by
storing a hash table associating ridge coordinates as keys with their
corresponding \(X_-\) as values.
The algorithm is not restricted to directed ridges as it can be
replaced by directed edges in case these are better descriptors of the
features in the image. This is achieved by replacing the Hessian by the
Gradient. In this case, the gradient magnitude replacing \(k_-\) has to
be a local maximum along the gradient direction.
To reduce false detections, the radii range is extended such that the
Hough transform is over the range \(\left[ r_{min}-1,r_{max}+1 \right]\) ,
but local maxima are searched for only within the original
range.
In case additional performance per processing unit is required, one
could use a lower resolution in discretising the circle parameter
space. Measuring the effect of this on the accuracy is left for future
work.
Using several colours, the method should be, in principle, extendible
to even higher particle densities.
Application of the proposed algorithm for particle tracking and
discussion of alternative methods
When tracking small light emitting objects, such as fluorescent particles under
the microscope, the appearance of rings is often a sign of the object going out
of focus. Normally this results in the loss of the tracked
object, which is thereafter considered as a hindering background source.
However, these rings carry information of the 3-dimensional position of the
particle.
This has been used for localising a single light scattering magnetic bead
based on matching the radial intensity profile to an empirical set of reference
images [4]}. An axial range of 10 was demonstrated and a
temporal resolution of 25 was achieved using the knowledge of the
particle's previous position.
In fact, for fluorescent particles the radius of the most visible
ring of each particle precisely indicates its axial position – the radius follows a
simple scaling with the particle distance from the focal plane (see
Supplementary fig:rad2z). A similar approach was recently described in [5]}, where the
measurements were, once again, limited to a single particle in the observation
volume, with an axial range of 3 and temporal resolution of
10.
In comparison with other existing methods for 3d particle tracking, the method
presented here is advantageous when it comes to long measurements, temporal
resolution and concurrency, as well as real-time applications.
The confocal scanning microscope requires scanning the volume of interest.
Therefore it is slower and cannot yet provide instantaneous information of the
whole volume.
Unlike holographic microscopy [6]}, [7]}, the proposed method
does not impose the long and heavy computational demands that are restrictive
for real-time applications or when large datasets are required for statistics.
One could expect the optical method discussed here to produce patterns which
are symmetric about the focal plane. When this applies, it may result in an
ambiguity with respect to whether the particle is above or below focus. Our
optical arrangement (see the Methods section) shows clear diffraction rings
only on one side.
Furthermore, as particles approach focus, the radius of the outer-most ring
becomes too small to resolve. For these reasons the focal plane is placed
outside the volume of interest (as reflected in the Supplementary fig:rad2z).
Optical astigmatism offers a means for discriminating between the two sides of
the optical axis [8]}, [9]}, [10]}. The introduction of a
cylindrical lens results in the deformation of a circular spot into an
ellipsoidal one as a fluorescent particle goes further away from focus, with
the ellipse major axis of a particle above focus aligned perpendicular to that of
one below. In Ref. [8]} the axial range was limited to a couple of microns
above and below focus; in Refs. [9]}, [10]} it was restricted to
less than a micron. Within these ranges the tracer image can be approximated by
an elliptical Gaussian pattern. However, extending the range generates
elliptical rings as well; see Figure 1 in Ref. [8]}. This requires dealing with two
species of patterns, spots and rings. Moreover, deforming circular rings into
elliptical ones, the dimensionality of the parameter space increases, and so
does the technical complexity of the image analysis.
Therefore the advantage of the stronger signal, by working closer to focus on
both its sides, is expected to have a heavy computational cost once the range
is extended such that diffraction rings appear as well.
The method presented here requires working away from focus. Ring
visibility decreases as the fluorescence signal spreads over a larger area,
thus setting the lower bound for the exposure time.
Nevertheless, I have found that the fluorescence signal-to-noise ratio allowed
tracking particles moving chaotically at speeds exceeding
400.
Supplementary figures
<FIGURE><FIGURE><FIGURE> | [4] | [
[
6430,
6433
]
] | https://openalex.org/W2136290525 |
69d4a345-d18d-48bb-bfcc-b387c903b096 | examples from the comparative assessment of the algorithm robustness
referred to in the manuscript and described in the Methods section can be found in
Supplementary fig:EDCirclescomparison.
Detailed algorithm
As mentioned in the main text, the standard circle Hough transform is often
avoided not only for its challenging local maximum detection in noisy 3d space
but for its heavy memory requirements as well.
The standard circle Hough transform requires a 3-dimensional array of
accumulators. The coordinates of each array element are the parameters of a
candidate circle. The value of the accumulator at these coordinates indicates
how well this circle is represented in the image.
Code optimisation for high-performance and small memory footprint is achieved
following this scheme:
Image pre-processing step:
the image is smoothed using a Gaussian convolution and the smoothed
image spatial derivatives are calculated using a 5\(\times \) 5
2nd order Sobel operator [1]}. Using
these derivatives, the local least principal curvature \(k_-\) is
estimated as the smaller eigen-value of the Hessian matrix.
One-pass ridge detection and votes collection:
for each pixel in \(k_-\) which is smaller than a pre-defined curvature
threshold (the latter is no greater than zero), the corresponding
\(X_-\) is calculated. If this pixel is found to be a local
minimum along the direction of \(X_-\) , its coordinates are
recorded in the ridge container. At this stage, its votes are collected
as well, that is, the potential circles parameters to which it may
belong.
Sort the votes stack according to the radii:
this allows performing the parameter space incrementing procedure
equi-radius level by level. In order to achieve higher performance, the
votes are further sorted by the row index and then by the column index
exploiting the numpy/cython strided direct data access
[2]}, [3]}. For this reason each circle parameter triple is
represented as an integer using a bijection.
Circle parameter space population and local maximum detection
via radius-dependent smoothing and normalisation:
This is done using two arrays representing a sub-space of the full
circle parameter space. Each consists of 3 consecutive
equi-radius levels; the first for the raw accumulators sub-space, the
second for the smoothed and normalised one, where local maxima are to be
searched for. There are two votes thresholding steps: an integer
threshold for the raw accumulators and a floating point
threshold, a fraction of \(2\pi \) , for the smoothed and normalised array
elements.
In describing the procedure it is assumed that the \(r-2\) and \(r-1\)
levels have already been populated in both sub-space triples and
the \(r\) levels are blank, i.e. all zeros. As long as the votes drawn
from the votes stack point to the same radius, the corresponding
radius-level is populated by incrementing the indicated accumulator. Recall
that the votes are fully sorted hence all votes pointing to a certain
voxel will come out from the stack in a row. Every time a new circle
parameter triple is encountered, its coordinates are recorded as
modified. In case the previously incremented voxel has surpassed the
1st votes threshold its coordinates are recorded as a
hotspot – a circle candidate.
Once there are no more votes for this \(r\) -level, it is mapped to the
second subspace: for each hotspot voxel a spatial average is
calculated, weighted by a Gaussian function whose width depends linearly
on the radius; the value of the average is then normalised by
\(1/r\) .
After mapping all the hotspots of the current \(r\) -level, a local
maximum is searched for among the hotspots of the \((r-1)\) -level
which pass the 2nd votes threshold. This is done
using a nearest neighbours comparison within a \(3\times 3\times 3\)
voxels box. Array elements that are local maxima and exceed the
threshold are registered as rings.
Once all hotspots have been processed, all modifications to the
\((r-2)\) -level are undone as its data are no longer needed. By this it
is made ready to be regarded as the next \(r\) -level and a cyclic
permutation among the levels takes place. In practice, this
is performed by accessing the equi-radius levels using the modulo
operation – the radius indices are calculated using \(r\pmod {3}\) .
Sub-pixeling via circle fit:
the detected ridge coordinates are subjected to a circle fit via the
non-exclusive classification induced by the results of the directed
circle Hough transform. The coordinates in the ridge container are
clustered based on annuli masks dictated by the detected rings and
sub-pixel accuracy of the rings parameters is achieved.
Additional notes
The ridge detection can be used to achieve a compressed
representation of the features in the image. This can be done by
storing a hash table associating ridge coordinates as keys with their
corresponding \(X_-\) as values.
The algorithm is not restricted to directed ridges as it can be
replaced by directed edges in case these are better descriptors of the
features in the image. This is achieved by replacing the Hessian by the
Gradient. In this case, the gradient magnitude replacing \(k_-\) has to
be a local maximum along the gradient direction.
To reduce false detections, the radii range is extended such that the
Hough transform is over the range \(\left[ r_{min}-1,r_{max}+1 \right]\) ,
but local maxima are searched for only within the original
range.
In case additional performance per processing unit is required, one
could use a lower resolution in discretising the circle parameter
space. Measuring the effect of this on the accuracy is left for future
work.
Using several colours, the method should be, in principle, extendible
to even higher particle densities.
Application of the proposed algorithm for particle tracking and
discussion of alternative methods
When tracking small light emitting objects, such as fluorescent particles under
the microscope, the appearance of rings is often a sign of the object going out
of focus. Normally this results in the loss of the tracked
object, which is thereafter considered as a hindering background source.
However, these rings carry information of the 3-dimensional position of the
particle.
This has been used for localising a single light scattering magnetic bead
based on matching the radial intensity profile to an empirical set of reference
images [4]}. An axial range of 10 was demonstrated and a
temporal resolution of 25 was achieved using the knowledge of the
particle's previous position.
In fact, for fluorescent particles the radius of the most visible
ring of each particle precisely indicates its axial position – the radius follows a
simple scaling with the particle distance from the focal plane (see
Supplementary fig:rad2z). A similar approach was recently described in [5]}, where the
measurements were, once again, limited to a single particle in the observation
volume, with an axial range of 3 and temporal resolution of
10.
In comparison with other existing methods for 3d particle tracking, the method
presented here is advantageous when it comes to long measurements, temporal
resolution and concurrency, as well as real-time applications.
The confocal scanning microscope requires scanning the volume of interest.
Therefore it is slower and cannot yet provide instantaneous information of the
whole volume.
Unlike holographic microscopy [6]}, [7]}, the proposed method
does not impose the long and heavy computational demands that are restrictive
for real-time applications or when large datasets are required for statistics.
One could expect the optical method discussed here to produce patterns which
are symmetric about the focal plane. When this applies, it may result in an
ambiguity with respect to whether the particle is above or below focus. Our
optical arrangement (see the Methods section) shows clear diffraction rings
only on one side.
Furthermore, as particles approach focus, the radius of the outer-most ring
becomes too small to resolve. For these reasons the focal plane is placed
outside the volume of interest (as reflected in the Supplementary fig:rad2z).
Optical astigmatism offers a means for discriminating between the two sides of
the optical axis [8]}, [9]}, [10]}. The introduction of a
cylindrical lens results in the deformation of a circular spot into an
ellipsoidal one as a fluorescent particle goes further away from focus, with
the ellipse major axis of a particle above focus aligned perpendicular to that of
one below. In Ref. [8]} the axial range was limited to a couple of microns
above and below focus; in Refs. [9]}, [10]} it was restricted to
less than a micron. Within these ranges the tracer image can be approximated by
an elliptical Gaussian pattern. However, extending the range generates
elliptical rings as well; see Figure 1 in Ref. [8]}. This requires dealing with two
species of patterns, spots and rings. Moreover, deforming circular rings into
elliptical ones, the dimensionality of the parameter space increases, and so
does the technical complexity of the image analysis.
Therefore the advantage of the stronger signal, by working closer to focus on
both its sides, is expected to have a heavy computational cost once the range
is extended such that diffraction rings appear as well.
The method presented here requires working away from focus. Ring
visibility decreases as the fluorescence signal spreads over a larger area,
thus setting the lower bound for the exposure time.
Nevertheless, I have found that the fluorescence signal-to-noise ratio allowed
tracking particles moving chaotically at speeds exceeding
400.
Supplementary figures
<FIGURE><FIGURE><FIGURE> | [9] | [
[
8288,
8291
],
[
8654,
8657
]
] | https://openalex.org/W2066521860 |
8004e0a2-0c76-4b41-a58b-dc5f8dad66c7 | Question \(\textbf {1}\) ([1]}). Does every concordance class contain a unique minimal representative?
| [1] | [
[
27,
30
]
] | https://openalex.org/W2080689500 |
046d647c-13f9-4394-a347-29b2c1ce0e81 | Note that we have \(\widehat{HFK}(K^{\prime }, g_3(K^{\prime })) \ne 0 \) .
Zemke [1]} proved that \(K^{\prime } \leqq K\) induces an injective homomorphism \(\widehat{HFK}(K^{\prime }) \rightarrow \widehat{HFK}(K)\) .
Therefore \(\widehat{HFK}(K, g_3(K^{\prime })) \ne 0\) . This implies that
\( g_{3}(K^{\prime }) \leqq \max \lbrace a \in \mathbb {Z} \ | \ \widehat{HFK}(K, a) \ne 0\rbrace = g_{3}(K).\)
We will prove that \(g_{3}(K) \leqq g_{3}(K^{\prime })\) .
By assumption, we have \(g_{3}(K)=g_{4}(K)\) .
Furthermore, we have \( g_{4}(K)=g_{4}(K^{\prime }) \) since the 4-ball genus is a concordance invariant.
Therefore we have
\( g_{3}(K)= g_{4}(K)=g_{4}(K^{\prime }) \leqq g_{3}(K^{\prime }).\)
This implies that \(g_{3}(K^{\prime })=g_{3}(K)\) .
| [1] | [
[
82,
85
]
] | https://openalex.org/W2982496727 |
bcedad1c-6128-4c1f-aa95-5f1acf9ca5ac | When looking at the particle listings collected by the Particle Data Group (PDG) [1]}, three \(\omega \) states, the \(\omega (2205)\) , \(\omega (2290)\) , and \(\omega (2330)\) , are listed as further states. In Ref. [2]}, the Lanzhou group studied the possibility of the \(\omega (2290)\) and \(\omega (2330)\) being the \(\omega (4^3S_1)=\omega (4S)\) state, and the \(\omega (2205)\) being the \(\omega (3^3D_1)=\omega (3D)\) state. However, a discrepancy between the theoretical result and the experimental data for the total width remains under the above assignment [2]}. A main reason is that the \(\omega (2205)\) , \(\omega (2290)\) , and \(\omega (2330)\) states have not been established in experiment. Their resonance parameters can only serve as a reference for constructing the \(\omega (4S)\) and \(\omega (3D)\) .
Additionally, when putting these experimentally reported \(\omega \) states together, we can clearly see the differences in their resonance parameters, as shown in Fig. REF , where five \(\omega \) states accumulate in the same energy range, which is puzzling. This unclear situation should be resolved by further theoretical and experimental effort.
<FIGURE> | [1] | [
[
73,
76
]
] | https://openalex.org/W2899140785 |
41d80deb-d226-4d31-94c4-2670843a6d73 |
where \(p_2\) , \(p_3\) , and \(p_4\) are the four-momenta of the final states \(\pi ^0\) , \(\pi ^0\) and \(\omega \) , respectively, and
\(\tilde{g}_{\rho \sigma \alpha \beta }=\frac{1}{2}(\tilde{g}_{\rho \alpha }\tilde{g}_{\sigma \beta }+\tilde{g}_{\rho \beta }\tilde{g}_{\sigma \alpha })-\frac{1}{3}\tilde{g}_{\rho \sigma }\tilde{g}_{\alpha \beta }\) .
The \(\rho ^*\) , \(b_1\) and \(f_2\) denote \(\rho \) /\(\rho (1450)\) , \(b_1(1235)\) and \(f_2(1270)\) , respectively, where their resonance parameters are taken from PDG [1]}, which are collected in Table REF .
The coupling constants included in Eqs. (REF ) - () are calculated from the branching ratios of the corresponding decay modes, which are also collected in Table REF .
Here, the coupling constants involved with the \(\omega (4S)\) and \(\omega (3D)\) can be calculated from the branching ratios of the corresponding decay modes given in Fig. REF .
The branching ratio of the \(\rho (1450)\) decaying to \(\omega \pi ^0\) is adopted to be \(60\%\) , which is estimated by the values of \(\mathcal {B}(\rho (1450)\rightarrow e^+e^-)\times \mathcal {B}(\rho (1450)\rightarrow \omega \pi )=3.7\times 10^{-6}\) [1]} and \(\mathcal {B}(\rho (1450)\rightarrow e^+e^-)=6.2\times 10^{-6}\) [3]}, and the branching ratio of the \(b_{1}(1235)\) decay into \(\omega \pi ^0\) is taken as \(100\%\) [4]}.
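Explicitly, the quoted numbers give \(\mathcal {B}(\rho (1450)\rightarrow \omega \pi ) \approx \frac{3.7\times 10^{-6}}{6.2\times 10^{-6}} \approx 0.60\) , i.e. the \(60\%\) adopted above.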
Besides, the coupling constant \(g_{\rho \omega \pi ^0}\) is fixed to be \(16.0\, \rm {GeV}^{-1}\) as estimated by the QCD sum rules [5]}.
<TABLE> | [3] | [
[
1256,
1259
]
] | https://openalex.org/W3174592032 |
732655db-0124-4318-8319-4b0f2162ebf4 | The lattice size was further studied in [1]} and [2]} in the wider context of plane convex bodies. It was shown in [1]} that in dimensions 2 and 3
a so-called reduced basis computes \(\operatorname{ls_\square }(P)\) . The generalized Gauss algorithm, analyzed by Kaib and Schnorr in [4]}, is a fast algorithm for finding a reduced basis in dimension 2. As explained in [1]}, when this algorithm is used to find the lattice size of a lattice polygon, it outperforms the algorithm of [6]}, [7]}.
| [6] | [
[
480,
483
]
] | https://openalex.org/W2124549878 |
3ad0440c-b03b-4f58-af9d-6702cc738cef | A fast algorithm for finding a reduced basis with respect to a convex body \(P\subset \mathbb {R}^2\) was given in [1]}.
It was shown in [2]} and [3]} that if the standard basis is reduced then one can easily find \(\operatorname{ls_\square }(P)\) and \(\operatorname{ls_\Delta }(P)\) , as we summarize in Theorem REF below.
| [3] | [
[
147,
150
]
] | https://openalex.org/W4206008544 |
8f38fd4a-351c-4c6f-a884-ac8fc21ab4b8 | Remark 2.2 It was shown in Lemma 3 of [1]} that for any lattice convex polygon \(P\subset \mathbb {R}^2\) there exist numbers \(w,h\ge 0\) with \(wh\le 4 A(P)\) , such that \([0,w]\times [0,h]\)
contains a lattice-equivalent copy of \(P\) . Note that Theorem REF strengthens this result by replacing the constant 4 with \(\frac{8}{3}\) , and also extends it to plane convex bodies.
| [1] | [
[
38,
41
]
] | https://openalex.org/W2143379314 |
b915ab75-63ea-4a69-9a0a-3a0c994e9bc1 | In Section , we extend the matching of the model parameters to the usual cosmological parameters \(H_0\) , \(\Omega _M\) and \(\Omega _\Lambda \) presented in [1]} and [2]}, by taking into account the quantum backreaction effect of the scalar field on the Hubble rate, which is relevant in the most recent stages of the Universe's evolution.
| [2] | [
[
170,
173
]
] | https://openalex.org/W2766797976 |
d0f16322-e062-4492-bbb2-e6420d2a2fe1 | The model parameters have to be matched to the current cosmological parameters in order to study the model's actual predictions. In the last few e-foldings the backreaction of the scalar field on the background metric due to its energy-momentum tensor becomes very important, to the point that it eventually constitutes a fundamental contribution to the total energy density of the Universe, and in the future it will drive the expansion. A full solution of the problem would unavoidably be numerical. However, some simplified understanding is possible in analytical form. For the moment we approximate the FLRW background simply as a matter-dominated Universe, but in Section we will refine the matching with a more consistent account of the quantum backreaction at late times.
As usual in the literature, let us call \(\Omega _M\) the energy density fraction today due to non-relativistic matter (baryons and dark matter), \(\Omega _R\) the small fraction due to the energy density in radiation and, assuming zero spatial curvature (CMB data from Planck combined with baryon acoustic oscillations (BAO) give the constraint \(\Omega _K=0.0007\pm 0.0019\) on the spatial curvature parameter, see [1]}), \(\Omega _\Lambda \simeq 1-\Omega _M\) is the dark energy contribution (in the form of a cosmological constant). The Hubble parameter today is \(H_0\) .
| [1] | [
[
1179,
1182
]
] | https://openalex.org/W3105678606 |
9ca242a0-62c4-4dd5-a88b-9a488181e75e | where \({\mathcal {L}}^{-1}\) is the inverse Laplace transform, which can be evaluated numerically or expressed as a line integral (called Bromwich or Fourier-Mellin integral), see e.g. Section 15.12 of [1]}.
Once the PDF \(p\left(\left[h\right]_{2}\right)\) is known, either from eq. (REF ) or via eq. (REF ), the goal probability \({\rm P}\left(\left[H^2\right]_{V_2}<H_2^2\right)={\rm P}\left(\left[h\right]_{2}<1\right)\) can be found by integration.
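As a minimal illustration of the numerical route (not the actual \(p\left(\left[h\right]_{2}\right)\) of the text), the snippet below inverts the toy transform \(F(s)=1/(s+1)\) with mpmath and checks it against the exact answer \(e^{-t}\) .

```python
import mpmath as mp

F = lambda s: 1 / (s + 1)  # toy Laplace transform; its inverse is exp(-t)

for t in (0.5, 1.0, 2.0):
    p = mp.invertlaplace(F, t, method='talbot')  # numerical inversion
    print(t, p, mp.e**(-t))                      # compare with exact value
```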
| [1] | [
[
204,
207
]
] | https://openalex.org/W2995136929 |
fa1a345b-b83f-41c0-a928-2e19c75481a4 | The transport properties of the conduction electron system of MnIr in the
temperature range around room temperature are governed by electron-phonon
interaction and the exchange interaction mediated by \(H_{c-S}\) as we argue
below. We assume the system to be anisotropic in spin space, with preferred
direction along the \(z-\) axis. We will consider both three-dimensional and
two-dimensional spin fluctuations, where the \(3d\) model appears to describe
the experiment [1]} better, as we shall see.
| [1] | [
[
473,
476
]
] | https://openalex.org/W2135947060 |
cb8e1af1-78a0-4b55-9ebe-e579c1be0ac1 | Fast forward to 2022, the HFLAV analysis has added to the previous results from Babar, Belle and LHCb [1]}, [2]}, [3]}, [4]} the most recent results from Belle [5]}, [6]}, [7]} and LHCb [8]}, [9]}, [10]}. The HFLAV determination finds lower average values and smaller errors, namely [11]}
\(R(D) = 0.358 \pm 0.025 \pm 0.012 \qquad \qquad R(D^{\ast })= 0.285 \pm 0.010 \pm 0.008\)
| [1] | [
[
102,
105
]
] | https://openalex.org/W2278462536 |
935f821c-f1b1-46e2-8579-636cf48841ba | which coincides with the result obtained in [1]} and shows that this energy density evolves like matter.
| [1] | [
[
44,
47
]
] | https://openalex.org/W2890305526 |
46ced27d-d922-451a-8c11-7687aa23f2c8 | Gradient descent, a simple and fundamental algorithm, is known to find an \(\epsilon \) -approximate first-order stationary point of problem (REF ) (where \(\Vert \nabla f(\mathbf {x})\Vert \le \epsilon \) ) in \({\cal O}(\epsilon ^{-2})\) iterations [1]}. This rate is optimal among the first-order methods under the gradient Lipschitz condition [2]}, [3]}. When additional structure is assumed, such as the Hessian Lipschitz condition, improvement is possible.
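A minimal sketch of this guarantee in code: run gradient descent with step size \(1/L\) until \(\Vert \nabla f(\mathbf {x})\Vert \le \epsilon \) . The toy objective and the Lipschitz constant below are assumptions for illustration.

```python
import numpy as np

def gradient_descent(grad_f, x0, L, eps=1e-3, max_iter=10**6):
    """Run x <- x - grad_f(x)/L until an eps-approximate first-order
    stationary point (||grad f(x)|| <= eps) is reached."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - g / L
    return x, max_iter

# Toy nonconvex objective f(x) = x^4/4 - x^2/2, so grad f(x) = x^3 - x.
grad = lambda x: x**3 - x
x_stat, iters = gradient_descent(grad, x0=[0.9], L=4.0)
print(x_stat, iters)  # converges near the stationary point x = 1
```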
| [1] | [
[
252,
255
]
] | https://openalex.org/W3141595720 |
2852fcd0-2fb0-44e5-a423-114faba0a98a | All of the above methods [1]}, [2]}, [3]}, [4]}, [5]} share the \({\cal O}(\epsilon ^{-7/4}\log \frac{1}{\epsilon })\) complexity, which has a \({\cal O}(\log \frac{1}{\epsilon })\) factor. To the best of our knowledge, even when applying the methods designed to find second-order stationary points to the easier problem of finding first-order stationary points, the \({\cal O}(\log \frac{1}{\epsilon })\) factor still cannot be removed. On the other hand, almost all the existing methods are complex, with nested loops. Even the single-loop method proposed in [5]} needs the negative curvature exploitation procedure.
| [1] | [
[
25,
28
]
] | https://openalex.org/W2613615043 |
67c796b7-3228-40d7-8e0d-1e538e366c4c | All of the above methods [1]}, [2]}, [3]}, [4]}, [5]} share the \({\cal O}(\epsilon ^{-7/4}\log \frac{1}{\epsilon })\) complexity, which has a \({\cal O}(\log \frac{1}{\epsilon })\) factor. To the best of our knowledge, even when applying the methods designed to find second-order stationary points to the easier problem of finding first-order stationary points, the \({\cal O}(\log \frac{1}{\epsilon })\) factor still cannot be removed. On the other hand, almost all the existing methods are complex, with nested loops. Even the single-loop method proposed in [5]} needs the negative curvature exploitation procedure.
| [2] | [
[
31,
34
]
] | https://openalex.org/W2546420264 |
1a6f96b2-ab53-4001-949f-017aa01f4ebe | All of the above methods [1]}, [2]}, [3]}, [4]}, [5]} share the \({\cal O}(\epsilon ^{-7/4}\log \frac{1}{\epsilon })\) complexity, which has a \({\cal O}(\log \frac{1}{\epsilon })\) factor. To the best of our knowledge, even when applying the methods designed to find second-order stationary points to the easier problem of finding first-order stationary points, the \({\cal O}(\log \frac{1}{\epsilon })\) factor still cannot be removed. On the other hand, almost all the existing methods are complex, with nested loops. Even the single-loop method proposed in [5]} needs the negative curvature exploitation procedure.
| [4] | [
[
43,
46
]
] | https://openalex.org/W3105314490 |
252be7f1-9d87-4ca3-986a-5602b270acd3 | In contrast with other nonconvex accelerated methods, our method does not invoke any additional techniques, such as negative curvature exploitation, the optimization of regularized surrogate functions, or the minimization of cubic Newton steps. In particular, although the single-loop algorithm proposed in [1]} is very simple, it still needs negative curvature exploitation, which requires evaluating the objective function. Our method avoids negative curvature exploitation, and thus it is possible to extend it to other problems, such as finite-sum nonconvex optimization.
| [1] | [
[
308,
311
]
] | https://openalex.org/W2963487351 |
93d15bcd-fe0f-4382-8666-756a88f4d629 | Consequently, for a multi-agent problem, we see the need for
two types of predicates, viz. local and global. Local
predicates are of the form \(p_{lo}:\mathcal {S}_A\rightarrow \mathbb {B}\) whereas global predicates have the form
\(p_{gl}:\mathcal {S}\rightarrow \mathbb {B}\) where \(\mathbb {B}\)
is the Boolean space. We introduce two simple extensions of
\(\mathtt {reach}\) [1]} to demonstrate the
capabilities of this distinction.
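A minimal typed sketch of this distinction (the concrete state types are assumptions for illustration):

```python
from typing import Callable, Tuple

AgentState = Tuple[int, int]          # assumed: an agent's (x, y) cell
GlobalState = Tuple[AgentState, ...]  # assumed: the tuple of all agents' cells

LocalPredicate = Callable[[AgentState], bool]    # p_lo : S_A -> B
GlobalPredicate = Callable[[GlobalState], bool]  # p_gl : S -> B

# A reach-style local predicate: this agent has reached its goal cell.
at_goal: LocalPredicate = lambda s: s == (5, 5)
# A global predicate: no two agents occupy the same cell.
collision_free: GlobalPredicate = lambda gs: len(set(gs)) == len(gs)

print(at_goal((5, 5)), collision_free(((0, 0), (5, 5))))
```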
| [1] | [
[
383,
386
]
] | https://openalex.org/W2970564042 |
5f2f9961-4d3c-49a6-910d-69592dbcc5d5 | We use Google Cloud Speechhttps://cloud.google.com/speech-to-text as an off-the-shelf speech recognizer to convert speech to text. A multilingual sentence embedding model [1]} is used to obtain a vector corresponding to the query, which is then used to retrieve the most similar how-to by cosine similarity in the UGIF-DataSet corpus.
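A minimal sketch of this retrieval step is shown below; the embedding model name and the toy how-to corpus are assumptions standing in for the cited model [1]} and the UGIF-DataSet corpus.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

howtos = ["Turn on dark mode", "Change the device language", "Enable Wi-Fi"]
howto_vecs = model.encode(howtos, normalize_embeddings=True)

query = "how do I switch my phone to dark theme"  # ASR transcript
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With L2-normalized embeddings, cosine similarity reduces to a dot product.
scores = howto_vecs @ query_vec
print(howtos[int(np.argmax(scores))])  # most similar how-to
```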
| [1] | [
[
171,
174
]
] | https://openalex.org/W3039695075 |
d62a51b1-4485-452a-a6b6-f8682effef75 | We also evaluated our best performing model on the PixelHelp dataset [1]}. Table REF shows that UGIF-DataSet is a harder dataset with significantly greater headroom for improvement especially in non-EN languages.
<FIGURE><TABLE> | [1] | [
[
69,
72
]
] | https://openalex.org/W3034392229 |
a53ea7b1-94ec-4e6e-96c4-c22dec411643 | For our macroanalysis, we want to see how our selected texts divide among the different academic disciplines. As a base map for the disciplinary space (analogous to a world map for geospatial space), we use the UCSD Map of Science [1]} which was created by mining scientific and humanities journals indexed by Thomson Reuters' Web of Science and Elsevier's Scopus. The map represents 554 sub-disciplines—e.g., Contemporary Philosophy, Zoology, Earthquake Engineering—that are further aggregated into 13 core disciplines, appearing similar to continents on the map—e.g., Biology, Earth Sciences, Humanities. Each of the 554 sub-disciplines has a set of journals and keywords associated with it.
| [1] | [
[
231,
234
]
] | https://openalex.org/W2036137014 |
6a81ad32-b63c-4e2e-89ff-abcb7889394e | Table REF shows the top topics when the \(k=60\) topic model is queried using the single word `anthropomorphism'. The topic model checking problem [1]}—i.e., how to assess the quality of the model’s topics—remains an important open problem in topic modeling. Nevertheless, most of the topics in the model can be quickly summarized. Inspection of this list indicates that `anthropomorphism' relates most strongly to a theological topic (38), a biological topic (16), a philosophical topic (51), an anthropological topic (58), and a child development topic (12). The topic model thus serves to disambiguate the different senses of 'anthropomorphism', especially between contexts where the discussion is about anthropomorphized deities (38) and contexts where it is about nonhuman animals (16), with the second topic being the most obvious attractor for researchers interested in comparative psychology. The second-to-last topic (1) is targeted on bibliographic citations, and is dominated by bibliographic abbreviations and some common German and French words that were not in the English language stop list used during initial corpus preparation. Although from one perspective this may seem like a `junk' topic, this topic is nonetheless very useful to a scholar seeking citations buried in the unstructured pages in the corpus.
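A minimal sketch of querying a topic model with a single word, using gensim on a toy corpus (the \(k=60\) model and its corpus are not reproduced here):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["anthropomorphism", "deity", "worship"],
         ["anthropomorphism", "animal", "cognition"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

word_id = dictionary.token2id["anthropomorphism"]
# Topics most associated with the query word, ranked by probability.
print(sorted(lda.get_term_topics(word_id, minimum_probability=1e-8),
             key=lambda t: -t[1]))
```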
<TABLE><TABLE> | [1] | [
[
149,
152
]
] | https://openalex.org/W2174706414 |
b68aac67-4463-4974-918b-e6863ab6e439 | As QAs can only minimize a specific objective function, any other problem must be cast/reduced to this Hamiltonian.
Casting computes the coefficients of the Hamiltonian such that the global minima of the Hamiltonian represent the global optima of the problem of interest [1]}, [2]}, [3]}.
Embedding maps the problem graph to the topology of the QA.
As QAs have limited connectivity between qubits, embedding encodes a program qubit with higher connectivity by using a chain of physical qubits.
The problem of limited connectivity exists even on most existing gate-based quantum computers and can be overcome by inserting SWAP operations [4]}, [5]}, [6]}.
However, a similar approach is impractical for QAs as they can only execute a single QMI. Unlike gate-based systems, QAs available today with 5000-plus qubits [7]}, [8]}, [9]} are much larger, scale faster,
and have the potential to power a wide range of real-world applications—including, but not limited to,
planning [10]}, scheduling [11]}, [12]}, constraint satisfaction problems [13]}, Boolean satisfiability (SAT) [14]}, [2]}, matrix factorization [16]}, cryptography [17]}, [18]}, fault detection and system diagnosis [19]}, compressive sensing [20]}, [21]}, control of automated vehicles [22]}, finance [23]}, material design [24]}, and protein folding [25]}.
Although promising, QA hardware suffers from various drawbacks such as noise, device errors, limited programmability and low annealing time, which degrade its reliability [26]}, [3]}, [28]}.
Addressing these limitations requires device-level enhancements that may span generations of QAs.
Therefore, software techniques to improve the reliability of QAs are an important area of research [29]}, [30]}, [31]}, [32]}, [33]}, [34]}, [35]}.
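As a minimal illustration of casting (using the dimod library; the constraint and penalty below are toy assumptions, not from the cited works), the constraint \(x_0+x_1=1\) can be encoded by expanding the penalty \((x_0+x_1-1)^2\) with \(x^2=x\) for binary variables:

```python
import dimod

# (x0 + x1 - 1)^2 = -x0 - x1 + 2*x0*x1 + 1 for binary x0, x1.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=1.0)

# On a QA this BQM would be embedded and annealed; here we brute-force it.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # e.g. {0: 0, 1: 1} with energy 0.0
```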
| [26] | [
[
1497,
1501
]
] | https://openalex.org/W2564229214 |
1b0a7caa-d92e-4360-9f5d-84b5397ffe45 | where \(e^{AB}_{k|j}\) is the error rate introduced by Eve's faked state with phase \(j\pi /2\) given that Alice's phase is \(\theta =j\pi /2\) . \(e^{AE}_{k|j}\) is the error rate of Eve for given \(k\) and \(j\) . The error rates \(e^{AB}\) and \(e^{AE}\) are shown in Fig.REF (b), which clearly shows that the error rate between Alice and Eve is much smaller than the error rate between Alice and Bob. Here we remark that although \(e^{AE}\) is smaller than \(e^{AB}\) , this does not mean that no secret key can be derived, because post-processing is not symmetric between Eve and Bob. In fact, to show that our attack succeeds and the QKD system is insecure, we must show that the lower bound of the key rate estimated when Eve implements her attack but the legitimate parties ignore it is larger than the upper bound of the key rate under the given attack [1]}. For example, our analysis shows that, when our attack is implemented but the legitimate parties ignore it, the key rate per pulse estimated by Alice and Bob can be larger than \(10^{-3}\) in some parameter regimes. But in fact our attack is an intercept-and-resend attack (Eve measures all the signals and resends her prepared pulses to Bob), which corresponds to an entanglement-breaking channel, under which no secret key can be generated. In other words, the upper bound of the key rate under our attack is zero. Thus all of the estimated key is insecure. In the following, we give a detailed analysis.
| [1] | [
[
884,
887
]
] | https://openalex.org/W2149344243 |
418e40b8-3747-4243-b32a-b6c86da52581 | Existing sharded blockchain designs generally use a static hash-based object-to-shard assignment [1]}, [2]}, [3]}, [4]}, [5]}, [6]}.
The hash space of object identifiers is divided equally between shards, and hashing the identifier of an object allows clients and miners to deterministically determine its location without using additional indexing services. In the long run, hash-based allocation equally spreads the load across shards but causes loss of data locality. Frequently interacting accounts may be spread across multiple shards, causing costly cross-shard interactions [7]}. Furthermore, a fixed assignment cannot always react to activity bursts of accounts located in a single shard, causing short-term load imbalance. Both problems become more pronounced with an increasing number of shards and with an increasing number of accounts involved in each transaction, e.g., as the result of smart contract executions.
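A minimal sketch of such a static assignment (the shard count and identifier format are assumptions for illustration):

```python
import hashlib

N_SHARDS = 16  # assumed shard count

def shard_of(object_id: str) -> int:
    """Deterministically map an object identifier to a shard, so clients and
    miners can locate it without an indexing service."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % N_SHARDS

print(shard_of("account-42"))  # the same input always maps to the same shard
```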
<FIGURE> | [1] | [
[
97,
100
]
] | https://openalex.org/W3202227545 |
0733388e-6344-43fc-b897-47864a10bba6 | Existing sharded blockchain designs generally use a static hash-based object-to-shard assignment [1]}, [2]}, [3]}, [4]}, [5]}, [6]}.
The hash space of object identifiers is divided equally between shards, and hashing the identifier of an object allows clients and miners to deterministically determine its location without using additional indexing services. In the long run, hash-based allocation equally spreads the load across shards but causes loss of data locality. Frequently interacting accounts may be spread across multiple shards, causing costly cross-shard interactions [7]}. Furthermore, a fixed assignment cannot always react to activity bursts of accounts located in a single shard, causing short-term load imbalance. Both problems become more pronounced with an increasing number of shards and with an increasing number of accounts involved in each transaction, e.g., as the result of smart contract executions.
<FIGURE> | [2] | [
[
103,
106
]
] | https://openalex.org/W2794533297 |
746bc5df-6735-4988-a361-3aed1c3cd1e5 | Contributions.
We present Shard Scheduler, a novel approach for making and enforcing object placement and migration decisions in sharded, account-based blockchains. Our scheduler balances the load between shards and improves data locality.
It leverages the possibility to initiate account migrations when necessary and seeks to maximize the global throughput of the blockchain.
At the same time, Shard Scheduler remains simple, deterministic, and verifiable for all the miners in the network to prevent abuse. Shard Scheduler is executed by the miners and does not require modifications of the clients, who are nonetheless able to verify the legitimacy of migration decisions taken as part of their transaction execution. Finally, Shard Scheduler makes scheduling decisions worth enacting for rational miners through economic incentives.
We do not seek to propose novel mechanisms for handling cross-shard transactions and account migrations, but rather build upon the different proposals by other authors [1]}, [2]}, [3]}, [4]}, [5]}, [6]}. We only make minimal and common assumptions on the capabilities of the underlying sharded blockchain, allowing Shard Scheduler to be implemented on top of a vast range of account-based blockchains, from the upcoming evolution of Ethereum [6]}, to current systems such as Zilliqa [8]}.
| [3] | [[1021, 1024]] | https://openalex.org/W2535104337 |
4d4634eb-faea-4b68-a895-22995c4c152e | Previous work on building detection has shown that mixing cross entropy loss with Dice loss is effective [1]}. We observed in informal experiments some further improvement with a closely related formulation, Focal Tversky Loss, which is defined as:
\(L_\mathrm {FTL}(y, \hat{y}, \beta , \gamma ) = \left( 1 - \frac{\sum _i y_i \hat{y}_i + \epsilon }{\sum _i (1-\beta )y_i + \sum _i \beta \hat{y}_i + \epsilon } \right) ^\gamma \ ,\)
| [1] | [[105, 108]] | https://openalex.org/W2806581075 |
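A direct NumPy transcription of the \(L_\mathrm{FTL}\) formula above; the \(\beta\) and \(\gamma\) defaults are common choices from the Tversky-loss literature, not values given in this row:

```python
import numpy as np

def focal_tversky_loss(y, y_hat, beta=0.7, gamma=0.75, eps=1e-7):
    # Tversky-type index with beta trading off the weight on the
    # target mass vs. the prediction mass, raised to the power gamma,
    # exactly as in the displayed formula.
    y, y_hat = np.ravel(y), np.ravel(y_hat)
    tversky = (np.sum(y * y_hat) + eps) / (
        np.sum((1.0 - beta) * y) + np.sum(beta * y_hat) + eps)
    return (1.0 - tversky) ** gamma

print(focal_tversky_loss(np.array([1, 1, 0, 0]),
                         np.array([0.9, 0.6, 0.2, 0.1])))
```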
7d1a80c0-1c30-43d1-9bcf-408318a4e047 | If we assume that dark matter follows a Maxwell-Boltzmann distribution in the rest frame of the Galactic Center, with dark matter velocity dispersion \(v_d=270\) km/s, then in the rest frame of Earth, the dark matter velocity distribution is given by [1]}
\(f(u_\chi )=\sqrt{\dfrac{3}{2\pi }}\dfrac{u_\chi }{v_\oplus v_d}\left(\mathrm {exp}\left[-\frac{3(u_\chi -v_\oplus )^2}{2v_d^2}\right]-\mathrm {exp}\left[-\frac{3(u_\chi +v_\oplus )^2}{2v_d^2}\right]\right)\,,\)
| [1] | [[252, 255]] | https://openalex.org/W2589953753 |
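The boosted Maxwell-Boltzmann distribution above can be sanity-checked numerically; \(v_\oplus = 220\) km/s is an assumed illustrative value (the row fixes only \(v_d = 270\) km/s):

```python
import numpy as np

V_D = 270.0      # dark matter velocity dispersion v_d, km/s (from the text)
V_EARTH = 220.0  # Earth's speed v_earth, km/s -- assumed for illustration

def f_chi(u):
    # Transcription of the displayed f(u_chi).
    pref = np.sqrt(3.0 / (2.0 * np.pi)) * u / (V_EARTH * V_D)
    return pref * (np.exp(-3.0 * (u - V_EARTH) ** 2 / (2.0 * V_D ** 2))
                   - np.exp(-3.0 * (u + V_EARTH) ** 2 / (2.0 * V_D ** 2)))

u = np.linspace(0.0, 2000.0, 20001)
print(np.trapz(f_chi(u), u))  # ~1: the speed distribution is normalized
```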
8e948a05-ca40-425a-9030-fe9d4cf0c7d0 | Relation to the operator of [1]}.
Interestingly, this construction of the coarse graph \({\widehat{G}}\) coincides with the coarse Laplace operator for a sparsified vertex set \({\widehat{V}}\) constructed by [1]}.
We will use this view of the Laplace operator later; hence we briefly introduce the construction of [1]} (adapted to our setting):
Given the vertex map \(\pi : V\rightarrow {\widehat{V}}\) , we set a \(n\times N\) matrix \(P\) by \(P[r, i] = \left\lbrace \begin{array}{ll}\frac{1}{\left|{\pi }^{-1}({\hat{v}}_r)\right|} & \text{ if } v_{i} \in {\pi }^{-1}({\hat{v}}_r) \\ 0 & \text{ otherwise }\end{array}\right.\) .
In what follows, we denote \(\gamma _r:= \left|{\pi }^{-1}({\hat{v}}_r)\right|\) for any \(r\in [1,n]\) , which is the size of the cluster of \({\hat{v}}_r\) in \(V\) .
\(P\) can be considered as the weighted projection matrix of the vertex set from \(V\) to \({\widehat{V}}\) .
Let \(P^+\) denote the Moore-Penrose pseudoinverse of \(P\) , which can be intuitively viewed as a way to lift a function on \({\widehat{V}}\) (a vector in \({\mathbb {R}}^n\) ) to a function over \(V\) (a vector in \({\mathbb {R}}^N\) ).
As shown in [1]}, \(P^+\) is the \(N\times n\) matrix where \(P^+[i, r] = 1\) if and only if \({\pi }(v_i) = {\hat{v}}_r\) . See Appendix REF for a toy example.
Finally, [1]} defines an operator for the coarsened vertex set \({\widehat{V}}\) to be
\(\tilde{L}_{\widehat{V}}= (P^+)^T {L}P^+\) . Intuitively, \(\tilde{L}_{\widehat{V}}\) operates on \(n\) -vectors. For any \(n\) -vector \({\hat{f}}\in {\mathbb {R}}^n\) , \(\tilde{L}_{\widehat{V}}{\hat{f}}\) first lifts \({\hat{f}}\) to an \(N\) -vector \(f = P^+ {\hat{f}}\) , then performs \({L}\) on \(f\) , and then projects it back down to \(n\) dimensions via \((P^+)^T\) .
| [1] | [[28, 31], [212, 215], [318, 321], [1174, 1177], [1336, 1339]] | https://openalex.org/W2960658350 |
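A toy instance of this construction (a path graph on four vertices collapsed onto two super-nodes) reproduces both the claimed 0/1 form of \(P^+\) and the coarse operator; the example graph is an assumption for illustration:

```python
import numpy as np

N, n = 4, 2
pi = np.array([0, 0, 1, 1])             # vertex map pi: V -> V-hat

P = np.zeros((n, N))
for i, r in enumerate(pi):
    P[r, i] = 1.0 / np.sum(pi == r)     # P[r, i] = 1/|pi^{-1}(v_r)|

P_plus = np.linalg.pinv(P)              # Moore-Penrose pseudoinverse
# As stated above, P^+ is exactly the 0/1 cluster-membership matrix:
assert np.allclose(P_plus, (P > 0).T.astype(float))

A = np.zeros((N, N)); A[0, 1] = A[1, 2] = A[2, 3] = 1; A += A.T
L = np.diag(A.sum(axis=1)) - A          # Laplacian of the path graph
L_coarse = P_plus.T @ L @ P_plus        # tilde-L = (P^+)^T L P^+
print(L_coarse)                         # [[1, -1], [-1, 1]]
```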
1363c1c5-700a-4548-8ad9-26d06bbbb72e | We now have an input graph \(G = (V, E)\) and a coarse graph \({\widehat{G}}\) induced from the sparsified node set \({\widehat{V}}\) , and we wish to compare their corresponding Laplace operators. However, as \({\mathcal {O}_G}\) operates on \({\mathbb {R}}^N\) (i.e, functions on the vertex set \(V\) of \(G\) ) and \({\mathcal {O}_{{\widehat{G}}}}\) operates on \({\mathbb {R}}^n\) , we will compare them by their effects on “corresponding" objects. [1]}, [2]} proposed to use the quadratic form to measure the similarity between the two linear operators. In particular, given a linear operator \(A\) on \({\mathbb {R}}^N\) and any \(x\in {\mathbb {R}}^N\) , \({\mathsf {Q}}_A(x) = x^T A x\) . The quadratic form has also been used for measuring spectral approximation under edge sparsification.
The proof of the following result is in Appendix REF .
| [1] | [[459, 462]] | https://openalex.org/W2963096160 |
ab417d57-176e-4f51-b6b7-0003d14f63c7 | Optimization. All models are trained with the Adam optimizer [1]} with a learning rate of 0.001 and batch size 600. We use Pytorch [2]} and Pytorch Geometric [3]} for our entire implementation. We train on the graphs one by one: for each graph, we train the model to minimize the loss for a certain number of epochs (see hyper-parameters for details) before moving to the next graph. We save the model that performs best on the validation graphs and test it on the test graphs.
| [2] | [[127, 130]] | https://openalex.org/W2899771611 |
6db03bdb-b189-4039-8bda-1133d39bbace | Conjecture 1.2 (Alspach-Liversidge [1]})
Every abelian group is strongly sequenceable.
| [1] | [[35, 38]] | https://openalex.org/W3081200114 |
c65e4fff-ff99-4c37-97f0-3ff61c13660c |
\(k \le 9\) [1]};
\(k\le 12\) when \(G\) is cyclic and \(n=pt\) , where \(p\) is prime and \(t\le 4\) (see [2]} and [3]});
\(k\le 12\) when \(G\) is cyclic and \(n=mt\) , where all the prime factors of \(m\) are bigger than \(k!/2\) and \(t\le 4\) [3]};
\(k = n-3\) when \(n\) is prime and \(\Sigma A \ne 0\) [2]};
\(k = n-2\) when \(G\) is cyclic and \(\Sigma A \ne 0\) [6]};
\(k = n-1\) [7]}, [8]};
\(n\le 21\) and \(n\le 23\) when \(\Sigma A=0\) [9]};
\(n\le 25\) when \(G\) is cyclic and \(\Sigma A=0\) [10]}.
| [9] | [[476, 479]] | https://openalex.org/W2962886618 |
2574c022-2dc5-409b-96c2-fc5424ff618c |
\(k \le 9\) [1]};
\(k\le 12\) when \(G\) is cyclic and \(n=pt\) , where \(p\) is prime and \(t\le 4\) (see [2]} and [3]});
\(k\le 12\) when \(G\) is cyclic and \(n=mt\) , where all the prime factors of \(m\) are bigger than \(k!/2\) and \(t\le 4\) [3]};
\(k = n-3\) when \(n\) is prime and \(\Sigma A \ne 0\) [2]};
\(k = n-2\) when \(G\) is cyclic and \(\Sigma A \ne 0\) [6]};
\(k = n-1\) [7]}, [8]};
\(n\le 21\) and \(n\le 23\) when \(\Sigma A=0\) [9]};
\(n\le 25\) when \(G\) is cyclic and \(\Sigma A=0\) [10]}.
| [10] | [[534, 538]] | https://openalex.org/W1889896615 |
6f3b5593-03ab-4e7e-9d78-6025aadce9c1 | In this section, we apply a method that relies on the Non-Vanishing Corollary of the Combinatorial Nullstellensatz, see [1]}, [2]}. Given a prime \(p\) (in the following, \(p\) will always be assumed to be a prime), this corollary allows us to obtain a non-zero point of suitable polynomials on \(\mathbb {Z}_p\) derived from the ones defined in [3]}. Then, after some manipulations, we surprisingly obtain a polynomial whose expression does not depend on the cardinality of \(A\) , which allows us to obtain a result that is very general in the parameter \(k=|A|\) .
| [3] | [[357, 360]] | https://openalex.org/W2962902758 |
2a6ac5aa-c14d-491c-a9cf-333db249b951 | Assumption 2 [1]}. Denote \(t_n\) as the number of leaves in each tree, and \(a_n\) as the number of training data points used to build each tree. Letting \(a_n \rightarrow \infty \) and \(t_n \rightarrow \infty \) , we assume \(t_n (\log a_n)^9/a_n \rightarrow 0\) . This assumption, as a regularity condition, controls the rate at which the trees in the random forest grow.
| [1] | [[13, 16]] | https://openalex.org/W2162387923 |
e23b7122-07d3-4b6e-b6bc-4f8aba87f30e | Importantly, if the set of instruments changes during the two-step selection process (i.e., certain instruments are determined to violate exclusion or relevance requirements, and are thus dropped), we will repeat both of the lasso selection steps (1 and 2) with the remaining instruments, until the selection ceases to change. This iterative approach increases the likelihood that our selected instruments are simultaneously valid and strong. Moreover, we expect this procedure should work well with sufficient data, because when \(V_i\) contains only excluded instruments (satisfied asymptotically based on Theorem REF ), [1]} shows that asymptotically \(S_i\) will be the linearly optimal set of instruments. We offer greater detail about the instrumental variable selection procedure outlined in Steps 1 and 2 in Appendix . In general, for each \(\widehat{X}^{(i)}, i \in \lbrace 1, \dots , M\rbrace \) , we can use this procedure to select a set of strong, excluded instruments, \(S_i\) .
| [1] | [[624, 627]] | https://openalex.org/W2163162137 |
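The "iterate until the selection ceases to change" idea can be sketched generically; this is only an illustration (LassoCV is one possible selector, and the exclusion screening of step 2 is abstracted away), not the authors' exact estimator:

```python
from sklearn.linear_model import LassoCV

def select_instruments(x_hat, V, max_rounds=10):
    # Repeatedly regress the mismeasured regressor on the surviving
    # candidate instruments with a cross-validated lasso; keep the
    # nonzero-coefficient columns; stop once the set is stable.
    keep = list(range(V.shape[1]))
    for _ in range(max_rounds):
        fit = LassoCV(cv=5).fit(V[:, keep], x_hat)
        new = [keep[j] for j, c in enumerate(fit.coef_) if c != 0.0]
        if new == keep:
            break
        keep = new
        if not keep:        # nothing survived: no usable instruments
            break
    return keep
```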
89d1f4ac-cba7-496d-9cf2-bc99a6264f83 | The problem of measurement error has been studied extensively in the econometrics literature. In regression models, measurement error in independent covariates is a form of endogeneity [1]} and is known to lead to biased coefficient estimates, not only for the mis-measured covariate, but also for coefficients associated with other, precisely measured covariates appearing in the same regression (unless the precisely measured covariates are strictly independent of the measurement error). In contrast to the common (mis-held) belief that measurement error only leads to attenuation of coefficient on the mis-measured covariate (i.e., bias toward zero), the actual direction of bias is difficult to anticipate, particularly as the econometric specification or the structure of the measurement error grow more complicated [2]}, [3]}, [4]}. In general, ignoring measurement error may lead to errors in sign, magnitude, and statistical significance of coefficient estimates.
| [1] | [[185, 188]] | https://openalex.org/W4248069063 |
8354c962-e11e-4acf-a635-dcb686b9a337 | Now consider the SIMEX correction procedure [1]}. In the simulation step, SIMEX creates \(\widehat{X_1}^{(\lambda )} = \widehat{X_1} + \sqrt{\lambda }z = X_1+e+\sqrt{\lambda }z\) , where \(z \sim N(0, \sigma _e^2)\) , and thereby introduces more measurement error. Note that \(Var(\widehat{X_1}^{(\lambda )}) = \sigma _1^2+(1+\lambda )\sigma _e^2\) , and \(Cov(X_2,\widehat{X_1}^{(\lambda )})=\sigma _{2e}\) because \(z\) is independently generated. Following the same derivation above, we know that regressing \(Y\) on \(\lbrace \widehat{X_1}^{(\lambda )},X_2\rbrace \) , we would have \(\widehat{\beta _2}^{(\lambda )} = \beta _2 - \beta _1 \frac{\sigma _{2e}\sigma _1^2}{\sigma _2^2 \left(\sigma _1^2 + (1+\lambda )\sigma _e^2\right) - \sigma _{2e}^2}\) , or equivalently, \(|\widehat{\beta _2}^{(\lambda )} - \beta _2| = \left|\beta _1 \frac{\sigma _{2e}\sigma _1^2}{\sigma _2^2 \left(\sigma _1^2 + (1+\lambda )\sigma _e^2\right) - \sigma _{2e}^2}\right|\) . In the extrapolation step, SIMEX estimates \(\widehat{\beta _2}^{(-1)}\) , i.e., the coefficient that would be obtained had there been no measurement error (note that \(\widehat{\beta _2}^{(-1)} \equiv \widehat{\beta _2}^{SIMEX}\) ). Accordingly, \(|\widehat{\beta _2}^{(-1)} - \beta _2| = \left|\beta _1 \frac{\sigma _{2e}\sigma _1^2}{\sigma _2^2 \sigma _1^2 - \sigma _{2e}^2}\right|\) .
| [1] | [[44, 47]] | https://openalex.org/W2061250208 |
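The simulation and extrapolation steps are mechanical enough to sketch; the \(\lambda\) grid and the quadratic extrapolant are conventional SIMEX choices assumed here, not taken from the row:

```python
import numpy as np

def ols_beta2(y, x1, x2):
    # OLS coefficient on x2 in a regression of y on (1, x1, x2).
    X = np.column_stack([np.ones_like(y), x1, x2])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

def simex_beta2(y, x1_hat, x2, sigma_e,
                lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
    rng = np.random.default_rng(seed)
    lams, betas = [0.0], [ols_beta2(y, x1_hat, x2)]
    for lam in lambdas:
        # Simulation step: inflate the measurement-error variance to
        # (1 + lambda) * sigma_e^2, averaging over replicates.
        reps = [ols_beta2(y, x1_hat + np.sqrt(lam) * sigma_e
                          * rng.standard_normal(len(y)), x2)
                for _ in range(n_rep)]
        lams.append(lam); betas.append(np.mean(reps))
    # Extrapolation step: fit beta2(lambda), evaluate at lambda = -1.
    return np.polyval(np.polyfit(lams, betas, 2), -1.0)
```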
3397cb6e-4673-44c8-83ce-2870457214c6 | Equalization in hearing devices is commonly performed using a single loudspeaker [1]}, i.e., a single equalization filter is computed to match the sound pressure of the aided ear and the open ear. Computing this equalization filter usually requires the inversion of the (estimated) acoustic transfer function (ATF) between the hearing device loudspeaker and the eardrum. However, since this ATF typically has zeros inside and outside the unit circle, perfect inversion with a stable and causal filter cannot be achieved [2]}, [3]}. Hence, approximate solutions are required to obtain a good equalization filter when using a single loudspeaker [4]}, [5]}, [6]}, [7]}, [8]}, e.g., equalizing only the minimum-phase component [6]}, [4]} or by including a so-called acausal delay [7]}, [8]}. On the contrary, using multiple loudspeakers perfect equalization can be achieved when the conditions of the multiple-input/output inverse theorem (MINT) are satisfied [3]}. Briefly, MINT states that perfect inversion of a multi-channel system can be achieved if all channels are co-prime, i.e., they do not share common zeros, and the equalization filters are of sufficient length. However, since multi-loudspeaker equalization using MINT is known to be very sensitive to small changes in the ATFs [14]}, regularization is commonly applied to increase the robustness [15]}, [16]} or other optimization criteria are considered [17]}, [18]}. Multi-loudspeaker equalization for acoustic transparency in hearing devices was considered in [19]}, where the equalization filters were shown to exhibit common zeros, rendering the application of MINT difficult.
| [1] | [[81, 84]] | https://openalex.org/W2617383836 |
0a90fe40-d9d9-4a25-bb37-ffb67be73a72 | Equalization in hearing devices is commonly performed using a single loudspeaker [1]}, i.e., a single equalization filter is computed to match the sound pressure of the aided ear and the open ear. Computing this equalization filter usually requires the inversion of the (estimated) acoustic transfer function (ATF) between the hearing device loudspeaker and the eardrum. However, since this ATF typically has zeros inside and outside the unit circle, perfect inversion with a stable and causal filter cannot be achieved [2]}, [3]}. Hence, approximate solutions are required to obtain a good equalization filter when using a single loudspeaker [4]}, [5]}, [6]}, [7]}, [8]}, e.g., equalizing only the minimum-phase component [6]}, [4]} or by including a so-called acausal delay [7]}, [8]}. On the contrary, using multiple loudspeakers perfect equalization can be achieved when the conditions of the multiple-input/output inverse theorem (MINT) are satisfied [3]}. Briefly, MINT states that perfect inversion of a multi-channel system can be achieved if all channels are co-prime, i.e., they do not share common zeros, and the equalization filters are of sufficient length. However, since multi-loudspeaker equalization using MINT is known to be very sensitive to small changes in the ATFs [14]}, regularization is commonly applied to increase the robustness [15]}, [16]} or other optimization criteria are considered [17]}, [18]}. Multi-loudspeaker equalization for acoustic transparency in hearing devices was considered in [19]}, where the equalization filters were shown to exhibit common zeros, rendering the application of MINT difficult.
| [4] | [[643, 646], [729, 732]] | https://openalex.org/W2738284403 |
83b37ba1-14c9-446e-a0d7-cbd30b7188d5 | Equalization in hearing devices is commonly performed using a single loudspeaker [1]}, i.e., a single equalization filter is computed to match the sound pressure of the aided ear and the open ear. Computing this equalization filter usually requires the inversion of the (estimated) acoustic transfer function (ATF) between the hearing device loudspeaker and the eardrum. However, since this ATF typically has zeros inside and outside the unit circle, perfect inversion with a stable and causal filter cannot be achieved [2]}, [3]}. Hence, approximate solutions are required to obtain a good equalization filter when using a single loudspeaker [4]}, [5]}, [6]}, [7]}, [8]}, e.g., equalizing only the minimum-phase component [6]}, [4]} or by including a so-called acausal delay [7]}, [8]}. On the contrary, using multiple loudspeakers perfect equalization can be achieved when the conditions of the multiple-input/output inverse theorem (MINT) are satisfied [3]}. Briefly, MINT states that perfect inversion of a multi-channel system can be achieved if all channels are co-prime, i.e., they do not share common zeros, and the equalization filters are of sufficient length. However, since multi-loudspeaker equalization using MINT is known to be very sensitive to small changes in the ATFs [14]}, regularization is commonly applied to increase the robustness [15]}, [16]} or other optimization criteria are considered [17]}, [18]}. Multi-loudspeaker equalization for acoustic transparency in hearing devices was considered in [19]}, where the equalization filters were shown to exhibit common zeros, rendering the application of MINT difficult.
| [5] | [[649, 652]] | https://openalex.org/W1958699567 |
45c9ea88-d570-458b-bf09-3b7292ffc735 | In this paper we propose a unified procedure to design an equalization filter that can be applied when using either a single loudspeaker or multiple loudspeakers to achieve acoustic transparency. The equalization filter is computed by minimizing a least-squares cost function, where we show that for the considered scenario the multi-loudspeaker system exhibits common zeros. Since these common zeros are, however, exactly known, we propose to exploit this knowledge and reformulate the optimization problem accordingly. In order to account for potential non-minimum phase components, we propose to incorporate an acausal delay in the filter computation, similarly as proposed for single-loudspeaker equalization in [1]}, [2]}. Furthermore, to counteract comb-filtering effects we propose a frequency-dependent regularization term to reduce the hearing device playback when the leakage signal and the desired signal at the eardrum are of similar magnitude, similarly as proposed for single-loudspeaker equalization in [1]}. While regularization can also be used to increase the robustness of the equalization filters to unknown acoustic transfer functions, in this paper we propose to improve the robustness by considering multiple sets of measurements in the optimization. Although some of these ideas were already presented in [4]}, [1]}, the main objective of this paper is to present a unified procedure that can be used both for single-loudspeaker equalization and multi-loudspeaker equalization.
<FIGURE> | [2] | [[722, 725]] | https://openalex.org/W2938342388 |
d721e6ce-e0fa-4058-ae5f-2d8f3a82adaf | where \(M_t = \frac{\delta }{1 - \delta }(1 - \delta ^{t-1})\) . The variable
\(\overline{S_{M,t}}(\theta ) =\frac{1}{M_t}\sum _{i=1}^{t-1} \delta ^i S_k(y_{t - i}, \theta )\) can be interpreted as an estimator of the expected score at time \(t\) (or equivalently \(\overline{L_{M,t}}(\theta )\) can be interpreted as an estimator of the expected loss at time \(t\) ).
\(M_t\) is a measure of the relevance of information available at time \(t\) . This generalises the idea of choosing the model weights so that the simple average of the scores is maximised, as proposed in [1]}. [2]} justify the simple average for cases when the data generating process is ergodic, which guarantees the existence of an optimal model pool. The use of an average can also be justified under the simpler condition that the scores are ergodic. We consider weighted averages to be more appropriate and robust than simple averages in time series problems (the simple average is still available by choosing \(\delta =1\) ), for the reason that only more recent information may be useful to predict the expected score.
A similar, more informal, argument is made by [3]} to justify taking an exponentially weighted moving average of forecast errors.
| [3] | [[1142, 1145]] | https://openalex.org/W3125775736 |
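The estimator \(\overline{S_{M,t}}\) reduces to a few lines; the oldest-first ordering of the score array is an assumed convention:

```python
import numpy as np

def discounted_mean_score(scores, delta):
    # scores[k] is S(y_{k+1}, theta) for k = 0,...,t-2 (oldest first);
    # returns (1/M_t) * sum_{i=1}^{t-1} delta^i * S(y_{t-i}, theta),
    # where M_t = delta/(1-delta) * (1 - delta^{t-1}) is the geometric
    # sum of the weights.
    if delta == 1.0:
        return float(np.mean(scores))  # simple average, as noted above
    i = np.arange(1, len(scores) + 1)
    M_t = delta / (1.0 - delta) * (1.0 - delta ** len(scores))
    return float(np.sum(delta ** i * np.asarray(scores)[::-1]) / M_t)

print(discounted_mean_score([0.2, 0.5, 0.9], delta=0.8))
```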
b3934a2e-b113-47bf-9f22-448c1ffc49e4 | Knowledge graph embedding (KGE) successfully handles the problems posed by the symbolic nature of various KGs. In KGE, the components of a KG, entities and relations, are embedded into a low-dimensional continuous vector space, while specific properties of the original graph are preserved. Accordingly, the inherent structure of the KG is preserved, while its manipulation is simplified. KGE has recently attracted increasing interest in knowledge base completion and inference, with progressive advancement from the translational models (TransE[1]}, TransH[2]}, and DistMult[3]}) to the recent deep CNN models (e.g., ConvE[4]} and ConvKB[5]}). However, in these embedding models, each triplet is processed independently, resulting in the loss of potential information from the knowledge base: rich semantic and latent relationships, inherently implicit in the local neighborhood surrounding a triplet, are not exploited.
| [4] | [[613, 616]] | https://openalex.org/W2964116313 |
07f46848-7726-4a44-b840-8de87afd76fc | Translation-based embedding models are a popular form of representation model. While translational models learn representations using simple operations and limited parameters, they produce low-quality representations. The shortcomings of translation-based models, however, limit their practicality as knowledge completion algorithms. In contrast, convolution-based models learn more expressive representations due to their parameter efficiency and consideration of complex relations. Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. Dettmers et al. [1]} proposed ConvE, the first model applying a CNN to the KB completion task. ConvE uses stacked 2D convolutional filters on reshapings of entity and relation representations, thus increasing their expressive power while remaining parameter efficient. ConvKB [2]} is another convolution-based method, which applies convolutional filters of width l on the stacked subject, relation and object embeddings to compute the score.
| [2] | [[961, 964]] | https://openalex.org/W2774837955 |
02677ea2-29b3-4d8d-b019-f5c1bbeba1e6 | FB15k-237. FB15K [1]} includes a subset of knowledge base triplets originally derived from Freebase. The dataset consists of 310,116 triplets with 14,541 entities and 237 relations. As shown in Table 1, the dataset is randomly split. FB15k-237 consists of textual mentions of Freebase entity pairs and knowledge base relation triplets [2]}.
| [2] | [[342, 345]] | https://openalex.org/W2250184916 |
0eb174c2-8a6b-4680-9137-5f4474b40089 | \(\bullet \) Translation-based methods: Relatively simple vector-based methods. For example, TransE[1]},
TransR[2]}, DistMult[3]} and ComplEx[4]}.
| [1] | [[100, 103]] | https://openalex.org/W2283196293 |
40af0db5-367b-4f05-a3bd-f121a6c4d909 | \(\bullet \) Translation-based methods: Relatively simple vector-based methods. For example, TransE[1]},
TransR[2]}, DistMult[3]} and ComplEx[4]}.
| [2] | [[112, 115]] | https://openalex.org/W1426956448 |
eb8db84d-a5b7-4b37-8c70-747dd9f43c52 | \(\bullet \) Translation-based methods: Relatively simple vector-based methods. For example, TransE[1]},
TransR[2]}, DistMult[3]} and ComplEx[4]}.
| [3] | [[126, 129]] | https://openalex.org/W1533230146 |
998b9c85-06ad-464c-a9f8-f1b537342885 | \(\bullet \) Deep Neural network-based methods: A series of deep non-linear neural network-based
methods, including convolution based models (including ConvE[1]}, ConvKB[2]} and ConvR[3]}), and
graph neural network based models (including R-GCN[4]}, CompGCN[5]} and SACN[6]}).
| [4] | [[245, 248]] | https://openalex.org/W2604314403 |
2be163f7-75fc-46b2-a316-eb435c7655c4 | (i) is the standard local well-posedness result when \(p<1+\frac{4}{d}\) , see [1]}, Chapter 3 and globality comes from the conservation of mass. In the critical case the global theory is more involved, see [2]}. The ill-posedness part is proven in [3]}, [4]}. (ii) is the content of [5]}. For (iii) see [6]} and for (iv) see [7]}. For dimension 2 see [8]}, [9]}.
| [8] | [[352, 355]] | https://openalex.org/W3133600443 |
acac1fe4-46cf-49d3-ab41-445c63b6bbab | and thanks to the gain in controlling the space time norms of \(u_L\) we can expect to solve this problem in \(H^s\) . For an illustration of the method see [1]}. For probabilistic well-posedness of (REF ) below the scaling regularity see [2]}.
| [1] | [[158, 161]] | https://openalex.org/W3102383043 |
48f03293-8678-4895-8925-fc8648cb3f87 | We follow the lines of [1]} where a similar result is proven. Let us write
\(w-w_N &=z_N-(\operatorname{id}-\mathbf {S}_N)(v_N) \\& = z_N - \mathbf {P}_{>N}(v_L) - \mathbf {P}_{>N}(w_N)\,,\)
where
\(z_N(t) := w(t)-\mathbf {S}_Nw_N(t) + (\operatorname{id}-\mathbf {S}_N)v_L\,,\)
and satisfies
\(z_N(t_0)=\mathbf {P}_{>N}v(t_0)\,.\)
We recall that the local well-posedness theory developed in Proposition REF applies verbatim to equation (REF ) uniformly in \(N\) . For \(\lambda \) large enough, the functions \(v_N=v_L+w_N\) exist on the time interval \([t_0,t]\) and we have \(\Vert w_N\Vert _{Y^{\sigma }_{[t_0,t]}} \leqslant C(t_0,t)\) . Then the Bernstein inequality ensures that \(\Vert \mathbf {P}_{>N}w_N\Vert _{L^{\infty }([t_0,t],\mathcal {H}^{\sigma ^{\prime }})} \leqslant C(t_0,t)N^{\sigma ^{\prime }-\sigma }\) .
From the Bernstein inequality it follows that since \(v(t_0)\in \mathcal {H}^{\sigma }\) we have \(\Vert \mathbf {P}_{>N}v_L\Vert _{L^{\infty }([t_0,t],\mathcal {H}^{\sigma ^{\prime }})}\leqslant C(t_0,t)N^{\sigma ^{\prime }-\sigma }\) .
We are left with estimating the \(Y^{\sigma ^{\prime }}_{[t_0,t]}\) norm of \(z_N\) . In order to do so, we observe that \(z_N\) satisfies:
\(i\partial _tz_N-Hz_N &=\cos (2t)^{\frac{d}{2}(p-1)-2}\left(F(v_L+w)-\mathbf {S}_NF(v_L+w-z_N)\right) \\&=\cos (2t)^{\frac{d}{2}(p-1)-2}(\operatorname{id}-\mathbf {S}_N)F(v_L+w)\nonumber \\& + \cos (2t)^{\frac{d}{2}(p-1)-2}\mathbf {S}_N \left(F(v_L+w)-F(v_L+w-z_N)\right)\nonumber \,,\)
where \(F\) is defined by \(F(X)=X|X|^{p-1}\) , the nonlinear term. Using the decomposition in the right-hand side of (REF ) and the Strichartz estimates from Proposition REF and a local existence time associated to \(w(t_0)\) given by Lemma REF , we obtain that for \(\delta \leqslant \tau \) :
\(\Vert z_N\Vert _{Y^{\sigma ^{\prime }}_{[t_0,t_0+\delta ]}}&\leqslant \Vert \mathbf {P}_{>N}w(t_0)\Vert _{\mathcal {H}^{\sigma ^{\prime }}} \\&+ \Vert \cos (2t)^{\frac{d}{2}(p-1)-2}(\operatorname{id}-\mathbf {S}_N)F(v_L+w)\Vert _{\tilde{Y}^{\sigma ^{\prime }}_{t_0,[t_0+\delta ]}} \\&+ \Vert \cos (2t)^{\frac{d}{2}(p-1)-2}\mathbf {S}_N \left(F(v_L+w)-F(v_L+w-z_N)\right)\Vert _{\tilde{Y}^{\sigma ^{\prime }}_{([t_0,t_0+\delta ]}} \,.\)
We now deal with each term (REF ), (), (). For (REF ), by Bernstein's inequality we have
\(\Vert \mathbf {P}_{>N}w(t_0)\Vert _{\mathcal {H}^{\sigma ^{\prime }}}\leqslant CN^{\sigma ^{\prime } - \sigma } \Vert w(t_0)\Vert _{\mathcal {H}^{\sigma }}\,.\)
Using the local theory estimates from Lemma REF and Bernstein's inequality, we obtain
\(\Vert \cos (2t)^{\frac{d}{2}(p-1)-2}(\operatorname{id}-\mathbf {S}_N)F(v_L+w)\Vert _{\tilde{Y}^{\sigma ^{\prime }}_{[t_0,t_0+\delta ]}} \leqslant CN^{\sigma ^{\prime }-\sigma } \left(\frac{\pi }{4}-t\right)^{-\beta }\delta ^{\alpha } \lambda ^p\,,\)
and finally with the estimates from Lemma REF again we also have:
\(\Vert \cos (2t)^{\frac{d}{2}(p-1)-2}\mathbf {S}_N& \left(F(v_L+w)-F(v_L+w-z_N)\right)\Vert _{\tilde{Y}^{\sigma ^{\prime }}_{[t_0,t_0+\delta ]}} \\&\leqslant C\lambda ^{p-1} \left(\frac{\pi }{4}-t\right)^{-\beta }\delta ^{\alpha } \Vert z_N\Vert _{Y^{\sigma ^{\prime }}([t_0,t_0+\delta ])}\,.\)
Combining these estimates and for \(\delta \) small enough we have
\(\Vert z_N\Vert _{Y^{\sigma }_{t_0,\delta }} \leqslant \frac{1}{2} \Vert z_N\Vert _{Y^{\sigma }_{t_0,\delta }} + \frac{C(t)\Vert w(t_0)\Vert _{\mathcal {H}^{\sigma }}}{2}N^{\sigma ^{\prime }-\sigma }\,.\)
Finally we have proved that \(\Vert z_N\Vert _{Y^{\sigma }([t_0,t_0+\delta ])} \leqslant N^{\sigma ^{\prime }-\sigma }C(t)\Vert w(t_0)\Vert _{\mathcal {H}^{\sigma }}\) . We need to iterate this estimate in time until we reach time \(t\) and we need to check that only a finite number of steps are required. We remark that the local existence time \(\tau \) can be chosen uniformly in \([t_0,t]\) thanks to the remark following Proposition REF and that \(\delta \) can be chosen only depending on \(t\) and \(\lambda \) , thus is uniform in \([t_0,t]\) . Repeating the argument \(\lfloor \frac{t}{\tau }\rfloor \) times gives
\(\Vert z_N\Vert _{Y^{\sigma }_{[t_0,t]}}&\leqslant N^{\sigma ^{\prime } - \sigma } C(t)\sum _{n=1}^{\lfloor \frac{t}{\tau }\rfloor } \Vert w(t_0+n\tau )\Vert _{\mathcal {H}^{\sigma }}\\&\leqslant N^{\sigma ^{\prime }- \sigma }C(t)\sum _{n=1}^{\lceil \frac{t}{\tau }\rceil } 2^n\Vert w(t_0)\Vert _{\mathcal {H}^{\sigma }}\,,\)
where we used that on an interval of local well-posedness the size of the function in \(\mathcal {H}^{\sigma }\) does not more than double. Since \(\tau \) depends only on \(t\) and the parameters, we obtain the required estimate on \(z_N\) so that finally
\(\Vert w-w_N\Vert _{L^{\infty }([t_0,t],\mathcal {H}^{\sigma ^{\prime }})} \leqslant \Vert (\operatorname{id}-\mathbf {S}_N)(v_L+w_N)\Vert _{Y^{\sigma ^{\prime }}_{t_0,t}}+\Vert z_N\Vert _{Y^{\sigma ^{\prime }}_{t_0,t}} \leqslant N^{\sigma ^{\prime }-\sigma }C(t_0,t)\)
as claimed.
| [1] | [[23, 26]] | https://openalex.org/W1990089063 |
1da3f863-b9bb-41fd-b8d8-d63360b6e9f8 | which converge for \(i_1>1\) ; see, e.g. [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. The
multiple zeta values are limits of the finite harmonic sums
\(A_{(i_1,i_2,\dots ,i_k)}(n)=\sum _{n\ge n_1>n_2>\dots >n_k\ge 1} \frac{1}{n_1^{i_1}n_2^{i_2}\cdots n_k^{i_k}} ,\)
| [2] | [[47, 50]] | https://openalex.org/W2000117364 |
a2ef7c54-e35e-4133-bb10-5a3021c4a056 | which converge for \(i_1>1\) ; see, e.g. [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. The
multiple zeta values are limits of the finite harmonic sums
\(A_{(i_1,i_2,\dots ,i_k)}(n)=\sum _{n\ge n_1>n_2>\dots >n_k\ge 1} \frac{1}{n_1^{i_1}n_2^{i_2}\cdots n_k^{i_k}} ,\)
| [3] | [[53, 56]] | https://openalex.org/W2033549448 |
0c0a8326-be6a-4b73-b50a-df1739ef90a0 | which converge for \(i_1>1\) ; see, e.g. [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. The
multiple zeta values are limits of the finite harmonic sums
\(A_{(i_1,i_2,\dots ,i_k)}(n)=\sum _{n\ge n_1>n_2>\dots >n_k\ge 1} \frac{1}{n_1^{i_1}n_2^{i_2}\cdots n_k^{i_k}} ,\)
| [7] | [[77, 80]] | https://openalex.org/W1967513586 |
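The finite sums \(A_{(i_1,\dots ,i_k)}(n)\) defined above can be computed by a short recursion on the outermost index; a float-based sketch:

```python
def A(indices, n):
    # A_{(i1,...,ik)}(n) = sum over n >= n1 > n2 > ... > nk >= 1 of
    # 1/(n1^i1 * ... * nk^ik); recurse on n1, whose minimum value is k.
    if not indices:
        return 1.0
    i1, rest = indices[0], indices[1:]
    return sum(A(rest, n1 - 1) / n1 ** i1
               for n1 in range(len(rest) + 1, n + 1))

print(A((2,), 1000))   # -> zeta(2) = pi^2/6 ~ 1.6449 as n grows
print(A((2, 1), 300))  # -> zeta(2,1) = zeta(3) ~ 1.2021 (Euler)
```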
f5bb7c1e-b33f-4a9a-8c1e-5a848d1daa92 | The main features of the hyperbolic-elliptic coupled system (REF )-(REF ) on the half line
are different from the previous study on scalar viscous Burgers equation on the half line in [1]},
or the Cauchy problem (REF ) (equivalent to the scalar form (REF ) with convolution),
due to the following reasons:
| [1] | [[184, 187]] | https://openalex.org/W1986004380 |
a6d84808-d246-4a6e-8009-84259ac7f85e | The special case: \(0=u_-<u_+\) has been considered by Ruan and Zhu in [1]},
so we will focus on the case of \(0<u_-<u_+\) .
The case \(u_->0\) means that the fluid blows in through the boundary \(x=0\) .
Hence, this initial boundary problem is called the in-flow problem.
It is worth noticing that the boundary condition \(u(0,t)=u_-\) is necessary for the well-posedness of the problem since the characteristic speed of the first hyperbolic equation (REF )\(_1\) is positive at boundary \(x=0\) .
Moreover, for the second elliptic equation (REF )\(_2\) , we need boundary condition on \(q(0,t)\) to ensure the well-posedness of the problem (REF ).
From Lemma REF \(\mathrm {(ii)}\) with \(k=1\) , we note that the boundary value of \(q(x,t)\) can be defined as \(q(0,t)=0\) .
Therefore, in the case of \(0<u_-<u_+\) , the problem (REF )-(REF ) is rewritten as
\( {\left\lbrace \begin{array}{ll}u_t+(\frac{1}{2}u^2)_x+q_x=0, \ \ \ \ x\in \mathbb {R}_+, \ \ \ \ t>0, \\[1mm]-q_{xx}+q+u_x=0, \ \ \ \ x\in \mathbb {R}_+, \ \ \ \ t>0,\\[1mm]u(0,t)=u_-, \ \ \ \ q(0,t)=0, \ \ \ \ t\ge 0,\\[1mm]u(x,0)=u_0(x)={\left\lbrace \begin{array}{ll}=u_-, \ \ \ \ x=0,\\[2mm]\rightarrow u_+, \ \ \ \ x\rightarrow +\infty .\end{array}\right.}\end{array}\right.}\)
| [1] | [[72, 75]] | https://openalex.org/W281580366 |
76542b39-6b27-4e05-a7d4-d92e1eb56e74 | In order to find a policy that maximizes the weighted value function defined in (REF ), we use the value iteration algorithm [1]}. In this algorithm, we proceed backwards: we start by determining the optimal action at time slot \(T\) for each state, and successively consider the previous stages, until reaching time slot 0 (see Algorithm 1).
Algorithm 1: Finite-Horizon Value Iteration
Initialization: for each state \(s\) : \(u_T^*(s) \leftarrow 0\) , \(\overline{u}^*_T(s) \leftarrow 0\) , \(u_{\xi ,T}^*(s) \leftarrow 0\)
\(t\leftarrow T-1\)
While \(t\geqslant 0\) :
for each state \(s\) : update \(u_t^*(s)\) , \(\overline{u}_t^*(s)\) , and \({u}_{\xi , t}^*(s)\) according to (REF ), (), and (), respectively
\(t \leftarrow t-1\)
EndWhile
| [1] | [[125, 128]] | https://openalex.org/W2119567691 |
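A generic backward pass matching the structure of Algorithm 1 (a single value function is shown for brevity; the row tracks three coupled ones, and the MDP arrays here are illustrative):

```python
import numpy as np

def finite_horizon_value_iteration(P, r, T):
    # P[a]: |S| x |S| transition matrix of action a; r[a]: reward
    # vector of action a. Proceed backwards from slot T, where the
    # terminal values are initialized to zero.
    n_actions, n_states = len(P), P[0].shape[0]
    u = np.zeros((T + 1, n_states))
    policy = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):
        q = np.stack([r[a] + P[a] @ u[t + 1] for a in range(n_actions)])
        policy[t] = q.argmax(axis=0)  # optimal action per state at slot t
        u[t] = q.max(axis=0)
    return u, policy
```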
1c05ecca-522f-4de8-8695-9e3a42e3c5d2 | The learning algorithm converges to the optimal state-action value function when each state-action pair is performed infinitely often and when the learning rate parameter satisfies, for each \((s_t,\ell )\) pair (the proof is given in [1]}, [2]} and is skipped here for brevity),
\(\sum _{n=1}^{\infty }{\alpha _n}(s_t,\ell )= \infty , \quad \mbox{ and } \quad \sum _{n=1}^{\infty }{\alpha _n^2(s_t,\ell )} < \infty .\)
| [1] | [[235, 238]] | https://openalex.org/W32403112 |
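The two conditions on \(\alpha _n\) are easy to illustrate numerically: \(\alpha _n = 1/n\) satisfies both, while a constant rate fails the second (per-pair visit counts are abstracted away here):

```python
import numpy as np

n = np.arange(1, 10**6 + 1)
alpha = 1.0 / n                   # a valid schedule per the conditions
print(alpha.sum())                # ~ log(10^6): diverges as n -> infinity
print((alpha ** 2).sum())         # ~ pi^2/6 = 1.6449...: finite

const = np.full_like(alpha, 0.1)  # constant rate
print((const ** 2).sum())         # grows without bound as n increases
```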
dc982ad5-844d-475f-bdaf-0e0dcd2be0d3 | As was discussed in the previous section, the replica method enables one to study the critical behavior of the RIM based upon an effective field theory with cubic symmetry [1]}. It is worth mentioning that RG methods do not fix the concentration of impurities at which the critical behavior corresponding to the RIM starts to be realized. Therefore, in what follows we assume the impurity concentration to be very (infinitely) small. The relevant Landau-Wilson action is as follows:
\(&&S_d = \int d\vec{x}\Biggl \lbrace \frac{1}{2} \Big [\left[\partial \varphi _{0}\right]^2+ \psi (x)\varphi _{0}^2\nonumber \\ &&\qquad \qquad \qquad \qquad \qquad +m_0^2\varphi _{0}^2\Big ]+ \frac{1}{4!} g_{0}\left[\varphi _{0}^2\right]^2\Biggr \rbrace ,\)
| [1] | [[165, 168]] | https://openalex.org/W2022068533 |
425170c3-f435-44b2-b01a-3ef0e5986179 | Baseline-MSE: a similar approach to [1]} by combining MSE loss with GAN. A 3D UNet (identical to the 2D UNet used in this work, with all the 2D convolutional and deconvolutional layers replaced by their 3D counterparts) and LSGAN are used for fair comparison;
| [1] | [[36, 39]] | https://openalex.org/W2617128058 |
84e36b6d-8e0c-4f2a-82d3-f22d7ed6cbab | Baseline-Perceptual: a similar approach to [1]} by combining perceptual loss with GAN. It is also based on our UNet and LSGAN infrastructure for fair comparison;
| [1] | [[43, 46]] | https://openalex.org/W2743780012 |
5e24cba0-3506-4c2f-8a71-c549ce477ef4 | The end-to-end flavor system is built using ESPnet [1]}. The data
augmentation strategies experimented with in Kaldi are reused. Convolution-augmented
Transformer (Conformer) is applied with relative positional encoding-based self
attention [2]}. The encoder is constructed using 12 layers of Conformer
blocks. Each block consists of a feed-forward module, a multi-head self
attention (MHSA) module and a convolution module followed by another feed-forward
module. The hidden dimension of the linear layers in feed-forward modules is
2048. The output dimension of each block and the dimension of the MHSA are both
256. Specifically, the number of heads in MHSA is 4. The kernel size of the
convolution module is 15.
| [1] | [[51, 54]] | https://openalex.org/W2962780374 |
e026a755-18d7-4a16-8992-57a1f48324ab | The end-to-end flavor system is built using ESPnet [1]}. The data
augmentation strategies experimented with in Kaldi are reused. Convolution-augmented
Transformer (Conformer) is applied with relative positional encoding-based self
attention [2]}. The encoder is constructed using 12 layers of Conformer
blocks. Each block consists of a feed-forward module, a multi-head self
attention (MHSA) module and a convolution module followed by another feed-forward
module. The hidden dimension of the linear layers in feed-forward modules is
2048. The output dimension of each block and the dimension of the MHSA are both
256. Specifically, the number of heads in MHSA is 4. The kernel size of the
convolution module is 15.
| [2] | [[236, 239]] | https://openalex.org/W3097777922 |
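The quoted encoder hyper-parameters can be written down concretely; torchaudio's Conformer is used purely for illustration here (it is not the ESPnet configuration of these rows and omits the relative positional self-attention variant):

```python
import torch
from torchaudio.models import Conformer

encoder = Conformer(
    input_dim=256,                  # block output / MHSA dimension
    num_heads=4,                    # heads in multi-head self attention
    ffn_dim=2048,                   # hidden size of feed-forward modules
    num_layers=12,                  # 12 Conformer blocks
    depthwise_conv_kernel_size=15,  # kernel size of the conv module
)
x = torch.randn(8, 120, 256)        # (batch, frames, features)
lengths = torch.full((8,), 120)
out, out_lengths = encoder(x, lengths)
print(out.shape)                    # torch.Size([8, 120, 256])
```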
7af7b854-9304-4c79-9392-fe0bcda1b57a | Let us make a couple of observations about types of \(X_{\overrightarrow{G}}(\mathbf {x},t)\) -equality that arise. By setting \(t=1\) , we know equal \(X_{\overrightarrow{G}}(\mathbf {x},t)\) means the underlying undirected graphs must have equal \(X_G(\mathbf {x})\) and the examples in Figure REF show two scenarios: either the underlying undirected graphs are isomorphic, or they are not isomorphic but have equal \(X_G(\mathbf {x})\) . The \(X_G(\mathbf {x})\) -equality implied by Figure REF (c) is the one given by Stanley in [1]}.
| [1] | [[536, 539]] | https://openalex.org/W2001124661 |
83bb5517-ecdf-4ffb-a193-5cc79681f970 | where the sum is over all \((P,\omega )\) -partitions \(f:P \rightarrow \mathbb {P}\) . When all the edges of \((P,\omega )\) are weak, so the sum is over \(P\) -partitions, we will denote the \(P\) -partition enumerator \(K_{(P,\omega )}(\mathbf {x})\) simply by \(K_P(\mathbf {x})\) or just \(K_P\) . Similarly, we will use \(\overline{K}_{P}\) to denote \(K_{(P,\omega )}(\mathbf {x})\) when all the edges are strict, thus enumerating strict \(P\) -partitions. Comparing (REF ) and (REF ) when \(P\) is the poset corresponding to a directed acyclic graph \(\overrightarrow{G}\) , we see that \(\overline{K}_{P}(\mathbf {x})\) is exactly the coefficient of the highest power of \(t\) in \(X_{\overrightarrow{G}}(\mathbf {x},t)\) . This connection between \(\overline{K}_{P}\) and \(X_{\overrightarrow{G}}(\mathbf {x}, t)\) has previously been mentioned in [1]} [2]}. As a corollary, if Conjecture REF is true, then so is Conjecture REF .
| [1] | [[868, 871]] | https://openalex.org/W3101753412 |
0244744e-95ea-46e5-b977-adb3b948fd11 | Theorem 2.5 ([1]}, [2]}, [3]})
Let \((P,\omega )\) be a labeled poset with \(|P|=n\) . Then
\(K_{(P,\omega )}= \sum _{\pi \in \mathcal {L}(P,\omega )} F_{\mathrm {des}(\pi ), n} = \sum _{\pi \in \mathcal {L}(P,\omega )} F_{\mathrm {co}(\pi )}\,.\)
| [1] | [[13, 16]] | https://openalex.org/W4238232089 |
85b485f8-ae95-47f8-be8c-834889677bc9 | Theorem 2.5 ([1]}, [2]}, [3]})
Let \((P,\omega )\) be a labeled poset with \(|P|=n\) . Then
\(K_{(P,\omega )}= \sum _{\pi \in \mathcal {L}(P,\omega )} F_{\mathrm {des}(\pi ), n} = \sum _{\pi \in \mathcal {L}(P,\omega )} F_{\mathrm {co}(\pi )}\,.\)
| [2] | [[19, 22]] | https://openalex.org/W2077309624 |
ccdfbda9-dd3e-42b8-8c6b-ef410ce00c69 | Recently, deep learning has also been explored for hand gesture recognition [1]}, [2]}, [3]}, [4]}, [5]}. One approach is to encode joint sequences into texture images and feed them into Convolutional Neural Networks (CNNs) in order to extract discriminative features for gesture recognition. Several methods [3]}, [5]}, [8]}, [9]} have been proposed along this line. However, these methods cannot effectively and efficiently express the dependency between joints, since hand joints are not distributed in a regular grid but in a non-Euclidean domain. To address this problem, graph convolutional networks (GCN) [10]}, [9]}, [12]}, which express the dependency among joints with a graph, have been proposed. For instance, the method in [10]} sets four types of edges to capture relationships between non-adjacent joints. However, the topology of the graph is fixed, which is ineffective for dealing with varying joint relationships across different hand gestures. A typical example is that the connection between the tip of the thumb and the tip of the forefinger in the gesture “Write" is likely to be strong, but this is not the case for the gestures “Prick" and “Tap". Modelling gesture-dependent collaboration among joints is especially important for robust hand gesture recognition. Furthermore, conventional ST-GCN has a limited receptive field in the temporal domain; hence, long-term temporal information cannot be effectively learned.
| [9] | [[324, 327], [620, 623]] | https://openalex.org/W2952200000 |
37c5bc5b-3872-4997-a677-b345d8dbd86f | Some related and representative works on hand gesture recognition are introduced in this section. They can be categorized into two approaches: handcrafted features-based [1]}, [2]}, [3]}, [4]}, [5]} and deep learning-based [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
| [5] | [[194, 197]] | https://openalex.org/W2467634805 |
08e07be1-7908-4868-b7d4-aa987b1f6439 | Some related and representative works on hand gesture recognition are introduced in this section. They can be categorized into two approaches: handcrafted features-based [1]}, [2]}, [3]}, [4]}, [5]} and deep learning-based [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
| [7] | [[229, 232]] | https://openalex.org/W2792060773 |
c392a8e3-8bc4-4953-bf26-eb435c7655c4 | where \(\sum _{k}^{K_{v}}\mathbf {W_{k}f_{in}}\mathbf {A_{k}}\) captures the local structure of joints and their connected neighbors and \(\mathbf {W_{g}f_{in}}\mathbf {A_g}\) captures the global collaboration among all the joints. In this way, the proposed self-attention based GCN, termed SAGCN, can process both local and global features together. Finally, the spatial features at each time step are processed over time with convolution in the same way as the TCN in [1]}.
<FIGURE><FIGURE> | [1] | [[467, 470]] | https://openalex.org/W2963076818 |
c7eb5473-704c-4323-8427-128b59a7feec | The FPHA dataset [1]} contains 1175 sequences from 45 different gesture classes with high viewpoint, speed, intra-subject variability and inter-subject variability of style, viewpoint and scale. This dataset is captured in 3 different scenarios (kitchen, office and social) and performed by 6 subjects. Compared with the DHG 14/28 dataset, the FPHA dataset has 21 hand joints and the palm joint is missing. This is a challenging dataset due to the similar motion patterns and the involvement of many different objects. The same evaluation strategy as in [1]} is used.
| [1] | [[17, 20], [538, 541]] | https://openalex.org/W2605973302 |
465c0c52-e2f7-4f3f-bd7b-b02ee8748b7d | Our experimental apparatus, based on that of Zhong et al., is shown in Fig. REF [1]}. Charlie generates 1 ns pulses at a rate of 1 MHz that he sends to Alice and Bob through single-mode optical fibers. Alice and Bob use phase modulators to select which of the basis states they transmit back to Charlie. To maintain temporal overlap and phase stability between Alice and Bob's states, the geometry is chosen to be that of a Sagnac interferometer, in which Alice's and Bob's photons propagate in opposite directions around a loop. Charlie uses single-photon counting modules (SPCMs) to detect the photons that return to him (the detected signal level is kept to < 0.01 photons per pulse). To eliminate dark and background counts the photon arrival times are recorded with time-to-digital converters, and only photons that arrive within 2.4 ns of the expected time are counted.
<FIGURE> | [1] | [[81, 84]] | https://openalex.org/W2916781987 |
5ecdcc61-c67c-45c7-8262-d63d03b8dbd7 | Based on the observations above, we propose PhysChem, a novel neural architecture that captures and fuses physical and chemical information of molecules. PhysChem is composed of two specialist networks, namely a physicist network (PhysNet) and a chemist network (ChemNet), who understand molecules physically and chemically.PhysNet and ChemNet are novel architectures, not to be confused with previous works with similar or identical names [1]}, [2]}, [3]}. PhysNet is a neural physical engine that learns dominant conformations of molecules via simulating molecular dynamics in a generalized space. In PhysNet, implicit positions and momenta of atoms are initialized by encoding input features. Forces between pairs of atoms are learned with neural networks, according to which the system moves following laws of classic mechanics. Final positions of atoms are supervised with labeled conformations under spatial-invariant losses. ChemNet utilizes a message-passing framework [4]} to capture chemical characteristics of atoms and bonds. ChemNet generates messages from atom states and local geometries, and then updates the states of both atoms and bonds. Output molecular representations are merged from atomic states and supervised with labeled chemical / biomedical properties. Besides focusing on their own specialties, two networks also cooperate by sharing expertise: PhysNet consults the hidden representations of chemical bonds in ChemNet to generate torsion forces, whereas ChemNet leverages the local geometries of the intermediate conformations in PhysNet.
| [4] | [[977, 980]] | https://openalex.org/W2606780347 |
7e13ba0f-1369-4077-ad8a-cd57eb92524c | Molecular Representation Learning Early molecular fingerprints commonly encoded line or graph notations of molecules with rule-based algorithms [1]}, [2]}, [3]}. With the rapid development of deep learning, deep molecular representations gradually prevailed [4]}, [5]}, [6]}, [7]}. More recently, researchers started to focus on incorporating 3D conformations of molecules into their representations [8]}, [9]}, [10]}, [11]}. Models that leveraged 3D geometries of molecules generally performed better than those that simply used graph notations, whereas most 3D models required labeled conformations of the target molecules. This limited the applicability of these models. Among previous studies, message-passing neural networks (MPNNs) proposed a universal framework of encoding molecular graphs, which assumed that nodes in graphs (atoms in molecules) passed messages to their neighbors, and then aggregated received messages to update their states. A general message-passing layer calculated
\({\mathbf {m}}_i = \sum _{j \in {\mathcal {N}}(v)} M({\mathbf {h}}_i, {\mathbf {h}}_j, {\mathbf {e}}_{i,j}), \quad {\mathbf {h}}_i \leftarrow U({\mathbf {h}}_i, {\mathbf {m}}_i), \quad i \in {\mathcal {V}},\)
| [11] | [[414, 418]] | https://openalex.org/W2996443485 |
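One concrete reading of the displayed message-passing layer, with an MLP message function and a GRU update as the (unspecified) choices for \(M\) and \(U\):

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    def __init__(self, d_node, d_edge):
        super().__init__()
        # M(h_i, h_j, e_ij): an MLP on the concatenated inputs.
        self.msg = nn.Sequential(nn.Linear(2 * d_node + d_edge, d_node),
                                 nn.ReLU())
        # U(h_i, m_i): a GRU cell treating m_i as the input.
        self.upd = nn.GRUCell(d_node, d_node)

    def forward(self, h, edge_index, e):
        src, dst = edge_index                 # directed edges j -> i
        m_ij = self.msg(torch.cat([h[dst], h[src], e], dim=-1))
        m = torch.zeros_like(h).index_add_(0, dst, m_ij)  # sum over N(i)
        return self.upd(m, h)                 # h_i <- U(h_i, m_i)

layer = MPNNLayer(d_node=16, d_edge=4)
h = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
print(layer(h, edge_index, torch.randn(3, 4)).shape)  # (5, 16)
```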
930b1ed7-cd7f-4339-a799-95ebc09aabf3 | Neural Physical Engines Recent studies showed that neural networks are capable of learning annotated (or pseudo) potentials and forces in particle systems, which made fast molecular simulations [1]}, [2]} and protein-folding tasks [3]} possible. Notably, it was further shown in [4]}, [5]} that neural networks alone can simulate molecular dynamics for conformation prediction. As an instance, HamNet [4]} proposed a neural physical engine that operated on a generalized space, where positions and momentums of atoms were defined as high-dimensional vectors. In the engine, atoms moved following Hamiltonian Equations with parameterized kinetic, potential and dissipation functions. PhysNet in our work is a similar engine. Nevertheless, instead of learning parameterized energies and calculating their negative derivatives as forces, we directly parameterize the forces between each pair of atoms. In addition, HamNet considered gravitations and repulsions of molecules based on implicit positions, while it ignored the effects of chemical interactions: for example, the types of chemical bonds were ignored in the energy functions. PhysChem fixes this issue via the cooperation mechanism between two specialist networks. Specifically, PhysNet takes chemical expertise (the bond states) from ChemNet and introduces torsion forces, i.e. forces that origin from torsions in chemical bonds, into the dynamics. See Section REF for more details.
| [1] | [[195, 198]] | https://openalex.org/W2742127985 |
630ea363-72a1-4439-bd43-cdf33e53fd60 |
We then adopt the initialization method in [1]} to generate initial positions (\({\mathbf {q}}^{(0)}\) ) and momenta (\({\mathbf {p}}^{(0)}\) ) for atoms: a bond-strength adjacency matrix \({\mathbf {A}}\in {\mathbb {R}}^{n \times n}\) is estimated with sigmoid-activated FC layers on bond features, according to which a GCN captures the chemical environments of atoms (as \(\tilde{{\mathbf {v}}}\) ); an LSTM then determines unique positions for atoms, especially for those with identical chemical environments (carbons in benzene, for example). Denoted in formula, the initialization follows
\({\mathbf {A}}(i,j) = {\left\lbrace \begin{array}{ll}0, & (i, j) \notin {\mathcal {E}}\\\text{FC}_{\text{sigmoid}}\left({\mathbf {x}}^{\text{e}}_{i,j}\right), & (i, j) \in {\mathcal {E}}\end{array}\right.}, \qquad \tilde{{\mathbf {V}}} = \text{GCN}\left({\mathbf {A}}, {\mathbf {V}}^{(0)}\right),\)
\(\left\lbrace \left({\mathbf {q}}^{(0)}_i \oplus {\mathbf {p}}^{(0)}_i\right)\right\rbrace = \text{LSTM}\left(\left\lbrace \tilde{{\mathbf {v}}}_i\right\rbrace \right), \quad i \in {\mathcal {V}}\)
| [1] | [[44, 47]] | https://openalex.org/W3125513453 |
2e211245-9bd9-4efe-8e32-d548dffd84d9 |
Subsequently, ChemNet utilizes a similar architecture to [1]} to conduct message passing. Centric atoms aggregate the received messages with attention scores determined by the bond states:
\({\mathbf {m}}_i^{(l)} = \sum _{j \in {\mathcal {N}}(i)} \alpha ^{(l)}_{i,j} {\mathbf {m}}^{(l)}_{i,j}, \quad \left\lbrace \alpha ^{(l)}_{i,j} \ | \ j \in {\mathcal {N}}(i)\right\rbrace = \text{softmax}\left(\left\lbrace \text{FC}_{\text{}}\left({\mathbf {e}}^{(l)}_{i,j}\right) \ | \ j \in {\mathcal {N}}(i)\right\rbrace \right).\)
| [1] | [[58, 61]] | https://openalex.org/W2968734407 |
e23b5642-ef8a-4186-b303-e79b5545f60f | Baselines For conformation learning tasks, we compared PhysChem with i) a Distance Geometry [1]} method tuned with the Universal Force Fields (UFF) [2]}, which was implemented in the RDKit package (we use the 2020.03.1.0 version of the RDKit package at http://www.rdkit.org/) and is thus referred to as RDKit; ii) CVGAE and CVGAE+UFF [3]}, which learned to generate low-energy molecular conformations with deep generative graph neural networks (either with or without UFF tuning); iii) HamEng [4]}, which learned stable conformations via simulating Hamiltonian mechanics with neural physical engines. For property prediction tasks, we compared PhysChem with i) MoleculeNet, for which we reported the best performances achieved by methods collected in [5]} (before 2017); ii) 3DGCN [6]}, which augmented conventional GCNs with input bond geometries; iii) DimeNet [7]}, which conducted directional message-passing by representing pairs of atoms; iv) Attentive FP [8]}, which used local and global attentive layers to derive molecular representations; and v) CMPNN [9]}, which used communicative kernels to conduct deep message-passing. We conducted experiments with the official implementations of HamEng, CVGAE, Attentive FP and CMPNN; for the other baselines, as identical evaluation schemes were adopted, we referred to the reported performances in the corresponding citations and left unreported entries blank.
| [5] | [[747, 750]] | https://openalex.org/W2594183968 |
4c5c4ae4-1757-48ee-88b1-71f947d5bb89 | Baselines For conformation learning tasks, we compared PhysChem with i) a Distance Geometry [1]} method tuned with the Universal Force Fields (UFF) [2]}, which was implemented in the RDKit package (we use the 2020.03.1.0 version of the RDKit package at http://www.rdkit.org/) and is thus referred to as RDKit; ii) CVGAE and CVGAE+UFF [3]}, which learned to generate low-energy molecular conformations with deep generative graph neural networks (either with or without UFF tuning); iii) HamEng [4]}, which learned stable conformations via simulating Hamiltonian mechanics with neural physical engines. For property prediction tasks, we compared PhysChem with i) MoleculeNet, for which we reported the best performances achieved by methods collected in [5]} (before 2017); ii) 3DGCN [6]}, which augmented conventional GCNs with input bond geometries; iii) DimeNet [7]}, which conducted directional message-passing by representing pairs of atoms; iv) Attentive FP [8]}, which used local and global attentive layers to derive molecular representations; and v) CMPNN [9]}, which used communicative kernels to conduct deep message-passing. We conducted experiments with the official implementations of HamEng, CVGAE, Attentive FP and CMPNN; for the other baselines, as identical evaluation schemes were adopted, we referred to the reported performances in the corresponding citations and left unreported entries blank.
| [6] | [[777, 780]] | https://openalex.org/W2901003004 |
ea562589-9da9-4fa9-9e9b-6175e6d58602 | Baselines For conformation learning tasks, we compared PhysChem with i) a Distance Geometry [1]} method tuned with the Universal Force Fields (UFF) [2]}, which was implemented in the RDKit package (we use the 2020.03.1.0 version of the RDKit package at http://www.rdkit.org/) and is thus referred to as RDKit; ii) CVGAE and CVGAE+UFF [3]}, which learned to generate low-energy molecular conformations with deep generative graph neural networks (either with or without UFF tuning); iii) HamEng [4]}, which learned stable conformations via simulating Hamiltonian mechanics with neural physical engines. For property prediction tasks, we compared PhysChem with i) MoleculeNet, for which we reported the best performances achieved by methods collected in [5]} (before 2017); ii) 3DGCN [6]}, which augmented conventional GCNs with input bond geometries; iii) DimeNet [7]}, which conducted directional message-passing by representing pairs of atoms; iv) Attentive FP [8]}, which used local and global attentive layers to derive molecular representations; and v) CMPNN [9]}, which used communicative kernels to conduct deep message-passing. We conducted experiments with the official implementations of HamEng, CVGAE, Attentive FP and CMPNN; for the other baselines, as identical evaluation schemes were adopted, we referred to the reported performances in the corresponding citations and left unreported entries blank.
| [9] | [[1058, 1061]] | https://openalex.org/W3034516664 |
d71a7bf9-177c-48dc-a927-a98fd0ad2f41 | The training and inference of PhysChem and the baselines were conducted on a total of 8 NVIDIA Tesla P100 GPUs. We recorded the running times of PhysChem and of baselines including CVGAE [1]}, Attentive FP [2]} and HamNet [3]} on the conformation learning and property prediction tasks on QM9. The results are shown in Table REF . For HamNet, we separately reported the running time of its two modules, namely the Hamiltonian Engine (Ham. Eng.) and the Fingerprint Generator (FP Gen.). Our model displayed comparable efficiency to the introduced baselines on both tasks.
<TABLE> | [1] | [[184, 187]] | https://openalex.org/W3105259638 |
2d9e2d4d-88ce-464c-9fbb-0885d3923068 | in \(L^2({\mathbb {R}}^2)^2\) .
This unitary transformation is a two-dimensional
version of the Foldy-Wouthuysen-Tani transformation;
see [1]}, [2]}, [3]}.
As is well-known,
one can infer from (REF ) that the Dirac operator \({\mathbb {D}}_{m} \) is absolutely continuous and that its
spectrum is given by
\(\sigma ({\mathbb {D}}_{m} )=(-\infty , \, -m] \cup [m, \, \infty )\) .
| [2] | [[144, 147]] | https://openalex.org/W4245209597 |
ed906496-21ea-4d19-a827-6d3b030f3c76 | where \(I_{0}(z), I_{1}(z), K_{0}(z), K_{1}(z)\) refer to the modified Bessel functions of the first and second kind and of zero and first order, respectively [1]}.
Using \(I_{0}(z)K_{1}(z)+I_{1}(z)K_{0}(z) = z^{-1}\) and combining Eqs. (REF ) and (REF ) we arrive at
\(s\tilde{S}(s\vert \ast ) = \frac{\kappa _{d}qK_{1}(qa)}{qK_{1}(qa)(s+\kappa _{d})+\kappa _{a}/(2\pi a D)sK_{0}(qa)}.\)
| [1] | [[152, 155]] | https://openalex.org/W2801179766 |
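The Bessel identity invoked in this derivation can be checked numerically with SciPy:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

z = np.linspace(0.1, 10.0, 5)
# I0(z) K1(z) + I1(z) K0(z) = 1/z  (Wronskian-type relation used above)
print(i0(z) * k1(z) + i1(z) * k0(z) - 1.0 / z)  # ~0 at every z
```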
fde80c3c-f175-41af-8fe8-9a5661bc7057 | Irrespective of the single- or two-stream method, all deep learning frameworks are sensitive to the loss that needs to be minimized [1]}, [2]}. Several classical works showed that gradient descent minimizing cross-entropy performs better in terms of classification and has fast convergence but, to some extent, leads to overfitting. Several regularization techniques such as dropout, L1, L2, etc., have been used to overcome the overfitting issue, and several other, more exotic objectives have performed exceptionally well compared with the standard cross-entropy [3]}. Recently, a work [4]} proposed a Label Smoothing (LS) technique that improves accuracy significantly by computing the cross-entropy with a weighted mixture of targets with the uniform distribution instead of hard-coded targets.
| [4] | [
[
592,
595
]
] | https://openalex.org/W2183341477 |
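As a concrete illustration of the label-smoothing idea described in the row above (an editorial NumPy sketch, not code from [4]}): the hard one-hot target is replaced by \((1-\epsilon )\cdot \text{one-hot}+\epsilon /K\) over \(K\) classes before the cross-entropy is computed; \(\epsilon =0.1\) is a typical illustrative choice.

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, eps=0.1):
    """Cross-entropy against (1 - eps) * one_hot + eps / n_classes targets."""
    n_classes = logits.shape[-1]
    z = logits - logits.max(axis=-1, keepdims=True)          # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    target = (1.0 - eps) * np.eye(n_classes)[labels] + eps / n_classes
    return -(target * log_probs).sum(axis=-1).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
print(smoothed_cross_entropy(logits, labels))           # smoothed loss
print(smoothed_cross_entropy(logits, labels, eps=0.0))  # reduces to plain CE
```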
a1b242dc-5a20-4f40-b36a-e1b647872f10 | (iv) Second leap hyper-Zagreb Index
\(HLM_2(G) &=&\sum \limits _{uv\in E(G)}[deg_2(u)deg_2(v)]^2\\&=& 2(4)^{2}+4(6)^{2}+4(12)^{2}+2(p-1)(9)^{2}+4(p-1)(15)^{2}+p(20)^{2}+\\& & 3(p-1)(25)^{2}\\&=& 2(16)+4(36)+4(144)+2(p-1)(81)+4(p-1)(225)+400p+\\& & 3(p-1)(625)\\&=& 32+144+576+162(p-1)+900(p-1)+400p+1875(p-1)\\&=& 3337p-2185\)
Next, on the basis of the above results, we obtain the first and second leap Zagreb polynomials as well as the first and second leap hyper-Zagreb polynomials for \(Z_p\) , \(p\ge 2\) .
Theorem 2 Let \(G\) be a molecular graph of zigzag benzenoid system \(Z_p\) , \(p\ge 2\) . Then
(i) \(LM_1(G:x)=2x^4+4x^5+2(p-1)x^6+4x^7+4(p-1)x^8+px^9+3(p-1)x^{10}\)
(ii) \(LM_2(G:x)=2x^4+4x^6+2(p-1)x^9+4x^{12}+4(p-1)x^{15}+px^{20}+3(p-1)x^{25}\)
(iii) \(HLM_1(G:x)=2x^{16}+4x^{25}+4x^{49}+4(p-1)x^{64}+px^{81}+3(p-1)x^{100}+2(p-1)x^{36}\)
(iv) \(HLM_2(G:x)=2x^{16}+4x^{36}+2(p-1)x^{81}+4x^{144}+4(p-1)x^{225}+px^{400}+3(p-1)x^{625}\)
From Table 1 and equations \(5-8\) , we obtain the following polynomials for the molecular structure \(Z_{p}\) .
(i) First leap Zagreb Polynomial
\(LM_1(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)+deg_2(v)]}\\&=&2x^4+4x^5+2(p-1)x^6+4x^7+4(p-1)x^8+px^9+3(p-1)x^{10}\)
(ii) Second leap Zagreb Polynomial
\(LM_2(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)deg_2(v)]}\\&=&2x^4+4x^6+2(p-1)x^9+4x^{12}+4(p-1)x^{15}+px^{20}+\\& &3(p-1)x^{25}\)
(iii) First leap hyper-Zagreb Polynomial
\(HLM_1(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)+deg_2(v)]^2}\\&=&2x^{4^2}+4x^{5^2}+4x^{7^2}+4(p-1)x^{8^2}+px^{9^2}+3(p-1)x^{10^2}+\\& & 2(p-1)x^{6^2}\\&=&2x^{16}+4x^{25}+2(p-1)x^{36}+4x^{49}+4(p-1)x^{64}+px^{81}+\\& & 3(p-1)x^{100}\)
(iv) Second leap hyper-Zagreb Polynomial
\(HLM_2(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)deg_2(v)]^2}\\&=&2x^{16}+4x^{36}+2(p-1)x^{81}+4x^{144}+4(p-1)x^{225}+px^{400}+\\& & 3(p-1)x^{625}\)
Now, the newly defined \(k\) -distance degree based topological indices such as the leap Sombor index, hyper leap forgotten index, leap \(Y\) index, and leap \(Y\) coindex are obtained for the molecular structure \(Z_p\) , \(p\ge 2\) .
Theorem 3 Let \(G\) be a molecular graph of the zigzag benzenoid system \(Z_p\) , \(p\ge 2\) . Then
(i) \(LSO(G)=4(1+\sqrt{5}+\sqrt{7})+(p-1)(8\sqrt{2}+3\sqrt{10}+2\sqrt{6})+3p\)
(ii) \(HLF(G)=14453p-9468\)
(iii) \(LY(G)=1655p-930\)
(iv)\(\overline{LY}(G)=-1292p^2+2068p-776.\)
Utilising edge partition from Table 1, we calculate the new \(k\) -distance degree based topological indices defined in equations \(9-13\) as:
(i) Leap Sombor Index
\(LSO(G) &=& \sum _{uv\in E(G)}\sqrt{deg_{2}(u)+deg_{2}(v)}\\ &=&2(4)^{\frac{1}{2}}+4(5)^{\frac{1}{2}}+4(7)^{\frac{1}{2}}+4(p-1)(8)^{\frac{1}{2}}+p(9)^{\frac{1}{2}}+3(p-1)(10)^{\frac{1}{2}}+\\& & 2(p-1)(6)^{\frac{1}{2}}\\&=&4 +4\sqrt{5}+4\sqrt{7}+8(p-1)\sqrt{2}+3p+3(p-1)\sqrt{10}+2(p-1)\sqrt{6}\\&=& 4(1+\sqrt{5}+\sqrt{7})+(p-1)(8\sqrt{2}+3\sqrt{10}+2\sqrt{6})+3p\)
(ii) Hyper leap forgotten Index
\(HLF(G)&=&\sum \limits _{uv\in E(G)}[deg^2_2(u)+deg^2_2(v)]^2\\&=&2(4+4)^2+4(4+9)^2+4(9+16)^2+4(p-1)(9+25)^2+p(16+25)^2\\& & +3(p-1)(25+25)^2+2(p-1)(9+9)^2\\&=&2(64)+4(169)+4(625)+4(p-1)(1156)+1681p+3(p-1)(2500)+\\& & 2(p-1)(324)\\&=&128+676+2500+4624(p-1)+1681p+7500(p-1)+648(p-1)\\&=&3304+(p-1)(4624+7500+648)+1681p\\&=&14453p-9468\)
(iii) Leap Y Index
\(LY(G)&=&\sum \limits _{uv\in E(G)}[deg^3_2(u)+deg^3_2(v)]\\&=&2(8+8)+4(8+27)+4(27+64)+4(p-1)(27+125)+p(64+125)\\& & +3(p-1)(125+125)+2(p-1)(27+27)\\&=&32+140+364+608(p-1)+189p+750(p-1)+108(p-1)\\&=&536+1466(p-1)+189p\\&=&1655p-930\)
(iv) Leap Y coindex
\(\overline{LY}(G)&=&(p-1)(LF(G)-LY(G))\\&=&pLF(G)-pLY(G)-LF(G)+LY(G)\\&=&p(363p-154)-p(1655p-930)-(363p-154)+(1655p-930)\\&=&363p^2-154p-1655p^2+930p-363p+154+1655p-930\\&=&-1292p^2+2068p-776.\)
where \(LF(G) = 363p-154\) .
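The closed forms above can be machine-checked directly from the edge partition of \(Z_p\) in Table 1. The following sketch (an editorial addition, not part of the original paper; it assumes the sympy library) encodes the partition as (multiplicity, \(deg_2(u)\) , \(deg_2(v)\) ) triples and verifies \(HLM_2\) (with the corrected constant term), \(HLF\) , \(LY\) , and the \(\overline{LY}\) identity.

```python
# Symbolic verification of the zigzag closed forms (editorial sketch).
from sympy import symbols, expand

p = symbols('p')
# (multiplicity, deg_2(u), deg_2(v)) triples read off Table 1 for Z_p
partition = [(2, 2, 2), (4, 2, 3), (2*(p - 1), 3, 3), (4, 3, 4),
             (4*(p - 1), 3, 5), (p, 4, 5), (3*(p - 1), 5, 5)]

def index(f):
    """Sum f(deg_2(u), deg_2(v)) over the edge partition."""
    return expand(sum(m * f(u, v) for m, u, v in partition))

assert index(lambda u, v: (u*v)**2) == 3337*p - 2185            # HLM_2
assert index(lambda u, v: (u**2 + v**2)**2) == 14453*p - 9468   # HLF
assert index(lambda u, v: u**3 + v**3) == 1655*p - 930          # LY
LF, LY = 363*p - 154, 1655*p - 930
assert expand((p - 1)*(LF - LY)) == -1292*p**2 + 2068*p - 776   # leap Y coindex
print("zigzag closed forms verified")
```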
Rhombic Benzenoid System.
In this subsection, we discuss the whole structure and the edge partition technique of the molecular structure of rhombic benzenoid system \(R_p\) . Furthermore, the results related to the distance degree, \(k\) -distance degree based topological indices and distance degree based polynomials are obtained.
Consider a benzenoid system in which hexagons are arranged to form a rhombic shape \(R_{p}\) , where \(p\) denotes the number of hexagons along each side of the rhombus, as shown in Figure 3. There are \(2p(p + 2)\) vertices and \(3p^2 + 4p - 1\) edges in this benzenoid system.
<FIGURE>We partitioned the edge set into six groups based on the distance degrees of each edge's end vertices. The edges with \(deg_{2}(u)=2, deg_{2}(v)=3\) appear at the extreme top and bottom corners of the rhombus, and there are exactly 4 of them. On the outer boundary of the rhombus, the edges with \(deg_{2}(u)=3, deg_{2}(v)=3\) lie at the outer middle corners of the left and right sides, while those with \(deg_{2}(u)=3, deg_{2}(v)=4\) lie on the upper and lower sides of the outer middle corners as well as on both sides of the extreme top and bottom corners; these number exactly 2 and 8, respectively. The edges with \(deg_{2}(u)=4, deg_{2}(v)=4\) , \(deg_{2}(u)=4, deg_{2}(v)=6\) , and \(deg_{2}(u)=6, deg_{2}(v)=6\) appear in the inner filling of the rhombus, except that the \(deg_{2}(u)=4, deg_{2}(v)=4\) edges lie on the outer boundary and exist only if \(p>2\) ; their frequencies are \(8(p-2)\) , \(4(p-1)\) , and \((p-1)^{2}+2(p-1)(p-2)\) , respectively. Table 2 summarizes the edge partition of the rhombic benzenoid system \(R_p\) .
<TABLE>Next, the distance degree based topological indices such as the first and second leap Zagreb indices as well as the first and second leap hyper-Zagreb indices for the molecular structure \(R_p\) , \(p\ge 2\) are obtained.
Theorem 4 Let \(G\) be a molecular graph of the rhombic benzenoid system \(R_p\) , \(p\ge 2\) . Then
(i) \(LM_1(G)=36p^2+8p-20\)
(ii) \(LM_2(G)=108p^2-64p-34\)
(iii) \(HLM_1(G)=432p^2-240p-140\)
(iv) \(HLM_2(G)=3888p^2-6016p+1538\)
Let \(G\) be a molecular graph of rhombic benzenoid system \(R_p\) , \(p\ge 2\) . Then \(|V(G)|=2p(p+2)\) and \(|E(G)|=3p^2+4p-1\) . Using edge partition from Table 2, we compute different indices for \(R_p\) defined in equations \(1-4\) .
(i) First leap Zagreb Index
\(LM_1(G)&=&\sum \limits _{uv\in E(G)}[deg_2(u)+deg_2(v)]\\&=&4(5)+2(6)+8(7)+8(p-2)(8)+4(p-1)(10)+((p-1)^2+\\& &2(p-1)(p-2))(12)\\&=&20+12+56+64(p-2)+40(p-1)+12(3p^{2}-8p+5)\\&=&88-128-40+60+p(64+40-96)+36p^{2}\\&=&36p^{2}+8p-20\)
(ii) Second leap Zagreb Index
\(LM_2(G)&=&\sum \limits _{uv\in E(G)}[deg_2(u)deg_2(v)]\\&=&4(6)+2(9)+8(12)+8(p-2)(16)+4(p-1)(24)+((p-1)^2+\\& &2(p-1)(p-2))36\\&=&24+18+96+128(p-2)+96(p-1)+36(3p^{2}-8p+5)\\&=&138-256-96+180+p(128+96-288)+108p^{2}\\&=&108p^{2}-64p-34\)
(iii) First leap hyper-Zagreb Index
\(HLM_1(G)&=&\sum \limits _{uv\in E(G)}[deg_2(u)+deg_2(v)]^2\\&=&4(5)^{2}+2(6)^{2}+8(7)^{2}+8(p-2)(8)^{2}+4(p-1)(10)^{2}+((p-1)^2\\& &+2(p-1)(p-2))(12)^{2}\\&=&4(25)+2(36)+8(49)+8(p-2)(64)+4(p-1)(100)+((p-1)^2\\& &+2(p-1)(p-2))(144)\\&=&100+72+392+512(p-2)+400(p-1)+144(3p^{2}-8p+5)\\&=&564-1024-400+720+p(512+400-1152)+432p^{2}\\&=&432p^{2}-240p-140\)
(iv) Second leap hyper-Zagreb Index
\(HLM_2(G) &=&\sum \limits _{uv\in E(G)}[deg_2(u)deg_2(v)]^2\\&=&4(6)^{2}+2(9)^{2}+8(12)^{2}+8(p-2)(16)^{2}+4(p-1)(24)^{2}+((p-1)^2\\& &+2(p-1)(p-2))36^{2}\\&=&144+162+1152+2048(p-2)+2304(p-1)+1296(3p^{2}-8p+5)\\&=&1458-4096-2304+6480+p(2048+2304-10368)+3888p^{2}\\&=&3888p^{2}-6016p+1538\)
Further, the first and second leap Zagreb as well as the first and second leap hyper Zagreb polynomials for \(R_p\) , \(p\ge 2\) are obtained.
Theorem 5 Let \(G\) be a molecular graph of rhombic benzenoid system \(R_p\) , \(p\ge 2\) . Then
(i) \(LM_1(G:x)=4x^5+2x^6+8x^7+8(p-2)x^8+4(p-1)x^{10}+(3p^2-8p+5)x^{12}\)
(ii) \(LM_2(G:x)=4x^6+2x^9+8x^{12}+8(p-2)x^{16}+4(p-1)x^{24}+(3p^2-8p+5)x^{36}\)
(iii) \(HLM_1(G:x)=4x^{25}+2x^{36}+8x^{49}+8(p-2)x^{64}+4(p-1)x^{100}+(3p^2-8p+5)x^{144}\)
(iv) \(HLM_2(G:x)=4x^{36}+2x^{81}+8x^{144}+8(p-2)x^{256}+4(p-1)x^{576}+(3p^2-8p+5)x^{1296}\)
Now we continue to obtain the polynomials for \(R_{p}\) defined in equations \(5-8\) , using Table 2.
(i) First leap Zagreb Polynomial
\(LM_1(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)+deg_2(v)]}\\&=&4x^5+2x^6+8x^7+8(p-2)x^8+4(p-1)x^{10}+(3p^2-8p+5)x^{12}\)
(ii) Second leap Zagreb Polynomial
\(LM_2(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)deg_2(v)]}\\&=&4x^6+2x^9+8x^{12}+8(p-2)x^{16}+4(p-1)x^{24}+(3p^2-8p+5)x^{36}\)
(iii) First leap hyper-Zagreb Polynomial
\(HLM_1(G:x)&=& \sum \limits _{uv\in E(G)}x^{[deg_2(u)+deg_2(v)]^2}\\&=& 4x^{5^2}+2x^{6^2}+8x^{7^2}+8(p-2)x^{8^2}+4(p-1)x^{10^2}+(3p^2-8p+5)x^{12^2}\\&=&4x^{25}+2x^{36}+8x^{49}+8(p-2)x^{64}+4(p-1)x^{100}+(3p^2-8p+5)x^{144}\)
(iv) Second leap hyper-Zagreb Polynomial
\(HLM_2(G:x)&=&\sum \limits _{uv\in E(G)}x^{[deg_2(u)deg_2(v)]^2}\\&=&4x^{6^2}+2x^{9^2}+8x^{12^2}+8(p-2)x^{16^2}+4(p-1)x^{24^2}+(3p^2-8p+5)x^{36^2}\\&=&4x^{36}+2x^{81}+8x^{144}+8(p-2)x^{256}+4(p-1)x^{576}+(3p^2-8p+5)x^{1296}\)
Next, the newly defined \(k\) -distance degree based topological indices for the rhombic benzenoid system \(R_p\) , \(p\ge 2\) are obtained.
Theorem 6 Let \(G\) be a molecular graph of rhombic benzenoid system \(R_p\) , \(p\ge 2\) . Then
(i) \(LSO(G)=6p^2\sqrt{3}+p(16\sqrt{2}+4\sqrt{10}-16\sqrt{3})+4(\sqrt{5}-\sqrt{10})+2(\sqrt{6}+4\sqrt{7}+5\sqrt{3}-16\sqrt{2})\)
(ii)\(HLF(G)=15552p^2-22464p+5044\)
(iii) \(LY(G)=1296p^2-1312p-32\)
(iv) \(\overline{LY}(G)=-1080p^3+2280p^2-1240p+40\) .
By using the edge partition from Table 2, we calculate the new \(k\) -distance degree based topological indices for \(R_{p}\) defined in equations \(9-13\) as:
(i) Leap Sombor Index
\(LSO(G)&=&\sum \limits _{uv\in E(G)}\sqrt{deg_2(u)+deg_2(v)}\\&=& 4(5)^{\frac{1}{2}}+ 2(6)^{\frac{1}{2}}+ 8(7)^{\frac{1}{2}}+ 8(p-2)(8)^{\frac{1}{2}}+4(p-1)(10)^{\frac{1}{2}}+((p-1)^2+\\& &2(p-1)(p-2))(12)^{\frac{1}{2}}\\&=&4\sqrt{5}+2\sqrt{6}+8\sqrt{7}+16(p-2)\sqrt{2}+4(p-1)\sqrt{10}+(3p^2-8p+5)2\sqrt{3}\\&=&4\sqrt{5}+2\sqrt{6}+8\sqrt{7}+16p\sqrt{2}-32\sqrt{2}+4p\sqrt{10}-4\sqrt{10}+6p^2\sqrt{3}-\\& &16p\sqrt{3}+10\sqrt{3}\\&=&6p^2\sqrt{3}+p(16\sqrt{2}+4\sqrt{10}-16\sqrt{3})+4(\sqrt{5}-\sqrt{10})+2(\sqrt{6}+4\sqrt{7}+\\& &5\sqrt{3}-16\sqrt{2})\)
(ii) Hyper leap forgotten Index
\(HLF(G)&=&\sum \limits _{uv\in E(G)}[deg^2_2(u)+deg^2_2(v)]^2\\&=&4(13)^2+2(18)^2+8(25)^2+8(p-2)(32)^2+4(p-1)(52)^2+(3p^2-8p+5)\\& &(72)^2\\&=&676+648+5000+8192(p-2)+10816(p-1)+15552p^2-41472p\\& &+25920\\&=&15552p^2-22464p+5044\)
(iii) Leap Y Index
\(LY(G)&=&\sum \limits _{uv\in E(G)}[deg^3_2(u)+deg^3_2(v)]\\&=&4(8+27)+2(27+27)+8(27+64)+8(p-2)(64+64)+4(p-1)\\& &(64+216)+((p-1)^2+2(p-1)(p-2))(216+216)\\&=&140+108+728+1024(p-2)+1120(p-1)+1296p^2-3456p+\\& &2160\\&=&1296p^2-1312p-32\)
(iv) Leap Y coindex
\(\overline{LY}(G)&=&(p-1)(LF(G)-LY(G))\\&=&(p-1)((216p^2-112p-72)-(1296p^2-1312p-32))\\&=&(p-1)(-1080p^2+1200p-40)\\&=&-1080p^3+2280p^2-1240p+40.\)
where \(LF(G) = 216p^2-112p-72\) .
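The same symbolic check applies to the rhombic system, using the edge partition from Table 2 (again an editorial sketch, assuming sympy; the coindex assertion uses the corrected closed form above).

```python
# Symbolic verification of the rhombic closed forms (editorial sketch).
from sympy import symbols, expand

p = symbols('p')
# (multiplicity, deg_2(u), deg_2(v)) triples read off Table 2 for R_p
partition = [(4, 2, 3), (2, 3, 3), (8, 3, 4), (8*(p - 2), 4, 4),
             (4*(p - 1), 4, 6), ((p - 1)**2 + 2*(p - 1)*(p - 2), 6, 6)]

def index(f):
    return expand(sum(m * f(u, v) for m, u, v in partition))

assert index(lambda u, v: u + v) == 36*p**2 + 8*p - 20                       # LM_1
assert index(lambda u, v: u*v) == 108*p**2 - 64*p - 34                       # LM_2
assert index(lambda u, v: (u + v)**2) == 432*p**2 - 240*p - 140              # HLM_1
assert index(lambda u, v: (u*v)**2) == 3888*p**2 - 6016*p + 1538             # HLM_2
assert index(lambda u, v: (u**2 + v**2)**2) == 15552*p**2 - 22464*p + 5044   # HLF
assert index(lambda u, v: u**3 + v**3) == 1296*p**2 - 1312*p - 32            # LY
LF, LY = 216*p**2 - 112*p - 72, 1296*p**2 - 1312*p - 32
assert expand((p - 1)*(LF - LY)) == -1080*p**3 + 2280*p**2 - 1240*p + 40     # coindex
print("rhombic closed forms verified")
```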
Numerical Results and Discussions
In this section, we present numerical results for the distance-degree based topological indices of the zigzag and rhombic benzenoid systems. We compute numerical tables for distance-degree based indices such as the first and second leap Zagreb indices, the first and second leap hyper-Zagreb indices, the leap Sombor index, the hyper leap forgotten index, the leap \(Y\) index, and the leap \(Y\) coindex, for various values of \(p\) (Tables 3-6). Moreover, we plot line graphs (Figures 4-7) for some values of \(p\) to investigate the behaviour of these topological indices.
<TABLE><FIGURE><TABLE><FIGURE><TABLE><FIGURE><TABLE><FIGURE>From Figures 4, 5, 6, and 7, we observe that all the indices increase as \(p\) increases, except the leap \(Y\) coindex of \(Z_p\) , which decreases as \(p\) increases.
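For reproducibility, the tabulated trends can be regenerated directly from the closed forms (an editorial sketch in plain Python, restricted to the closed forms stated above; plotting the rows, e.g. with matplotlib, yields line graphs analogous to Figures 4-7).

```python
# Evaluate a few of the closed-form indices for p = 2,...,6 (editorial sketch).
# All rows grow with p except the leap Y coindex of Z_p, which decreases.
closed_forms = {
    'HLF(Z_p)':  lambda p: 14453*p - 9468,
    'LY(Z_p)':   lambda p: 1655*p - 930,
    'LYco(Z_p)': lambda p: -1292*p**2 + 2068*p - 776,
    'LM1(R_p)':  lambda p: 36*p**2 + 8*p - 20,
    'HLF(R_p)':  lambda p: 15552*p**2 - 22464*p + 5044,
}
for name, f in closed_forms.items():
    print(f"{name:>10}:", [f(p) for p in range(2, 7)])
```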
The modified first Zagreb connection index is a chemical descriptor that first appeared in 1972 in the calculation of the total electron energy of alternant hydrocarbons [1]}. According to research, the first Zagreb connection index performs better as a predictor of the entropy, enthalpy of vaporisation, standard enthalpy of vaporisation, and acentric factor of octane isomers than the other Zagreb connection indices.
From Table 3 and Figure 4, we analyze that for a zigzag benzenoid system the second hyper-leap Zagreb index attains maximum predictive ability than the other three leap Zagreb indices.
A dataset of octane isomers was used to assess the forgotten index's predictive ability, and it was found that its predictive power is quite comparable to that of the first Zagreb index [2]}. The zigzag and rhombic benzenoid systems have higher leap forgotten and hyper leap forgotten indices than first leap and first hyper leap Zagreb indices.
In mathematical chemistry, the Sombor indices are very predictive. They can be used to obtain upper and lower bounds on the graph energy, as well as the relationship between the Sombor index and graph energy for molecular graphs [3]}. The leap Sombor index of these two molecular graphs increases as \(p\) increases.
The \(Y\) -coindex is one of the most useful correlation indices for comprehending the physicochemical properties of octane isomers [4]}. From Table 4 and Figure 5, we observe that the leap \(Y\) index increases and the leap \(Y\) coindex decreases as \(p\) increases for the zigzag benzenoid system.
Moreover, for the rhombic benzenoid system, the leap \(Y\) index increases while the leap \(Y\) coindex decreases as \(p\) increases (see Table 6 and Figure 7).
Conclusion and Future Work
The topological indices can be used to identify the physicochemical characteristics of chemical compounds. In this work, we utilized the edge partition technique to obtain results for distance-degree based topological indices such as the first and second leap Zagreb indices and the first and second leap hyper-Zagreb indices of the zigzag and rhombic benzenoid systems. Furthermore, expressions for the new \(k\) -distance degree-based topological indices such as the leap Sombor index, hyper leap forgotten index, leap \(Y\) index, and leap \(Y\) coindex of the zigzag and rhombic benzenoid systems were derived. We also obtained their numerical values and plotted the graphs of these indices for some values of \(p\) in order to assess the significance of the physicochemical characteristics of the zigzag and rhombic benzenoid systems.
We mention some possible directions for future research, such as computing multiplicative \(k\) -distance degree-based topological indices for certain molecular graphs and determining their ability to predict physicochemical characteristics on a dataset of octane isomers. One can also find bounds for the newly defined \(k\) -distance degree-based topological indices.
Novelty statement
Graph invariants play an important role in analyzing the abstract structures of the molecular graphs of chemical compounds. A topological index is a graph invariant that describes the topology of a chemical compound based on its molecular structure. Various distance-degree based topological indices of chemical graphs have recently been computed, but there are still many chemical compounds for which distance-degree based topological indices have not yet been found. Therefore, in this article, we compute some new distance-degree based topological indices for the molecular graphs of certain classes of benzenoid systems.
Furthermore, we compute their numerical values and plot their line graphs for comparison of these indices.
Data Availibility
No data were used to support the findings of this study.
Conflicts of Interest
There are no conflicts of interest declared by the authors.
Funding:
Not Applicable. No funds have been received.
Authors contributions:
First draft was prepared by Sohan Lal and Vijay Kumar Bhat; figures have been prepared by Sohan Lal and Karnika Sharma; all authors have reviewed the final draft.
| [2] | [
[
12666,
12669
]
] | https://openalex.org/W2000076674 |
1478143a-7983-4069-be6e-5822df644bf8 | It has been shown that one can extend the thermodynamic phase space of a
Reissner-Nordstrom (RN) black holes in an anti-de Sitter (AdS) space, by
considering the cosmological constant as a thermodynamic pressure, \(P=-\Lambda /8\pi \) and its conjugate quantity as a thermodynamic volume [1]}, [2]}, [3]}, [4]}, [5]}, [6]}. The studies on the critical behavior of black hole
spacetimes, in a wide range of gravity theories, have attracted much
interest. Let us review some works in this direction. For example, \(P\) -\(V\)
criticality of charged AdS black holes has been investigated in [7]}
and it was shown that indeed there is a complete analogy for RN-AdS black
holes with the van der Waals liquid-gas system. In particular, it was found
that the critical exponents of this system coincide with those of the van
der Waals system [7]}. When the gauge field is the Born-Infeld
nonlinear electrodynamics, extended phase space thermodynamics of
charged-AdS black holes have been investigated in [9]}. In this
case, one needs to introduce a new thermodynamic quantity conjugate to the
Born-Infeld parameter which is required for consistency of both the first
law of thermodynamics and the corresponding Smarr relation [9]}.
The studies were also extended to rotating black holes. In this regard,
phase transition, critical behavior, and critical exponents of Myers-Perry
black holes have been explored in [11]}. Besides, it was shown that
charged and rotating black holes in three spacetime dimensions do not
exhibit critical phenomena [9]}. Other studies on the critical
behavior of black hole spacetimes in an extended phase space have been
carried out in [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}.
| [15] | [
[
1679,
1683
]
] | https://openalex.org/W2043131456 |
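As an illustration of the van der Waals analogy mentioned in the row above, the critical point can be located symbolically (an editorial sketch, assuming sympy; the equation of state \(P=T/v-1/(2\pi v^{2})+2Q^{2}/(\pi v^{4})\) with specific volume \(v=2r_{+}\) is the standard four-dimensional RN-AdS form of [7]}, quoted here as an assumption, not code from the cited works).

```python
# Editorial sketch: critical point of the RN-AdS equation of state and the
# van der Waals ratio P_c v_c / T_c = 3/8.
from sympy import symbols, diff, solve, simplify, pi, Rational

T, v, Q = symbols('T v Q', positive=True)
P = T/v - 1/(2*pi*v**2) + 2*Q**2/(pi*v**4)   # assumed standard form, v = 2 r_+

# critical point: dP/dv = 0 and d^2P/dv^2 = 0
sol = solve([diff(P, v), diff(P, v, 2)], [T, v], dict=True)[0]
Tc, vc = sol[T], sol[v]
Pc = simplify(P.subs({T: Tc, v: vc}))

print("T_c =", Tc, " v_c =", vc, " P_c =", Pc)
assert simplify(Pc*vc/Tc - Rational(3, 8)) == 0    # same ratio as van der Waals
```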