text | labels | Predictions |
---|---|---|
Title: Putin's peaks: Russian election data revisited,
Abstract: We study the anomalous prevalence of integer percentages in the last
parliamentary (2016) and presidential (2018) Russian elections. We show how
this anomaly in Russian federal elections has evolved since 2000. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
Title: Pattern recognition techniques for Boson Sampling validation,
Abstract: The difficulty of validating large-scale quantum devices, such as Boson
Samplers, poses a major challenge for any research program that aims to show
quantum advantages over classical hardware. To address this problem, we propose
a novel data-driven approach wherein models are trained to identify common
pathologies using unsupervised machine learning methods. We illustrate this
idea by training a classifier that exploits K-means clustering to distinguish
between Boson Samplers that use indistinguishable photons from those that do
not. We train the model on numerical simulations of small-scale Boson Samplers
and then validate the pattern recognition technique on larger numerical
simulations as well as on photonic chips in both traditional Boson Sampling and
scattershot experiments. The effectiveness of this method relies on
particle-type-dependent internal correlations present in the output
distributions. This approach performs substantially better on the test data
than previous methods and underscores the ability to further generalize its
operation beyond the scope of the examples that it was trained on. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
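The clustering idea in the Boson Sampling abstract above can be illustrated with a toy sketch (hypothetical correlation features and a plain K-means, not the authors' actual pipeline): samplers whose output correlations fall into two well-separated regimes get separated without any labels.

```python
import numpy as np

def kmeans(X, k=2, iters=50):
    """Plain K-means: assign each point to its nearest centroid, then update."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)  # deterministic spread-out init
    centroids = X[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-sampler correlation features (stand-ins for the
# particle-type-dependent correlations mentioned in the abstract):
rng = np.random.default_rng(1)
indist = rng.normal(0.8, 0.05, size=(20, 3))  # "indistinguishable photons"
dist = rng.normal(0.2, 0.05, size=(20, 3))    # "distinguishable photons"
labels = kmeans(np.vstack([indist, dist]))
```

With features this well separated, the two groups end up in different clusters, which is the unsupervised validation signal the abstract describes.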
Title: Applications of Trajectory Data from the Perspective of a Road Transportation Agency: Literature Review and Maryland Case Study,
Abstract: Transportation agencies have an opportunity to leverage
increasingly-available trajectory datasets to improve their analyses and
decision-making processes. However, this data is typically purchased from
vendors, which means agencies must understand its potential benefits beforehand
in order to properly assess its value relative to the cost of acquisition.
While the literature concerned with trajectory data is rich, it is naturally
fragmented and focused on technical contributions in niche areas, which makes
it difficult for government agencies to assess its value across different
transportation domains. To overcome this issue, the current paper explores
trajectory data from the perspective of a road transportation agency interested
in acquiring trajectories to enhance its analyses. The paper provides a
literature review illustrating applications of trajectory data in six areas of
road transportation systems analysis: demand estimation, modeling human
behavior, designing public transit, traffic performance measurement and
prediction, environment and safety. In addition, it visually explores 20
million GPS traces in Maryland, illustrating existing and suggesting new
applications of trajectory data. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Towards the dual motivic Steenrod algebra in positive characteristic,
Abstract: The dual motivic Steenrod algebra with mod $\ell$ coefficients was computed
by Voevodsky over a base field of characteristic zero, and by Hoyois, Kelly,
and {\O}stv{\ae}r over a base field of characteristic $p \neq \ell$. In the
case $p = \ell$, we show that the conjectured answer is a retract of the actual
answer. We also describe the slices of the algebraic cobordism spectrum $MGL$:
we show that the conjectured form of $s_n MGL$ is a retract of the actual
answer. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis,
Abstract: The Paulsen problem is a basic open problem in operator theory: Given vectors
$u_1, \ldots, u_n \in \mathbb R^d$ that $\epsilon$-nearly satisfy
Parseval's condition and the equal norm condition, are they close to a set of
vectors $v_1, \ldots, v_n \in \mathbb R^d$ that exactly satisfy Parseval's
condition and the equal norm condition? Given $u_1, \ldots, u_n$, the squared
distance (to the set of exact solutions) is defined as $\inf_{v} \sum_{i=1}^n
\| u_i - v_i \|_2^2$ where the infimum is over the set of exact solutions.
Previous results show that the squared distance of any $\epsilon$-nearly
solution is at most $O({\rm{poly}}(d,n,\epsilon))$ and there are
$\epsilon$-nearly solutions with squared distance at least $\Omega(d\epsilon)$.
The fundamental open question is whether the squared distance can be
independent of the number of vectors $n$.
We answer this question affirmatively by proving that the squared distance of
any $\epsilon$-nearly solution is $O(d^{13/2} \epsilon)$. Our approach is based
on a continuous version of the operator scaling algorithm and consists of two
parts. First, we define a dynamical system based on operator scaling and use it
to prove that the squared distance of any $\epsilon$-nearly solution is $O(d^2
n \epsilon)$. Then, we show that by randomly perturbing the input vectors, the
dynamical system will converge faster and the squared distance of an
$\epsilon$-nearly solution is $O(d^{5/2} \epsilon)$ when $n$ is large enough
and $\epsilon$ is small enough. To analyze the convergence of the dynamical
system, we develop some new techniques in lower bounding the operator capacity,
a concept introduced by Gurvits to analyze the operator scaling algorithm. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Iron Snow in the Martian Core?,
Abstract: The decline of Mars' global magnetic field some 3.8-4.1 billion years ago is
thought to reflect the demise of the dynamo that operated in its liquid core.
The dynamo was probably powered by planetary cooling and so its termination is
intimately tied to the thermochemical evolution and present-day physical state
of the Martian core. Bottom-up growth of a solid inner core, the
crystallization regime for Earth's core, has been found to produce a long-lived
dynamo leading to the suggestion that the Martian core remains entirely liquid
to this day. Motivated by the experimentally-determined increase in the Fe-S
liquidus temperature with decreasing pressure at Martian core conditions, we
investigate whether Mars' core could crystallize from the top down. We focus on
the "iron snow" regime, where newly-formed solid consists of pure Fe and is
therefore heavier than the liquid. We derive global energy and entropy
equations that describe the long-timescale thermal and magnetic history of the
core from a general theory for two-phase, two-component liquid mixtures,
assuming that the snow zone is in phase equilibrium and that all solid falls
out of the layer and remelts at each timestep. Formation of snow zones occurs
for a wide range of interior and thermal properties and depends critically on
the initial sulfur concentration, x0. Release of gravitational energy and
latent heat during growth of the snow zone do not generate sufficient entropy
to restart the dynamo unless the snow zone occupies at least 400 km of the
core. Snow zones can be 1.5-2 Gyr old, though thermal stratification of the
uppermost core, not included in our model, likely delays onset. Models that
match the available magnetic and geodetic constraints have x0~10% and snow
zones that occupy approximately the top 100 km of the present-day Martian core. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Buy your coffee with bitcoin: Real-world deployment of a bitcoin point of sale terminal,
Abstract: In this paper we discuss existing approaches for Bitcoin payments
suitable for a small business handling small-value transactions. We develop an
evaluation framework utilizing security, usability, and deployability criteria, and
examine several existing systems and tools. Following a requirements engineering
approach, we designed and implemented a new Point of Sale (PoS) system that
satisfies an optimal set of criteria within our evaluation framework. Our open
source system, Aunja PoS, has been deployed in a real world cafe since October
2014. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Finance"
] |
Title: Tackling non-linearities with the effective field theory of dark energy and modified gravity,
Abstract: We present the extension of the effective field theory framework to the
mildly non-linear scales. The effective field theory approach has been
successfully applied to the late time cosmic acceleration phenomenon and it has
been shown to be a powerful method to obtain predictions about cosmological
observables on linear scales. However, mildly non-linear scales need to be
consistently considered when testing gravity theories because a large part of
the data comes from those scales. Thus, non-linear corrections to predictions
on observables coming from the linear analysis can help in discriminating among
different gravity theories. We proceed firstly by identifying the necessary
operators which need to be included in the effective field theory Lagrangian in
order to go beyond the linear order in perturbations and then we construct the
corresponding non-linear action. Moreover, we present the complete recipe to
map any single field dark energy and modified gravity models into the
non-linear effective field theory framework by considering a general action in
the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we
proceed to map the beyond-Horndeski theory and low-energy Horava gravity into
the effective field theory formalism. As a final step we derive the fourth-order
action in terms of the curvature perturbation. This allows us to identify the
non-linear contributions coming from the linear order perturbations which at
the next order act like source terms. Moreover, we confirm that the stability
requirements, ensuring the positivity of the kinetic term and of the speed of
propagation for the scalar mode, are automatically satisfied once the viability of
the theory is demanded at the linear level. The approach we present here allows one
to construct, in a model-independent way, all the relevant predictions on
observables at mildly non-linear scales. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A recommender system to restore images with impulse noise,
Abstract: We build a collaborative filtering recommender system to restore images with
impulse noise for which the noisy pixels have been previously identified. We
define this recommender system in terms of a new color image representation
using three matrices that depend on the noise-free pixels of the image to
restore, and two parameters: $k$, the number of features; and $\lambda$, the
regularization factor. We perform experiments on a well known image database to
test our algorithm and we provide image quality statistics for the results
obtained. We discuss the roles of bias and variance in the performance of our
algorithm as determined by the values of $k$ and $\lambda$, and provide
guidance on how to choose the values of these parameters. Finally, we discuss
the possibility of using our collaborative filtering recommender system to
perform image inpainting and super-resolution. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
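The recommender-system formulation in the abstract above (learning $k$ features with regularization factor $\lambda$ from the noise-free pixels) can be sketched as generic regularized matrix factorization. This is a toy illustration on assumed low-rank data, not the paper's three-matrix color representation:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.random((8, 2)) @ rng.random((2, 8))  # rank-2 "image", toy stand-in
mask = rng.random((8, 8)) > 0.3                 # True = noise-free (observed) pixel

k, lam, lr = 2, 1e-3, 0.05                      # features, regularization, step size
U = rng.normal(0, 0.1, (8, k))
V = rng.normal(0, 0.1, (8, k))
for _ in range(5000):
    E = (U @ V.T - true) * mask                 # error on observed pixels only
    U, V = U - lr * (E @ V + lam * U), V - lr * (E.T @ U + lam * V)

pred = U @ V.T   # values at ~mask entries "restore" the masked (noisy) pixels
```

The bias-variance trade-off the abstract discusses shows up here directly: larger `lam` or smaller `k` biases the fit, while too little regularization lets the factors overfit the observed pixels.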
Title: Early warning signal for interior crises in excitable systems,
Abstract: The ability to reliably predict critical transitions in dynamical systems is
a long-standing goal of diverse scientific communities. Previous work focused
on early warning signals related to local bifurcations (critical slowing down)
and non-bifurcation type transitions. We extend this toolbox and report on a
characteristic scaling behavior (critical attractor growth) which is indicative
of an impending global bifurcation, an interior crisis in excitable systems. We
demonstrate our early warning signal in a conceptual climate model as well as
in a model of coupled neurons known to exhibit extreme events. We observed
critical attractor growth prior to interior crises of chaotic as well as
strange-nonchaotic attractors. These observations promise to extend the classes
of transitions that can be predicted via early warning signals. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: STACCATO: A Novel Solution to Supernova Photometric Classification with Biased Training Sets,
Abstract: We present a new solution to the problem of classifying Type Ia supernovae
from their light curves alone given a spectroscopically confirmed but biased
training set, circumventing the need to obtain an observationally expensive
unbiased training set. We use Gaussian processes (GPs) to model the
supernovae's (SN) light curves, and demonstrate that the choice of covariance
function has only a small influence on the GPs' ability to accurately classify
SNe. We extend and improve the approach of Richards et al (2012) -- a diffusion
map combined with a random forest classifier -- to deal specifically with the
case of biased training sets. We propose a novel method, called STACCATO
('SynThetically Augmented Light Curve ClassificATiOn'), that synthetically
augments a biased training set by generating additional training data from the
fitted GPs. Key to the success of the method is the partitioning of the
observations into subgroups based on their propensity score of being included
in the training set. Using simulated light curve data, we show that STACCATO
increases performance, as measured by the area under the Receiver Operating
Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977
obtained using the 'gold standard' of an unbiased training set and
significantly improving on the previous best result of 0.88. STACCATO also
increases the true positive rate for SNIa classification by up to a factor of
50 for high-redshift/low brightness SNe. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: Coupling of multiscale and multi-continuum approaches,
Abstract: Simulating complex processes in fractured media requires some type of model
reduction. Well-known approaches include multi-continuum techniques, which have
been commonly used in approximating subgrid effects for flow and transport in
fractured media. Our goal in this paper is to (1) show a relation between
multi-continuum approaches and Generalized Multiscale Finite Element Method
(GMsFEM) and (2) to discuss coupling these approaches for solving problems in
complex multiscale fractured media. The GMsFEM, a systematic approach,
constructs multiscale basis functions via local spectral decomposition in
pre-computed snapshot spaces. We show that GMsFEM can automatically identify
separate fracture networks via local spectral problems. We discuss the relation
between these basis functions and continuums in multi-continuum methods. The
GMsFEM can automatically detect each continuum and represent the interaction
between the continuum and its surrounding (matrix). For problems with
simplified fracture networks, we propose a simplified basis construction with
the GMsFEM. This simplified approach is effective when the fracture networks
are known and have simplified geometries. We show that this approach can
achieve a similar result compared to the results using the GMsFEM with spectral
basis functions. Further, we discuss the coupling between the GMsFEM and
multi-continuum approaches. In this case, many fractures are resolved while for
unresolved fractures, we use a multi-continuum approach with local
Representative Volume Element (RVE) information. As a result, the method deals
with a system of equations on a coarse grid, where each equation represents one
of the continua on the fine grid. We present various basis construction
mechanisms and numerical results. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Minimal surfaces in the 3-sphere by desingularizing intersecting Clifford tori,
Abstract: For each integer $k \geq 2$, we apply gluing methods to construct sequences
of minimal surfaces embedded in the round $3$-sphere. We produce two types of
sequences, all desingularizing collections of intersecting Clifford tori.
Sequences of the first type converge to a collection of $k$ Clifford tori
intersecting with maximal symmetry along two circles. Near each of the
circles, after rescaling, the sequences converge smoothly on compact subsets to
a Karcher-Scherk tower of order $k$. Sequences of the second type desingularize
a collection of the same $k$ Clifford tori supplemented by an additional
Clifford torus equidistant from the original two circles of intersection, so
that the latter torus orthogonally intersects each of the former $k$ tori along
a pair of disjoint orthogonal circles, near which the corresponding rescaled
sequences converge to a singly periodic Scherk surface. The simpler examples of
the first type resemble surfaces constructed by Choe and Soret \cite{CS} by
different methods where the number of handles desingularizing each circle is
the same. There is a plethora of new examples which are more complicated and on
which the number of handles for the two circles differs. Examples of the second
type are new as well. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Novel paradigms for advanced distribution grid energy management,
Abstract: The electricity distribution grid was not designed to cope with load dynamics
imposed by high penetration of electric vehicles, neither to deal with the
increasing deployment of distributed Renewable Energy Sources. Distribution
System Operators (DSO) will increasingly rely on flexible Distributed Energy
Resources (flexible loads, controllable generation and storage) to keep the
grid stable and to ensure quality of supply. In order to properly integrate
demand-side flexibility, DSOs need new energy management architectures, capable
of fostering collaboration with wholesale market actors and prosumers. We
propose the creation of Virtual Distribution Grids (VDG) over a common physical
infrastructure, to cope with the heterogeneity of resources and actors, and with
the increasing complexity of distribution grid management and related resources
allocation problems. Focusing on residential VDG, we propose an agent-based
hierarchical architecture for providing Demand-Side Management services through
a market-based approach, where households transact their surplus/lack of energy
and their flexibility with neighbours, aggregators, utilities and DSOs. For
implementing the overall solution, we consider fine-grained control of smart
homes based on Internet of Things technology. Homes seamlessly transact
self-enforcing smart contracts over a blockchain-based generic platform.
Finally, we extend the architecture to solve existing problems on smart home
control, beyond energy management. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Finance"
] |
Title: Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade-off,
Abstract: Vector embedding is a foundational building block of many deep learning
models, especially in natural language processing. In this paper, we present a
theoretical framework for understanding the effect of dimensionality on vector
embeddings. We observe that the distributional hypothesis, a governing
principle of statistical semantics, requires a natural unitary-invariance for
vector embeddings. Motivated by the unitary-invariance observation, we propose
the Pairwise Inner Product (PIP) loss, a unitary-invariant metric on the
similarity between two embeddings. We demonstrate that the PIP loss captures
the difference in functionality between embeddings, and that the PIP loss is
tightly connected to two basic properties of vector embeddings, namely
similarity and compositionality. By formulating the embedding training process
as matrix factorization with noise, we reveal a fundamental bias-variance
trade-off between the signal spectrum and noise power in the dimensionality
selection process. This bias-variance trade-off sheds light on many empirical
observations which have not been thoroughly explained, for example the
existence of an optimal dimensionality. Moreover, we discover two new results
about vector embeddings, namely their robustness against over-parametrization
and their forward stability. The bias-variance trade-off of the PIP loss
explicitly answers the fundamental open problem of dimensionality selection for
vector embeddings. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
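The PIP loss named in the abstract above has a compact form: for an embedding matrix $E$, the PIP matrix is $EE^\top$, and the loss between two embeddings is $\|E_1E_1^\top - E_2E_2^\top\|_F$. A short numerical check of the unitary-invariance claim, on assumed toy random embeddings:

```python
import numpy as np

def pip_loss(E1, E2):
    """Frobenius distance between pairwise-inner-product (PIP) matrices."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T)

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))                  # 100 "words", 16-dim embedding
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix

rotated = pip_loss(E, E @ Q)    # unitary map leaves the PIP matrix unchanged
scaled = pip_loss(E, 2.0 * E)   # a genuine functional change gives a large loss
```

Since $(EQ)(EQ)^\top = EQQ^\top E^\top = EE^\top$, the rotated loss vanishes up to floating-point error, matching the unitary-invariance property the abstract derives from the distributional hypothesis.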
Title: Strongly correlated one-dimensional Bose-Fermi quantum mixtures: symmetry and correlations,
Abstract: We consider multi-component quantum mixtures (bosonic, fermionic, or mixed)
with strongly repulsive contact interactions in a one-dimensional harmonic
trap. In the limit of infinitely strong repulsion and zero temperature, using
the class-sum method, we study the symmetries of the spatial wave function of
the mixture. We find that the ground state of the system has the most symmetric
spatial wave function allowed by the type of mixture. This provides an example
of the generalized Lieb-Mattis theorem. Furthermore, we show that the symmetry
properties of the mixture are embedded in the large-momentum tails of the
momentum distribution, which we evaluate both at infinite repulsion by an exact
solution and at finite interactions using a numerical DMRG approach. This
implies that an experimental measurement of the Tan's contact would allow to
unambiguously determine the symmetry of any kind of multi-component mixture. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Joint Pose and Principal Curvature Refinement Using Quadrics,
Abstract: In this paper we present a novel joint approach for optimising surface
curvature and pose alignment. We present two implementations of this joint
optimisation strategy, including a fast implementation that uses two frames and
an offline multi-frame approach. We demonstrate an order of magnitude
improvement in simulation over state of the art dense relative point-to-plane
Iterative Closest Point (ICP) pose alignment using our dense joint
frame-to-frame approach and show comparable pose drift to dense point-to-plane
ICP bundle adjustment using low-cost depth sensors. Additionally our improved
joint quadric based approach can be used to more accurately estimate surface
curvature on noisy point clouds than previous approaches. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Correspondences without a Core,
Abstract: We study the formal properties of correspondences of curves without a core,
focusing on the case of étale correspondences. The motivating examples come
from Hecke correspondences of Shimura curves. Given a correspondence without a
core, we construct an infinite graph $\mathcal{G}_{gen}$ together with a large
group of "algebraic" automorphisms $A$. The graph $\mathcal{G}_{gen}$ measures
the "generic dynamics" of the correspondence. We construct specialization maps
$\mathcal{G}_{gen}\rightarrow\mathcal{G}_{phys}$ to the "physical dynamics" of
the correspondence. We also prove results on the number of bounded étale
orbits, in particular generalizing a recent theorem of Hallouin and Perret. We
use a variety of techniques: Galois theory, the theory of groups acting on
infinite graphs, and finite group schemes. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A representation theorem for stochastic processes with separable covariance functions, and its implications for emulation,
Abstract: Many applications require stochastic processes specified on two- or
higher-dimensional domains; spatial or spatial-temporal modelling, for example.
In these applications it is attractive, for conceptual simplicity and
computational tractability, to propose a covariance function that is separable;
e.g., the product of a covariance function in space and one in time. This paper
presents a representation theorem for such a proposal, and shows that all
processes with continuous separable covariance functions are second-order
identical to the product of second-order uncorrelated processes. It discusses
the implications of separable or nearly separable prior covariances for the
statistical emulation of complicated functions such as computer codes, and
critically reexamines the conventional wisdom concerning emulator structure,
and size of design. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Veamy: an extensible object-oriented C++ library for the virtual element method,
Abstract: This paper summarizes the development of Veamy, an object-oriented C++
library for the virtual element method (VEM) on general polygonal meshes, whose
modular design is focused on its extensibility. The linear elastostatic and
Poisson problems in two dimensions have been chosen as the starting stage for
the development of this library. The theory of the VEM, upon which Veamy is
built, is presented using a notation and a terminology that resemble the
language of the finite element method (FEM) in engineering analysis. Several
examples are provided to demonstrate the usage of Veamy, and in particular, one
of them features the interaction between Veamy and the polygonal mesh generator
PolyMesher. A computational performance comparison between VEM and FEM is also
conducted. Veamy is free and open source software. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Electrostatic and induction effects in the solubility of water in alkanes,
Abstract: Experiments show that at 298~K and 1 atm pressure the transfer free energy,
$\mu^{\rm ex}$, of water from its vapor to liquid normal alkanes $C_nH_{2n+2}$
($n=5\ldots12$) is negative. Earlier it was found that with the united-atom
TraPPe model for alkanes and the SPC/E model for water, one had to artificially
enhance the attractive alkane-water cross interaction to capture this behavior.
Here we revisit the calculation of $\mu^{\rm ex}$ using the polarizable AMOEBA
and the non-polarizable Charmm General (CGenFF) forcefields. We test both the
AMOEBA03 and AMOEBA14 water models; the former has been validated with the
AMOEBA alkane model while the latter is a revision of AMOEBA03 to better
describe liquid water. We calculate $\mu^{\rm ex}$ using the test particle
method. With CGenFF, $\mu^{\rm ex}$ is positive and the error relative to
experiments is about 1.5 $k_{\rm B}T$. With AMOEBA, $\mu^{\rm ex}$ is negative
and deviations relative to experiments are between 0.25 $k_{\rm B}T$ (AMOEBA14)
and 0.5 $k_{\rm B}T$ (AMOEBA03). Quantum chemical calculations in a continuum
solvent suggest that zero point effects may account for some of the deviation.
Forcefield limitations notwithstanding, electrostatic and induction effects,
commonly ignored in considerations of water-alkane interactions, appear to be
decisive in the solubility of water in alkanes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Spin tracking of polarized protons in the Main Injector at Fermilab,
Abstract: The Main Injector (MI) at Fermilab currently produces high-intensity beams of
protons at energies of 120 GeV for a variety of physics experiments.
Acceleration of polarized protons in the MI would provide opportunities for a
rich spin physics program at Fermilab. To achieve polarized proton beams in the
Fermilab accelerator complex, detailed spin tracking simulations with realistic
parameters based on the existing facility are required. This report presents
studies at the MI using a single 4-twist Siberian snake to determine the
depolarizing spin resonances for the relevant synchrotrons. Results will be
presented first for a perfect MI lattice, followed by a lattice that includes
the real MI imperfections, such as the measured magnet field errors and
quadrupole misalignments. The tolerances of each of these factors in
maintaining polarization in the Main Injector will be discussed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A class of states supporting diffusive spin dynamics in the isotropic Heisenberg model,
Abstract: The spin transport in isotropic Heisenberg model in the sector with zero
magnetization is generically super-diffusive. Despite that, we here demonstrate
that for a specific set of domain-wall-like initial product states it can
instead be diffusive. We theoretically explain the time evolution of such
states by showing that in the limiting regime of weak spatial modulation they
are approximately product states for very long times, and demonstrate that even
in the case of larger spatial modulation the bipartite entanglement entropy
grows only logarithmically in time. In the limiting regime we derive a simple
closed equation governing the dynamics, which in the continuum limit and for
the initial step magnetization profile results in a solution expressed in terms
of Fresnel integrals. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Multi-message Authentication over Noisy Channel with Secure Channel Codes,
Abstract: In this paper, we investigate multi-message authentication to combat
adversaries with infinite computational capacity. An authentication framework
over a wiretap channel $(W_1,W_2)$ is proposed to achieve information-theoretic
security with the same key. The proposed framework bridges the two research
areas in physical (PHY) layer security: secure transmission and message
authentication. Specifically, the sender Alice first transmits message $M$ to
the receiver Bob over $(W_1,W_2)$ with an error correction code; then Alice
employs a hash function (i.e., $\varepsilon$-AWU$_2$ hash functions) to
generate a message tag $S$ of message $M$ using key $K$, and encodes $S$ to a
codeword $X^n$ by leveraging an existing strongly secure channel coding with
exponentially small (in code length $n$) average probability of error; finally,
Alice sends $X^n$ over $(W_1,W_2)$ to Bob who authenticates the received
messages. We develop a theorem regarding the requirements/conditions for the
authentication framework to be information-theoretically secure for authenticating
a polynomial number of messages in terms of $n$. Based on this theorem, we
propose an authentication protocol that can guarantee the security
requirements, and prove its authentication rate can approach infinity when $n$
goes to infinity. Furthermore, we design and implement an efficient and
feasible authentication protocol over binary symmetric wiretap channel (BSWC)
by using \emph{Linear Feedback Shift Register}-based (LFSR-based) hash
functions and strongly secure polar codes. Through extensive experiments, it is
demonstrated that the proposed protocol can achieve low time cost, high
authentication rate, and low authentication error rate. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Directed negative-weight percolation,
Abstract: We consider a directed variant of the negative-weight percolation model in a
two-dimensional, periodic, square lattice. The problem exhibits edge weights
which are taken from a distribution that allows for both positive and negative
values. Additionally, in this model variant all edges are directed. For a given
realization of the disorder, a minimally weighted loop/path configuration is
determined by performing a non-trivial transformation of the original lattice
into a minimum weight perfect matching problem. For this problem, fast
polynomial-time algorithms are available, thus we could study large systems
with high accuracy. Depending on the fraction of negatively and positively
weighted edges in the lattice, a continuous phase transition can be identified,
whose characterizing critical exponents we have estimated by a finite-size
scaling analysis of the numerically obtained data. We observe a strong change
of the universality class with respect to standard directed percolation, as
well as with respect to undirected negative-weight percolation. Furthermore,
the relation to directed polymers in random media is illustrated. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Integrating Lipschitzian Dynamical Systems using Piecewise Algorithmic Differentiation,
Abstract: In this article we analyze a generalized trapezoidal rule for initial value
problems with piecewise smooth right-hand side \(F:\R^n\to\R^n\). When applied
to such a problem the classical trapezoidal rule suffers from a loss of
accuracy if the solution trajectory intersects a nondifferentiability of \(F\).
The advantage of the proposed generalized trapezoidal rule is threefold:
Firstly, we can achieve a higher convergence order than with the classical
method. Moreover, the method is energy preserving for piecewise linear
Hamiltonian systems. Finally, in analogy to the classical case we derive a
third order interpolation polynomial for the numerical trajectory. In the
smooth case the generalized rule reduces to the classical one. Hence, it is a
proper extension of the classical theory. An error estimator is given and
numerical results are presented. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: On the Chemistry of the Young Massive Protostellar core NGC 2264 CMM3,
Abstract: We present the first gas-grain astrochemical model of the NGC 2264 CMM3
protostellar core. The chemical evolution of the core is affected by changing
its physical parameters such as the total density and the amount of
gas-depletion onto grain surfaces as well as the cosmic ray ionisation rate,
$\zeta$. We estimated $\zeta_{\text {CMM3}}$ = 1.6 $\times$ 10$^{-17}$
s$^{-1}$. This value is 1.3 times higher than the standard CR ionisation rate,
$\zeta_{\text {ISM}}$ = 1.3 $\times$ 10$^{-17}$ s$^{-1}$. Species response
differently to changes into the core physical conditions, but they are more
sensitive to changes in the depletion percentage and CR ionisation rate than to
variations in the core density. Gas-phase models highlighted the importance of
surface reactions as factories of large molecules and showed that for sulphur
bearing species depletion is important to reproduce observations.
Comparing the results of the reference model with the most recent millimeter
observations of the NGC 2264 CMM3 core showed that our model is capable of
reproducing the observed abundances of most of the species during early stages
($\le$ 3$\times$10$^4$ yrs) of their chemical evolution. Models with variations
in the core density between 1 - 20 $\times$ 10$^6$ cm$^{-3}$ are also in good
agreement with observations during the early time interval 1 $\times$ 10$^4 <$
t (yr) $<$ 5 $\times$ 10$^4$. In addition, models with higher CR ionisation
rates, (5 - 10) $\times \zeta_{\text {ISM}}$, often overestimate the
fractional abundances of the species. However, models with $\zeta_{\text
{CMM3}}$ = 5 $\zeta_{\text {ISM}}$ may best fit observations at times $\sim$ 2
$\times$ 10$^4$ yrs. Our results suggest that CMM3 is (1 - 5) $\times$ 10$^4$
yrs old. Therefore, the core is chemically young and it may host a Class 0
object as suggested by previous studies. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Uniform Consistency in Stochastic Block Model with Continuous Community Label,
Abstract: \cite{bickel2009nonparametric} developed a general framework to establish
consistency of community detection in stochastic block model (SBM). In most
applications of this framework, the community label is discrete. For example,
in \citep{bickel2009nonparametric,zhao2012consistency} the degree corrected SBM
is assumed to have a discrete degree parameter. In this paper, we generalize
the method of \cite{bickel2009nonparametric} to give a consistency analysis of
the maximum likelihood estimator (MLE) in SBM with continuous community labels. We
show that there is a standard procedure to transform the $||\cdot||_2$ error
bound to the uniform error bound. We demonstrate the application of our general
results by proving the uniform consistency (strong consistency) of the MLE in
the exponential network model with interaction effect. Unfortunately, in the
continuous parameter case, the condition ensuring uniform consistency we
obtained is much stronger than that in the discrete parameter case, namely
$n\mu_n^5/(\log n)^{8}\rightarrow\infty$ versus $n\mu_n/\log
n\rightarrow\infty$, where $n\mu_n$ represents the average degree of the
network. But the continuous case is the limit of the discrete case, so it is
not surprising that, as we show, by discretizing the community label space into
sufficiently small
(but not too small) pieces and applying the MLE on the discretized community
label space, uniform consistency holds under almost the same condition as in
discrete community label space. Such a phenomenon is surprising since the
discretization does not depend on the data or the model. This reminds us of the
thresholding method. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Fine-scale population structure analysis in Armadillidium vulgare (Isopoda: Oniscidea) reveals strong female philopatry,
Abstract: In the last decades, dispersal studies have benefitted from the use of
molecular markers for detecting patterns differing between categories of
individuals, and have highlighted sex-biased dispersal in several species. To
explain this phenomenon, sex-related handicaps such as parental care have been
recently proposed as a hypothesis. Herein we tested this hypothesis in
Armadillidium vulgare, a terrestrial isopod in which females bear the totality
of the high parental care costs. We performed a fine-scale analysis of
sex-specific dispersal patterns, using males and females originating from five
sampling points located within 70 meters of each other. Based on microsatellite
markers and both F-statistics and spatial autocorrelation analyses, our results
revealed that while males did not present a significant structure at this
geographic scale, females were significantly more similar to each other when
they were collected in the same sampling point. These results support the
sex-handicap hypothesis, and we suggest that widening dispersal studies to
other isopods or crustaceans, displaying varying levels of parental care but
differing in their ecology or mating system, might shed light on the processes
underlying the evolution of sex-biased dispersal. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology"
] |
Title: Fast transforms over finite fields of characteristic two,
Abstract: An additive fast Fourier transform over a finite field of characteristic two
efficiently evaluates polynomials at every element of an $\mathbb{F}_2$-linear
subspace of the field. We view these transforms as performing a change of basis
from the monomial basis to the associated Lagrange basis, and consider the
problem of performing the various conversions between these two bases, the
associated Newton basis, and the 'novel' basis of Lin, Chung and Han (FOCS
2014). Existing algorithms are divided between two families, those designed for
arbitrary subspaces and more efficient algorithms designed for specially
constructed subspaces of fields with degree equal to a power of two. We
generalise techniques from both families to provide new conversion algorithms
that may be applied to arbitrary subspaces, but which benefit equally from the
specially constructed subspaces. We then construct subspaces of fields with
smooth degree for which our algorithms provide better performance than existing
algorithms. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Universality of group embeddability,
Abstract: Working in the framework of Borel reducibility, we study various notions of
embeddability between groups. We prove that the embeddability between countable
groups, the topological embeddability between (discrete) Polish groups, and the
isometric embeddability between separable groups with a bounded bi-invariant
complete metric are all invariantly universal analytic quasi-orders. This
strengthens some results from [Wil14] and [FLR09]. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: CUR Decompositions, Similarity Matrices, and Subspace Clustering,
Abstract: A general framework for solving the subspace clustering problem using the CUR
decomposition is presented. The CUR decomposition provides a natural way to
construct similarity matrices for data that come from a union of unknown
subspaces $\mathscr{U}=\underset{i=1}{\overset{M}\bigcup}S_i$. The similarity
matrices thus constructed give the exact clustering in the noise-free case.
Additionally, this decomposition gives rise to many distinct similarity
matrices from a given set of data, which allow enough flexibility to perform
accurate clustering of noisy data. We also show that two known methods for
subspace clustering can be derived from the CUR decomposition. An algorithm
based on the theoretical construction of similarity matrices is presented, and
experiments on synthetic and real data test the method.
Additionally, an adaptation of our CUR based similarity matrices is utilized
to provide a heuristic algorithm for subspace clustering; this algorithm yields
the best overall performance to date for clustering the Hopkins155 motion
segmentation dataset. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Approximating Geometric Knapsack via L-packings,
Abstract: We study the two-dimensional geometric knapsack problem (2DK) in which we are
given a set of n axis-aligned rectangular items, each one with an associated
profit, and an axis-aligned square knapsack. The goal is to find a
(non-overlapping) packing of a maximum profit subset of items inside the
knapsack (without rotating items). The best-known polynomial-time approximation
factor for this problem (even just in the cardinality case) is (2 + \epsilon)
[Jansen and Zhang, SODA 2004].
In this paper, we break the 2 approximation barrier, achieving a
polynomial-time (17/9 + \epsilon) < 1.89 approximation, which improves to
(558/325 + \epsilon) < 1.72 in the cardinality case. Essentially all prior work
on 2DK approximation packs items inside a constant number of rectangular
containers, where items inside each container are packed using a simple greedy
strategy. We deviate for the first time from this setting: we show that there
exists a large profit solution where items are packed inside a constant number
of containers plus one L-shaped region at the boundary of the knapsack which
contains items that are high and narrow and items that are wide and thin. As a
second major and the main algorithmic contribution of this paper, we present a
PTAS for this case. We believe that this will turn out to be useful in future
work in geometric packing problems.
We also consider the variant of the problem with rotations (2DKR), where
items can be rotated by 90 degrees. Also, in this case, the best-known
polynomial-time approximation factor (even for the cardinality case) is (2 +
\epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery
developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2
+ \epsilon)-approximation for 2DKR, which improves to (4/3 + \epsilon) in the
cardinality case. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Empirical Bayes Matrix Completion,
Abstract: We develop an empirical Bayes (EB) algorithm for the matrix completion
problem. The EB algorithm is motivated by the singular value shrinkage
estimator for matrix means by Efron and Morris (1972). Since the EB algorithm
is essentially the EM algorithm applied to a simple model, it does not require
heuristic parameter tuning other than tolerance. Numerical results demonstrated
that the EB algorithm achieves a good trade-off between accuracy and efficiency
compared to existing algorithms and that it works particularly well when the
difference between the number of rows and columns is large. Application to real
data also shows the practical utility of the EB algorithm. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
Title: Excitonic Instability and Pseudogap Formation in Nodal Line Semimetal ZrSiS,
Abstract: Electron correlation effects are studied in ZrSiS using a combination of
first-principles and model approaches. We show that basic electronic properties
of ZrSiS can be described within a two-dimensional lattice model of two nested
square lattices. A high degree of electron-hole symmetry, characteristic of ZrSiS,
is one of the key features of this model. Having determined model parameters
from first-principles calculations, we then explicitly take electron-electron
interactions into account and show that at moderately low temperatures ZrSiS
exhibits excitonic instability, leading to the formation of a pseudogap in the
electronic spectrum. The results can be understood in terms of
Coulomb-interaction-assisted pairing of electrons and holes reminiscent of that
of an excitonic insulator. Our finding allows us to provide a physical
interpretation to the unusual mass enhancement of charge carriers in ZrSiS
recently observed experimentally. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution,
Abstract: Adversarially trained deep neural networks have significantly improved
performance of single image super resolution, by hallucinating photorealistic
local textures, thereby greatly reducing the perception difference between a
real high resolution image and its super resolved (SR) counterpart. However,
application to medical imaging requires preservation of diagnostically relevant
features while refraining from introducing any diagnostically confusing
artifacts. We propose using a deep convolutional super resolution network
(SRNet) trained for (i) minimising reconstruction loss between the real and SR
images, and (ii) maximally confusing learned relativistic visual Turing test
(rVTT) networks to discriminate between (a) pair of real and SR images (T1) and
(b) pair of patches in real and SR selected from region of interest (T2). The
adversarial loss of T1 and T2 while backpropagated through SRNet helps it learn
to reconstruct pathorealism in the regions of interest such as white blood
cells (WBC) in peripheral blood smears or epithelial cells in histopathology of
cancerous biopsy tissues, which are experimentally demonstrated here.
Experiments performed for measuring signal distortion loss using peak signal to
noise ratio (pSNR) and structural similarity (SSIM) with variation of SR scale
factors, impact of rVTT adversarial losses, and impact on reporting using SR on
a commercially available artificial intelligence (AI) digital pathology system
substantiate our claims. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Richardson's solutions in the real- and complex-energy spectrum,
Abstract: The constant pairing Hamiltonian admits exact solutions worked out by
Richardson in the early Sixties. This exact solution of the pairing Hamiltonian
regained interest at the end of the Nineties. Discrete complex-energy states
were included in Richardson's solutions by Hasegawa et al. [1]. In this
contribution we reformulate the problem of determining the exact eigenenergies
of the pairing Hamiltonian when the continuum is included through the
single-particle level density. The solution with discrete complex-energy states
is recovered by analytic continuation of the equations to the complex-energy
plane. This formulation may be applied to loosely bound systems where
correlations with the continuum energy spectrum are important. Some details are
given to show how the many-body eigenenergy emerges as a sum of the
pair energies. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: A 588 Gbps LDPC Decoder Based on Finite-Alphabet Message Passing,
Abstract: An ultra-high throughput low-density parity check (LDPC) decoder with an
unrolled full-parallel architecture is proposed, which achieves the highest
decoding throughput compared to previously reported LDPC decoders in the
literature. The decoder benefits from a serial message-transfer approach
between the decoding stages to alleviate the well-known routing congestion
problem in parallel LDPC decoders. Furthermore, a finite-alphabet message
passing algorithm is employed to replace the variable node update rule of the
standard min-sum decoder with look-up tables, which are designed in a way that
maximizes the mutual information between decoding messages. The proposed
algorithm results in an architecture with reduced bit-width messages, leading
to a significantly higher decoding throughput and to a lower area as compared
to a min-sum decoder when serial message-transfer is used. The architecture is
placed and routed for the standard min-sum reference decoder and for the
proposed finite-alphabet decoder using a custom pseudo-hierarchical backend
design strategy to further alleviate routing congestions and to handle the
large design. Post-layout results show that the finite-alphabet decoder with
the serial message-transfer architecture achieves a throughput as large as 588
Gbps with an area of 16.2 mm$^2$ and dissipates an average power of 22.7 pJ per
decoded bit in a 28 nm FD-SOI library. Compared to the reference min-sum
decoder, this corresponds to 3.1 times smaller area and 2 times better energy
efficiency. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: On q-analogues of quadratic Euler sums,
Abstract: In this paper we define the generalized q-analogues of Euler sums and present
a new family of identities for q-analogues of Euler sums by using the method of
Jackson q-integral representations of series. We then apply it to obtain a
family of identities relating quadratic Euler sums to linear sums and
q-polylogarithms. Furthermore, we also use certain stuffle products to evaluate
several q-series with q-harmonic numbers. Some interesting new results and
illustrative examples are considered. Finally, we can obtain some explicit
relations for the classical Euler sums when q approaches 1. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: An iterative aggregation and disaggregation approach to the calculation of steady-state distributions of continuous processes,
Abstract: A mapping of the process on a continuous configuration space to the symbolic
representation of the motion on a discrete state space will be combined with an
iterative aggregation and disaggregation (IAD) procedure to obtain steady state
distributions of the process. The IAD speeds up the convergence to the unit
eigenvector, which is the steady state distribution, by forming smaller
aggregated matrices whose unit eigenvector solutions are used to refine
approximations of the steady state vector until convergence is reached. This
method works very efficiently and can be used together with distributed or
parallel computing methods to obtain high resolution images of the steady state
distribution of complex atomistic or energy landscape type problems. The method
is illustrated in two numerical examples. In the first example the transition
matrix is assumed to be known. The second example represents an overdamped
Brownian motion process subject to a dichotomously changing external potential. | [
0,
1,
0,
0,
0,
0
] | [
"Mathematics",
"Statistics",
"Physics"
] |
Title: Waveform and Spectrum Management for Unmanned Aerial Systems Beyond 2025,
Abstract: The application domains of civilian unmanned aerial systems (UASs) include
agriculture, exploration, transportation, and entertainment. The expected
growth of the UAS industry brings along new challenges: Unmanned aerial vehicle
(UAV) flight control signaling requires low throughput, but extremely high
reliability, whereas the data rate for payload data can be significant. This
paper develops UAV number projections and concludes that small and micro UAVs
will dominate the US airspace with accelerated growth between 2028 and 2032. We
analyze the orthogonal frequency division multiplexing (OFDM) waveform because
it can provide the much needed flexibility, spectral efficiency, and,
potentially, reliability and derive suitable OFDM waveform parameters as a
function of UAV flight characteristics. OFDM also lends itself to agile
spectrum access. Based on our UAV growth predictions, we conclude that dynamic
spectrum access is needed and discuss the applicability of spectrum sharing
techniques for future UAS communications. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Mining Public Opinion about Economic Issues: Twitter and the U.S. Presidential Election,
Abstract: Opinion polls have been the bridge between public opinion and politicians in
elections. However, developing surveys to disclose people's feedback with
respect to economic issues is limited, expensive, and time-consuming. In recent
years, social media such as Twitter has enabled people to share their opinions
regarding elections. Social media has provided a platform for collecting a
large amount of social media data. This paper proposes a computational public
opinion mining approach to explore the discussion of economic issues in social
media during an election. Current related studies use text mining methods
independently for election analysis and election prediction; this research
combines two text mining methods: sentiment analysis and topic modeling. The
proposed approach has effectively been deployed on millions of tweets to
analyze economic concerns of people during the 2012 US presidential election. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Finance",
"Statistics"
] |
Title: Sports stars: analyzing the performance of astronomers at visualization-based discovery,
Abstract: In this data-rich era of astronomy, there is a growing reliance on automated
techniques to discover new knowledge. The role of the astronomer may change
from being a discoverer to being a confirmer. But what do astronomers actually
look at when they distinguish between "sources" and "noise?" What are the
differences between novice and expert astronomers when it comes to visual-based
discovery? Can we identify elite talent or coach astronomers to maximize their
potential for discovery? By looking to the field of sports performance
analysis, we consider an established, domain-wide approach, where the expertise
of the viewer (i.e. a member of the coaching team) plays a crucial role in
identifying and determining the subtle features of gameplay that provide a
winning advantage. As an initial case study, we investigate whether the
SportsCode performance analysis software can be used to understand and document
how an experienced HI astronomer makes discoveries in spectral data cubes. We
find that the process of timeline-based coding can be applied to spectral cube
data by mapping spectral channels to frames within a movie. SportsCode provides
a range of easy to use methods for annotation, including feature-based codes
and labels, text annotations associated with codes, and image-based drawing.
The outputs, including instance movies that are uniquely associated with coded
events, provide the basis for a training program or team-based analysis that
could be used in unison with discipline specific analysis software. In this
coordinated approach to visualization and analysis, SportsCode can act as a
visual notebook, recording the insight and decisions in partnership with
established analysis methods. Alternatively, in situ annotation and coding of
features would be a valuable addition to existing and future visualisation and
analysis packages. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Dense 3D Facial Reconstruction from a Single Depth Image in Unconstrained Environment,
Abstract: With the increasing demands of applications in virtual reality such as 3D
films, virtual human-machine interactions and virtual agents, 3D human face
analysis is considered increasingly important as a
fundamental step for those virtual reality tasks. Due to information provided
by an additional dimension, 3D facial reconstruction enables aforementioned
tasks to be achieved with higher accuracy than those based on 2D facial
analysis. The denser the 3D facial model is, the more information it could
provide. However, most existing dense 3D facial reconstruction methods require
complicated processing and high system cost. To this end, this paper presents a
novel method that simplifies the process of dense 3D facial reconstruction by
employing only one frame of depth data obtained with an off-the-shelf RGB-D
sensor. The experiments showed competitive results with real world data. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Concentration phenomena for a fractional Schrödinger-Kirchhoff type equation,
Abstract: In this paper we deal with the multiplicity and concentration of positive
solutions for the following fractional Schrödinger-Kirchhoff type equation
\begin{equation*} M\left(\frac{1}{\varepsilon^{3-2s}}
\iint_{\mathbb{R}^{6}}\frac{|u(x)- u(y)|^{2}}{|x-y|^{3+2s}} dxdy +
\frac{1}{\varepsilon^{3}} \int_{\mathbb{R}^{3}} V(x)u^{2}
dx\right)[\varepsilon^{2s} (-\Delta)^{s}u+ V(x)u]= f(u) \, \mbox{in}
\mathbb{R}^{3} \end{equation*} where $\varepsilon>0$ is a small parameter,
$s\in (\frac{3}{4}, 1)$, $(-\Delta)^{s}$ is the fractional Laplacian, $M$ is a
Kirchhoff function, $V$ is a continuous positive potential and $f$ is a
superlinear continuous function with subcritical growth. By using penalization
techniques and Ljusternik-Schnirelmann theory, we investigate the relation
between the number of positive solutions with the topology of the set where the
potential attains its minimum. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Synchrotron radiation induced magnetization in magnetically-doped and pristine topological insulators,
Abstract: Quantum mechanics postulates that any measurement influences the state of the
investigated system. Here, by means of angle-, spin-, and time-resolved
photoemission experiments and ab initio calculations we demonstrate how
non-equal depopulation of the Dirac cone (DC) states with opposite momenta in
V-doped and pristine topological insulators (TIs) created by a photoexcitation
by linearly polarized synchrotron radiation (SR) is followed by the
hole-generated uncompensated spin accumulation and the SR-induced magnetization
via the spin-torque effect. We show that the photoexcitation of the DC is
asymmetric, that it varies with the photon energy, and that it practically does
not change during the relaxation. We find a relation between the
photoexcitation asymmetry, the generated spin accumulation and the induced spin
polarization of the DC and V 3d states. Experimentally the SR-generated
in-plane and out-of-plane magnetization is confirmed by the
$k_{\parallel}$-shift of the DC position and by the splitting of the states at
the Dirac point even above the Curie temperature. Theoretical predictions and
estimations of the measurable physical quantities substantiate the experimental
results. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Health Care Expenditures, Financial Stability, and Participation in the Supplemental Nutrition Assistance Program (SNAP),
Abstract: This paper examines the association between household healthcare expenses and
participation in the Supplemental Nutrition Assistance Program (SNAP) when
moderated by factors associated with financial stability of households. Using a
large longitudinal panel encompassing eight years, this study finds that an
inter-temporal increase in out-of-pocket medical expenses increased the
likelihood of household SNAP participation in the current period. Financially
stable households with precautionary financial assets to cover at least 6
months' worth of household expenses were significantly less likely to
participate in SNAP. The low income households who recently experienced an
increase in out of pocket medical expenses but had adequate precautionary
savings were less likely than similar households who did not have precautionary
savings to participate in SNAP. Implications for economists, policy makers, and
household finance professionals are discussed. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance"
] |
Title: Improved estimates for polynomial Roth type theorems in finite fields,
Abstract: We prove that, under certain conditions on the function pair $\varphi_1$ and
$\varphi_2$, the bilinear average $p^{-1}\sum_{y\in
\mathbb{F}_p}f_1(x+\varphi_1(y)) f_2(x+\varphi_2(y))$ along the curve $(\varphi_1,
\varphi_2)$ satisfies a certain decay estimate. As a consequence, Roth type
theorems hold in the setting of finite fields. In particular, if
$\varphi_1,\varphi_2\in \mathbb{F}_p[X]$ with $\varphi_1(0)=\varphi_2(0)=0$ are
linearly independent polynomials, then for any $A\subset \mathbb{F}_p,
|A|=\delta p$ with $\delta>c p^{-\frac{1}{12}}$, there are $\gtrsim
\delta^3p^2$ triplets $x,x+\varphi_1(y), x+\varphi_2(y)\in A$. This extends a
recent result of Bourgain and Chang who initiated this type of problems, and
strengthens the bound in a result of Peluse, who generalized Bourgain and
Chang's work. The proof uses discrete Fourier analysis and algebraic geometry. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Evaluating Overfit and Underfit in Models of Network Community Structure,
Abstract: A common data mining task on networks is community detection, which seeks an
unsupervised decomposition of a network into structural groups based on
statistical regularities in the network's connectivity. Although many methods
exist, the No Free Lunch theorem for community detection implies that each
makes some kind of tradeoff, and no algorithm can be optimal on all inputs.
Thus, different algorithms will over or underfit on different inputs, finding
more, fewer, or just different communities than is optimal, and evaluation
methods that use a metadata partition as a ground truth will produce misleading
conclusions about general accuracy. Here, we present a broad evaluation of over
and underfitting in community detection, comparing the behavior of 16
state-of-the-art community detection algorithms on a novel and structurally
diverse corpus of 406 real-world networks. We find that (i) algorithms vary
widely both in the number of communities they find and in their corresponding
composition, given the same input, (ii) algorithms can be clustered into
distinct high-level groups based on similarities of their outputs on real-world
networks, and (iii) these differences induce wide variation in accuracy on link
prediction and link description tasks. We introduce a new diagnostic for
evaluating overfitting and underfitting in practice, and use it to roughly
divide community detection methods into general and specialized learning
algorithms. Across methods and inputs, Bayesian techniques based on the
stochastic block model and a minimum description length approach to
regularization represent the best general learning approach, but can be
outperformed under specific circumstances. These results introduce both a
theoretically principled approach to evaluate over and underfitting in models
of network community structure and a realistic benchmark by which new methods
may be evaluated and compared. | [
1,
0,
0,
1,
1,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Named Entity Evolution Recognition on the Blogosphere,
Abstract: Advancements in technology and culture lead to changes in our language. These
changes create a gap between the language known by users and the language
stored in digital archives. This affects users' ability, first, to find
content and, second, to interpret it. In previous work we introduced our
approach for Named Entity Evolution Recognition~(NEER) in newspaper
collections. Lately, increasing efforts in Web preservation lead to increased
availability of Web archives covering longer time spans. However, language on
the Web is more dynamic than in traditional media and many of the basic
assumptions from the newspaper domain do not hold for Web data. In this paper
we discuss the limitations of existing methodology for NEER. We approach these
by adapting an existing NEER method to work on noisy data like the Web and the
Blogosphere in particular. We develop novel filters that reduce the noise and
make use of Semantic Web resources to obtain more information about terms. Our
evaluation shows the potentials of the proposed approach. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Truncated Cramér-von Mises test of normality,
Abstract: A new test of normality based on a standardised empirical process is
introduced in this article.
The first step is to introduce a Cramér-von Mises type statistic with
weights equal to the inverse of the standard normal density function supported
on a symmetric interval $[-a_n,a_n]$ depending on the sample size $n.$ The
sequence of end points $a_n$ tends to infinity, and is chosen so that the
statistic goes to infinity at the speed of $\ln \ln n.$ After subtracting the
mean, a suitable test statistic is obtained, with the same asymptotic law as
the well-known Shapiro-Wilk statistic. The performance of the new test is
described and compared with three other well-known tests of normality, namely,
Shapiro-Wilk, Anderson-Darling and that of del Barrio-Matrán, Cuesta
Albertos, and Rodríguez Rodríguez, by means of power calculations
under many alternative hypotheses. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
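The truncated statistic described in the abstract can be sketched in a few lines. This is a hedged illustration only: for simplicity it compares the raw sample directly to the standard normal (the paper standardises the empirical process first), and the midpoint empirical-CDF convention, the choice $a_n = 2$, and the sample sizes are assumptions of this sketch.

```python
import math
import random

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_cvm(sample, a_n):
    """Cramer-von Mises-type statistic with weight 1/phi, restricted to [-a_n, a_n]."""
    xs = sorted(sample)
    n = len(xs)
    stat = 0.0
    for i, x in enumerate(xs):
        if -a_n <= x <= a_n:
            fn = (i + 0.5) / n          # empirical cdf at x, midpoint convention
            stat += (fn - Phi(x)) ** 2 / phi(x)
    return stat / n

rng = random.Random(42)
normal_sample = [rng.gauss(0.0, 1.0) for _ in range(500)]
shifted_sample = [x + 1.5 for x in normal_sample]   # clearly non-standard-normal
```

On the standard normal sample the statistic stays small, while the shifted sample inflates it, since the weight $1/\phi$ amplifies discrepancies near the interval end points.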
Title: A boundary integral equation method for mode elimination and vibration confinement in thin plates with clamped points,
Abstract: We consider the bi-Laplacian eigenvalue problem for the modes of vibration of
a thin elastic plate with a discrete set of clamped points. A high-order
boundary integral equation method is developed for efficient numerical
determination of these modes in the presence of multiple localized defects for
a wide range of two-dimensional geometries. The defects result in
eigenfunctions with a weak singularity that is resolved by decomposing the
solution as a superposition of Green's functions plus a smooth regular part.
This method is applied to a variety of regular and irregular domains and two
key phenomena are observed. First, careful placement of clamping points can
entirely eliminate particular eigenvalues and suggests a strategy for
manipulating the vibrational characteristics of rigid bodies so that
undesirable frequencies are removed. Second, clamping of the plate can result
in partitioning of the domain so that vibrational modes are largely confined to
certain spatial regions. This numerical method gives a precision tool for
tuning the vibrational characteristics of thin elastic plates. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Computer Science"
] |
Title: On Optimal Generalizability in Parametric Learning,
Abstract: We consider the parametric learning problem, where the objective of the
learner is determined by a parametric loss function. Employing empirical risk
minimization, possibly with regularization, the inferred parameter vector will
be biased toward the training samples. Such bias is measured by the cross
validation procedure in practice where the data set is partitioned into a
training set used for training and a validation set, which is not used in
training and is left to measure the out-of-sample performance. A classical
cross validation strategy is the leave-one-out cross validation (LOOCV) where
one sample is left out for validation and training is done on the rest of the
samples that are presented to the learner, and this process is repeated on all
of the samples. LOOCV is rarely used in practice due to the high computational
complexity. In this paper, we first develop a computationally efficient
approximate LOOCV (ALOOCV) and provide theoretical guarantees for its
performance. Then we use ALOOCV to provide an optimization algorithm for
finding the regularizer in the empirical risk minimization framework. In our
numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as
well as our proposed framework for the optimization of the regularizer. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
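The paper's ALOOCV approximates exact leave-one-out cross validation. As a hedged sketch of the exact (expensive) procedure it approximates, the following fits a 1-D ridge model and refits $n$ times; the synthetic data and the candidate regularizer grid are hypothetical.

```python
import random

def ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge fit (no intercept): minimises sum (y - w x)^2 + lam w^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def loocv_error(xs, ys, lam):
    """Exact leave-one-out CV: refit n times, average held-out squared error."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        w = ridge_1d(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
        total += (ys[i] - w * xs[i]) ** 2
    return total / n

rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(60)]
ys = [2.0 * x + rng.gauss(0.0, 0.2) for x in xs]

# grid search for the regularizer by exact LOOCV (what ALOOCV would accelerate)
best_lam = min([0.0, 0.1, 1.0, 10.0], key=lambda lam: loocv_error(xs, ys, lam))
```

Each `loocv_error` call retrains $n$ times, which is exactly the cost the paper's approximation avoids.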
Title: A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse,
Abstract: This study addresses the problem of identifying the meaning of unknown words
or entities in a discourse with respect to the word embedding approaches used
in neural language models. We propose a method for on-the-fly construction and
exploitation of word embeddings in both the input and output layers of a neural
model by tracking contexts. This extends the dynamic entity representation used
in Kobayashi et al. (2016) and incorporates a copy mechanism proposed
independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we
construct a new task and dataset called Anonymized Language Modeling for
evaluating the ability to capture word meanings while reading. Experiments
conducted using our novel dataset show that the proposed variant of the RNN
language model outperformed the baseline model. Furthermore, the experiments
also demonstrate that dynamic updates of an output layer help a model predict
reappearing entities, whereas those of an input layer are effective to predict
words following reappearing entities. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
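A toy caricature of the on-the-fly embedding idea (not the paper's RNN model): an unknown token's vector is initialized from its current context and refined by a running average on re-encounters. The vectors, the averaging update, and the class name are all illustrative assumptions.

```python
def average(vectors):
    """Component-wise mean of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class DynamicVocab:
    """Toy on-the-fly embeddings: an unknown token gets a vector built from
    its context, then is refined by a running average on re-encounters."""

    def __init__(self, known):
        self.emb = dict(known)            # token -> embedding vector
        self.counts = {t: 1 for t in known}

    def observe(self, token, context_tokens):
        ctx = [self.emb[t] for t in context_tokens if t in self.emb]
        if not ctx:
            return                        # no known context to build from
        ctx_vec = average(ctx)
        if token not in self.emb:
            self.emb[token] = ctx_vec     # first encounter: adopt context vector
            self.counts[token] = 1
        else:
            c = self.counts[token]        # re-encounter: running average update
            self.emb[token] = [(c * a + b) / (c + 1)
                               for a, b in zip(self.emb[token], ctx_vec)]
            self.counts[token] = c + 1
```

For example, a new token seen between "cat" and "dog" starts at their midpoint and drifts toward whichever context it keeps reappearing in.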
Title: Extreme values of the Riemann zeta function on the 1-line,
Abstract: We prove that there are arbitrarily large values of $t$ such that
$|\zeta(1+it)| \geq e^{\gamma} (\log_2 t + \log_3 t) + \mathcal{O}(1)$. This
essentially matches the prediction for the optimal lower bound in a conjecture
of Granville and Soundararajan. Our proof uses a new variant of the "long
resonator" method. While earlier implementations of this method crucially
relied on a "sparsification" technique to control the mean-square of the
resonator function, in the present paper we exploit certain self-similarity
properties of a specially designed resonator function. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Robust two-qubit gates in a linear ion crystal using a frequency-modulated driving force,
Abstract: In an ion trap quantum computer, collective motional modes are used to
entangle two or more qubits in order to execute multi-qubit logical gates. Any
residual entanglement between the internal and motional states of the ions
results in loss of fidelity, especially when there are many spectator ions in
the crystal. We propose using a frequency-modulated (FM) driving force to
minimize such errors. In simulation, we obtained an optimized FM two-qubit gate
that can suppress errors to less than 0.01\% and is robust against frequency
drifts over $\pm$1 kHz. Experimentally, we have obtained a two-qubit gate
fidelity of $98.3(4)\%$, a state-of-the-art result for two-qubit gates with 5
ions. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Symbolic Versus Numerical Computation and Visualization of Parameter Regions for Multistationarity of Biological Networks,
Abstract: We investigate models of the mitogen-activated protein kinase (MAPK) network,
with the aim of determining where in parameter space there exist multiple
positive steady states. We build on recent progress which combines various
symbolic computation methods for mixed systems of equalities and inequalities.
We demonstrate that those techniques benefit tremendously from a newly
implemented graph theoretical symbolic preprocessing method. We compare
computation times and quality of results of numerical continuation methods with
our symbolic approach before and after the application of our preprocessing. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Quantitative Biology",
"Computer Science"
] |
Title: Normal forms of dispersive scalar Poisson brackets with two independent variables,
Abstract: We classify the dispersive Poisson brackets with one dependent variable and
two independent variables, with leading order of hydrodynamic type, up to Miura
transformations. We show that, in contrast to the case of a single independent
variable for which a well known triviality result exists, the Miura equivalence
classes are parametrised by an infinite number of constants, which we call
numerical invariants of the brackets. We obtain explicit formulas for the first
few numerical invariants. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Semiparametric Contextual Bandits,
Abstract: This paper studies semiparametric contextual bandits, a generalization of the
linear stochastic bandit problem where the reward for an action is modeled as a
linear function of known action features confounded by a non-linear
action-independent term. We design new algorithms that achieve
$\tilde{O}(d\sqrt{T})$ regret over $T$ rounds, when the linear function is
$d$-dimensional, which matches the best known bounds for the simpler
unconfounded case and improves on a recent result of Greenewald et al. (2017).
Via an empirical evaluation, we show that our algorithms outperform prior
approaches when there are non-linear confounding effects on the rewards.
Technically, our algorithms use a new reward estimator inspired by
doubly-robust approaches and our proofs require new concentration inequalities
for self-normalized martingales. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
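One structural point behind this setting can be shown in a short simulation (all parameters hypothetical, and this is not the paper's algorithm or its regret analysis): because the confounder is action-independent and the action is drawn independently of it, regressing rewards on features centered across the round's actions still recovers the linear parameter.

```python
import math
import random

rng = random.Random(7)
theta = [1.0, -2.0]                      # unknown linear reward parameter

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Simulated rounds: two actions with random features; nu_t is an arbitrary
# non-linear, action-independent confounder added to the observed reward.
rounds = []
for t in range(4000):
    feats = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(2)]
    nu = math.sin(0.1 * t) + rng.gauss(0.0, 0.1)
    a = rng.randrange(2)                 # action chosen independently of nu_t
    rounds.append((feats, a, dot(feats[a], theta) + nu))

def centered(feats, a):
    """Chosen action's feature minus the per-round mean over actions."""
    xbar = [(feats[0][i] + feats[1][i]) / 2.0 for i in range(2)]
    return [feats[a][i] - xbar[i] for i in range(2)]

# Least squares of rewards on centered features: centering makes the
# confounder (and the mean-feature term) uncorrelated with the regressor.
A = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
for feats, a, r in rounds:
    z = centered(feats, a)
    for i in range(2):
        b[i] += z[i] * r
        for j in range(2):
            A[i][j] += z[i] * z[j]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
theta_hat = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
             (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```

The recovered `theta_hat` is close to `theta` despite the confounder never being modeled; this is only one ingredient related in spirit to the doubly-robust estimator the abstract mentions.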
Title: Six operations formalism for generalized operads,
Abstract: This paper shows that generalizations of operads equipped with their
respective bar/cobar dualities are related by a six operations formalism
analogous to that of classical contexts in algebraic geometry. As a consequence
of our constructions, we prove intertwining theorems which govern derived
Koszul duality of push-forwards and pull-backs. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Learning linear structural equation models in polynomial time and sample complexity,
Abstract: The problem of learning structural equation models (SEMs) from data is a
fundamental problem in causal inference. We develop a new algorithm --- which
is computationally and statistically efficient and works in the
high-dimensional regime --- for learning linear SEMs from purely observational
data with arbitrary noise distribution. We consider three aspects of the
problem: identifiability, computational efficiency, and statistical efficiency.
We show that when data is generated from a linear SEM over $p$ nodes and
maximum degree $d$, our algorithm recovers the directed acyclic graph (DAG)
structure of the SEM under an identifiability condition that is more general
than those considered in the literature, and without faithfulness assumptions.
In the population setting, our algorithm recovers the DAG structure in
$\mathcal{O}(p(d^2 + \log p))$ operations. In the finite sample setting, if the
estimated precision matrix is sparse, our algorithm has a smoothed complexity
of $\widetilde{\mathcal{O}}(p^3 + pd^7)$, while if the estimated precision
matrix is dense, our algorithm has a smoothed complexity of
$\widetilde{\mathcal{O}}(p^5)$. For sub-Gaussian noise, we show that our
algorithm has a sample complexity of $\mathcal{O}(\frac{d^8}{\varepsilon^2}
\log (\frac{p}{\sqrt{\delta}}))$ to achieve $\varepsilon$ element-wise additive
error with respect to the true autoregression matrix with probability at least
$1 - \delta$, while for noise with bounded $(4m)$-th moment, with $m$ being a
positive integer, our algorithm has a sample complexity of
$\mathcal{O}(\frac{d^8}{\varepsilon^2} (\frac{p^2}{\delta})^{1/m})$. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
Title: On Bousfield's problem for solvable groups of finite Prüfer rank,
Abstract: For a group $G$ and $R=\mathbb Z,\mathbb Z/p,\mathbb Q$ we denote by $\hat
G_R$ the $R$-completion of $G.$ We study the map $H_n(G,K)\to H_n(\hat G_R,K),$
where $(R,K)=(\mathbb Z,\mathbb Z/p),(\mathbb Z/p,\mathbb Z/p),(\mathbb
Q,\mathbb Q).$ We prove that $H_2(G,K)\to H_2(\hat G_R,K)$ is an epimorphism
for a finitely generated solvable group $G$ of finite Prüfer rank. In
particular, Bousfield's $HK$-localisation of such groups coincides with the
$K$-completion for $K=\mathbb Z/p,\mathbb Q.$ Moreover, we prove that
$H_n(G,K)\to H_n(\hat G_R,K)$ is an epimorphism for any $n$ if $G$ is a
finitely presented group of the form $G=M\rtimes C,$ where $C$ is the infinite
cyclic group and $M$ is a $C$-module. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Approximate Analytical Solution of a Cancer Immunotherapy Model by the Application of Differential Transform and Adomian Decomposition Methods,
Abstract: Immunotherapy plays a major role in tumour treatment, in comparison with
other methods of dealing with cancer. The Kirschner-Panetta (KP) model of
cancer immunotherapy describes the interaction between tumour cells, effector
cells, and interleukin-2, which is clinically utilized as a medical treatment. The
model captures a rich picture of immune-tumour dynamics. In this paper,
approximate analytical solutions to the KP model are obtained by using the
differential transform and Adomian decomposition. The complicated nonlinearity
of the KP system causes the application of these two methods to require more
involved calculations. The approximate analytical solutions to the model are
compared with the results obtained by numerical fourth order Runge-Kutta
method. | [
0,
0,
0,
0,
1,
0
] | [
"Mathematics",
"Quantitative Biology"
] |
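The numerical reference in the comparison is classical fourth-order Runge-Kutta. A minimal generic integrator of that kind is sketched below; the KP right-hand side itself is omitted (its parameters are not given in the abstract), so the sketch is verified on the simple system $y' = -y$ instead.

```python
import math

def rk4(f, y0, t0, t1, steps):
    """Classical fourth-order Runge-Kutta for y' = f(t, y), with y a list."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# sanity check on y' = -y, whose exact solution is e^{-t}
approx = rk4(lambda t, y: [-y[0]], [1.0], 0.0, 1.0, 100)[0]
```

For the KP system, `f` would return the three coupled rates for tumour cells, effector cells and interleukin-2.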
Title: Spectrum of signless 1-Laplacian on simplicial complexes,
Abstract: We first develop a general framework for signless 1-Laplacian defined in
terms of the combinatorial structure of a simplicial complex. The structure of
the eigenvectors and the complex feature of eigenvalues are studied. The
Courant nodal domain theorem for partial differential equations is extended to
the signless 1-Laplacian on a complex. We also study the effects of a wedge sum
and a duplication of a motif on the spectrum of the signless 1-Laplacian, and
identify some of the combinatorial features of a simplicial complex that are
encoded in its spectrum. A special result is that the independence number and
clique covering number of a complex provide lower and upper bounds for the
multiplicity of the largest eigenvalue of signless 1-Laplacian, respectively,
which has no counterpart of $p$-Laplacian for any $p>1$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Inter-Area Oscillation Damping With Non-Synchronized Wide-Area Power System Stabilizer,
Abstract: One of the major issues in an interconnected power system is the low damping
of inter-area oscillations which significantly reduces the power transfer
capability. Advances in Wide-Area Measurement Systems (WAMS) make it possible
to use information from geographically distant locations to improve power
system dynamics and performance. A speed-deviation-based Wide-Area Power
System Stabilizer (WAPSS) is known to be effective in damping inter-area modes.
However, the involvement of wide-area signals gives rise to the problem of
time-delay, which may degrade the system performance. In general, time-stamped
synchronized signals from Phasor Data Concentrator (PDC) are used for WAPSS, in
which delays are introduced in both local and remote signals. One can instead opt
for feedback of only the remote signal from the PDC and use the local signal as it
becomes available, without time synchronization. This paper utilizes configurations
of time-matched synchronized and nonsynchronized feedback and provides the
guidelines to design the controller. The controllers are synthesized using
$H_\infty$ control with regional pole placement for ensuring adequate dynamic
performance. To show the effectiveness of the proposed approach, two power
system models have been used for the simulations. It is shown that the
controllers designed based on the nonsynchronized signals are more robust to
time-delay variations than the controllers using synchronized signals. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: NVIDIA Tensor Core Programmability, Performance & Precision,
Abstract: The NVIDIA Volta GPU microarchitecture introduces a specialized unit, called
"Tensor Core" that performs one matrix-multiply-and-accumulate on 4x4 matrices
per clock cycle. The NVIDIA Tesla V100 accelerator, featuring the Volta
microarchitecture, provides 640 Tensor Cores with a theoretical peak
performance of 125 Tflops/s in mixed precision. In this paper, we investigate
current approaches to program NVIDIA Tensor Cores, their performances and the
precision loss due to computation in mixed precision.
Currently, NVIDIA provides three different ways of programming
matrix-multiply-and-accumulate on Tensor Cores: the CUDA Warp Matrix Multiply
Accumulate (WMMA) API, CUTLASS, a templated library based on WMMA, and cuBLAS
GEMM. After experimenting with different approaches, we found that NVIDIA
Tensor Cores can deliver up to 83 Tflops/s in mixed precision on a Tesla V100
GPU, seven and three times the performance in single and half precision
respectively. A WMMA implementation of batched GEMM reaches a performance of 4
Tflops/s. While precision loss due to matrix multiplication with half precision
input might be critical in many HPC applications, it can be considerably
reduced at the cost of increased computation. Our results indicate that HPC
applications using matrix multiplications can strongly benefit from the use of
NVIDIA Tensor Cores. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A spectroscopic survey of Orion KL between 41.5 and 50 GHz,
Abstract: Orion KL is one of the most frequently observed sources in the Galaxy, and
the site where many molecular species have been discovered for the first time.
With the availability of powerful wideband backends, it is nowadays possible to
complete spectral surveys in the entire mm-range to obtain a spectroscopically
unbiased chemical picture of the region. In this paper we present a sensitive
spectral survey of Orion KL, made with one of the 34m antennas of the Madrid
Deep Space Communications Complex in Robledo de Chavela, Spain. The spectral
range surveyed is from 41.5 to 50 GHz, with a frequency spacing of 180 kHz
(equivalent to about 1.2 km/s, depending on the exact frequency). The rms
achieved ranges from 8 to 12 mK. The spectrum is dominated by the J=1-0 SiO
maser lines and by radio recombination lines (RRLs), which were detected up to
Delta_n=11. Above a 3-sigma level, we identified 66 RRLs and 161 molecular
lines corresponding to 39 isotopologues from 20 molecules; a total of 18 lines
remain unidentified, two of them above a 5-sigma level. Results of radiative
modelling of the detected molecular lines (excluding masers) are presented. At
this frequency range, this is the most sensitive survey and also the one with
the widest band. Although some complex molecules like CH_3CH_2CN and CH_2CHCN
arise from the hot core, most of the detected molecules originate from the low
temperature components in Orion KL. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Restricted Boltzmann Machines for Robust and Fast Latent Truth Discovery,
Abstract: We address the problem of latent truth discovery, LTD for short, where the
goal is to discover the underlying true values of entity attributes in the
presence of noisy, conflicting or incomplete information. Despite a multitude
of algorithms to address the LTD problem that can be found in literature, only
little is known about their overall performance with respect to effectiveness
(in terms of truth discovery capabilities), efficiency and robustness. A
practical LTD approach should satisfy all these characteristics so that it can
be applied to heterogeneous datasets of varying quality and degrees of
cleanliness.
We propose a novel algorithm for LTD that satisfies the above requirements.
The proposed model is based on Restricted Boltzmann Machines, thus coined
LTD-RBM. In extensive experiments on various heterogeneous and publicly
available datasets, LTD-RBM is superior to state-of-the-art LTD techniques in
terms of an overall consideration of effectiveness, efficiency and robustness. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
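As a hedged sketch of the model family only (the LTD-RBM truth-discovery formulation itself is not reproduced here), a minimal Bernoulli RBM trained with one-step contrastive divergence looks like this; the layer sizes, learning rate, and mean-field reconstruction are illustrative choices.

```python
import math
import random

class TinyRBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_vis, n_hid, seed=0):
        self.rng = random.Random(seed)
        self.W = [[self.rng.gauss(0.0, 0.1) for _ in range(n_hid)]
                  for _ in range(n_vis)]
        self.b = [0.0] * n_vis            # visible biases
        self.c = [0.0] * n_hid            # hidden biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def hidden_probs(self, v):
        return [self._sigmoid(self.c[j] + sum(v[i] * self.W[i][j]
                for i in range(len(v)))) for j in range(len(self.c))]

    def visible_probs(self, h):
        return [self._sigmoid(self.b[i] + sum(h[j] * self.W[i][j]
                for j in range(len(h)))) for i in range(len(self.b))]

    def cd1(self, v0, lr=0.1):
        """One CD-1 update on a single visible vector; returns reconstruction error."""
        ph0 = self.hidden_probs(v0)
        h0 = [1.0 if self.rng.random() < p else 0.0 for p in ph0]
        v1 = self.visible_probs(h0)       # mean-field reconstruction
        ph1 = self.hidden_probs(v1)
        for i in range(len(v0)):
            for j in range(len(ph0)):
                self.W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(len(v0)):
            self.b[i] += lr * (v0[i] - v1[i])
        for j in range(len(ph0)):
            self.c[j] += lr * (ph0[j] - ph1[j])
        return sum((a - b) ** 2 for a, b in zip(v0, v1))

rbm = TinyRBM(6, 3)
pattern = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
errors = [rbm.cd1(pattern) for _ in range(200)]   # error shrinks as the pattern is learned
```

In an LTD setting, the visible units would encode source claims about an entity and the hidden units the latent truth signal; that mapping is a paper-specific design not shown here.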
Title: Above-threshold ionization (ATI) in multicenter molecules: the role of the initial state,
Abstract: A possible route to extract electronic and nuclear dynamics from molecular
targets with attosecond temporal and nanometer spatial resolution is to employ
recolliding electrons as 'probes'. The recollision process in molecules is,
however, very challenging to treat using ab initio approaches. Even for
the simplest diatomic systems, such as H$_2$, today's computational
capabilities are not enough to give a complete description of the electron and
nuclear dynamics initiated by a strong laser field. As a consequence,
approximate qualitative descriptions are called to play an important role. In
this contribution we extend the work presented in N. Suárez et al.,
Phys. Rev. A 95, 033415 (2017), to three-center molecular targets.
Additionally, we incorporate a more accurate description of the molecular
ground state, employing information extracted from quantum chemistry software
packages. This step forward allows us to include, in a detailed way, both the
molecular symmetries and nodes present in the high-occupied molecular orbital.
We are able to, on the one hand, keep our formulation as analytical as in the
case of diatomics, and, on the other hand, to still give a complete description
of the underlying physics behind the above-threshold ionization process. The
application of our approach to complex multicenter targets, with more than
three centers, appears to be straightforward. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Generic Singularities of 3D Piecewise Smooth Dynamical Systems,
Abstract: The aim of this paper is to provide a discussion on current directions of
research involving typical singularities of 3D nonsmooth vector fields. A brief
survey of known results is presented. The main purpose of this work is to
describe the dynamical features of a fold-fold singularity in its most basic
form and to give a complete and detailed proof of its local structural
stability (or instability). In addition, classes of all topological types of a
fold-fold singularity are intrinsically characterized. The proof firstly follows
lines laid out by Colombo, García, Jeffrey, Teixeira and others, and secondly
offers a rigorous mathematical treatment under clear and crisp assumptions and
solid arguments. We highlight that the geometric-topological methods employed
lead to a complete mathematical understanding of the dynamics around a
T-singularity. This approach lends itself to applications in generic bifurcation
theory. It is worth noting that this subject is still poorly understood in
higher dimensions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: A Big Data Analysis Framework Using Apache Spark and Deep Learning,
Abstract: With the spreading prevalence of Big Data, many advances have recently been
made in this field. Frameworks such as Apache Hadoop and Apache Spark have
gained a lot of traction over the past decades and have become massively
popular, especially in industries. It is becoming increasingly evident that
effective big data analysis is key to solving artificial intelligence problems.
Thus, a multi-algorithm library was implemented in the Spark framework, called
MLlib. While this library supports multiple machine learning algorithms, there
is still scope to use the Spark setup efficiently for highly time-intensive and
computationally expensive procedures like deep learning. In this paper, we
propose a novel framework that combines the distributive computational
abilities of Apache Spark and the advanced machine learning architecture of a
deep multi-layer perceptron (MLP), using the popular concept of Cascade
Learning. We conduct empirical analysis of our framework on two real world
datasets. The results are encouraging and corroborate our proposed framework,
in turn proving that it is an improvement over traditional big data analysis
methods that use either Spark or deep learning as individual elements. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Evolution of an eroding cylinder in single and lattice arrangements,
Abstract: The coupled evolution of an eroding cylinder immersed in a fluid within the
subcritical Reynolds range is explored with scale resolving simulations.
Erosion of the cylinder is driven by fluid shear stress. Kármán vortex
shedding features in the wake and these oscillations occur on a significantly
smaller time scale compared to the slowly eroding cylinder boundary. Temporal
and spatial averaging across the cylinder span allows mean wall statistics such
as wall shear to be evaluated; with geometry evolving in 2-D and the flow field
simulated in 3-D. The cylinder develops into a rounded triangular body with
uniform wall shear stress which is in agreement with existing theory and
experiments. We introduce a node shuffle algorithm to reposition nodes around
the cylinder boundary with a uniform distribution such that the mesh quality is
preserved under high boundary deformation. A cylinder is then modelled within
an infinite array of other cylinders by simulating a repeating unit cell and
their profile evolution is studied. A similar terminal form is discovered for
large cylinder spacings with consistent flow conditions and an intermediate
profile is found with a closely packed lattice before reaching the common
terminal form. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
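The node shuffle step described in the abstract can be pictured as redistributing boundary nodes uniformly by arc length; the paper's exact algorithm is not given here, so the following is an assumed arc-length resampling sketch for a closed polygonal boundary.

```python
import math

def resample_closed_curve(points, n):
    """Place n nodes at equal arc-length spacing along a closed polygon,
    in the spirit of a node-shuffle step that preserves mesh quality."""
    m = len(points)
    # segment lengths, including the closing edge back to points[0]
    segs = []
    for i in range(m):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % m]
        segs.append(math.hypot(x1 - x0, y1 - y0))
    total = sum(segs)
    step = total / n
    out, i, acc = [], 0, 0.0
    for k in range(n):
        target = k * step                 # desired arc length of node k
        while acc + segs[i] < target:     # walk to the containing segment
            acc += segs[i]
            i += 1
        frac = (target - acc) / segs[i]   # linear interpolation on the segment
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % m]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out
```

Resampling the unit square into 8 nodes, for example, yields nodes exactly 0.5 apart along the perimeter regardless of how the original vertices were distributed.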
Title: FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference,
Abstract: A classical problem in causal inference is that of matching, where treatment
units need to be matched to control units. Some of the main challenges in
developing matching methods arise from the tension among (i) inclusion of as
many covariates as possible in defining the matched groups, (ii) having matched
groups with enough treated and control units for a valid estimate of Average
Treatment Effect (ATE) in each group, and (iii) computing the matched pairs
efficiently for large datasets. In this paper we propose a fast method for
approximate and exact matching in causal analysis called FLAME (Fast
Large-scale Almost Matching Exactly). We define an optimization objective for
match quality, which gives preferences to matching on covariates that can be
useful for predicting the outcome while encouraging as many matches as
possible. FLAME aims to optimize our match quality measure, leveraging
techniques that are natural for query processing in the area of database
management. We provide two implementations of FLAME using SQL queries and
bit-vector techniques. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
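The exact-matching core behind this kind of method can be sketched with hash-based grouping. This is only the grouping step: FLAME additionally drops covariates iteratively against its match-quality objective and is implemented via SQL queries and bit-vector techniques, none of which appears here; the toy unit records are hypothetical.

```python
from collections import defaultdict

def exact_match_groups(units, covariates):
    """Group units by their values on `covariates`; keep only groups that
    contain both treated and control units, and return the within-group
    difference of mean outcomes (a per-group treatment-effect estimate)."""
    groups = defaultdict(list)
    for u in units:
        key = tuple(u["x"][c] for c in covariates)
        groups[key].append(u)
    effects = {}
    for key, group in groups.items():
        treated = [u["y"] for u in group if u["t"] == 1]
        control = [u["y"] for u in group if u["t"] == 0]
        if treated and control:            # valid group: both arms present
            effects[key] = (sum(treated) / len(treated)
                            - sum(control) / len(control))
    return effects

# Hypothetical toy units: treatment flag t, outcome y, covariate dict x.
units = [
    {"t": 1, "y": 3.0, "x": {"age": 1, "sex": 0}},
    {"t": 0, "y": 1.0, "x": {"age": 1, "sex": 0}},
    {"t": 1, "y": 5.0, "x": {"age": 2, "sex": 1}},
]
effects = exact_match_groups(units, ["age", "sex"])
```

Only the first two units match exactly and form a valid group; a FLAME-style loop would then try dropping covariates to match the remaining unmatched units while penalizing the loss of predictive covariates.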
Title: Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications,
Abstract: We propose a stochastic extension of the primal-dual hybrid gradient
algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems
that are separable in the dual variable. The analysis is carried out for
general convex-concave saddle point problems and problems that are either
partially smooth / strongly convex or fully smooth / strongly convex. We
perform the analysis for arbitrary samplings of dual variables, and obtain
known deterministic results as a special case. Several variants of our
stochastic method significantly outperform the deterministic variant on a
variety of imaging tasks. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Multiscale Hierarchical Convolutional Networks,
Abstract: Deep neural network algorithms are difficult to analyze because they lack
structure that allows one to understand the properties of underlying transforms and
invariants. Multiscale hierarchical convolutional networks are structured deep
convolutional networks where layers are indexed by progressively higher
dimensional attributes, which are learned from training data. Each new layer is
computed with multidimensional convolutions along spatial and attribute
variables. We introduce an efficient implementation of such networks where the
dimensionality is progressively reduced by averaging intermediate layers along
attribute indices. Hierarchical networks are tested on CIFAR image databases
where they obtain precisions comparable to state-of-the-art networks, with far
fewer parameters. We study some properties of the attributes learned from these
databases. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Caveats for information bottleneck in deterministic scenarios,
Abstract: Information bottleneck (IB) is a method for extracting information from one
random variable $X$ that is relevant for predicting another random variable
$Y$. To do so, IB identifies an intermediate "bottleneck" variable $T$ that has
low mutual information $I(X;T)$ and high mutual information $I(Y;T)$. The "IB
curve" characterizes the set of bottleneck variables that achieve maximal
$I(Y;T)$ for a given $I(X;T)$, and is typically explored by maximizing the "IB
Lagrangian", $I(Y;T) - \beta I(X;T)$. In some cases, $Y$ is a deterministic
function of $X$, including many classification problems in supervised learning
where the output class $Y$ is a deterministic function of the input $X$. We
demonstrate three caveats when using IB in any situation where $Y$ is a
deterministic function of $X$: (1) the IB curve cannot be recovered by
maximizing the IB Lagrangian for different values of $\beta$; (2) there are
"uninteresting" trivial solutions at all points of the IB curve; and (3) for
multi-layer classifiers that achieve low prediction error, different layers
cannot exhibit a strict trade-off between compression and prediction, contrary
to a recent proposal. We also show that when $Y$ is a small perturbation away
from being a deterministic function of $X$, these three caveats arise in an
approximate way. To address problem (1), we propose a functional that, unlike
the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the
three caveats on the MNIST dataset. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
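The quantities in the abstract are straightforward to evaluate for discrete variables. A small helper for mutual information in bits and the IB Lagrangian $I(Y;T) - \beta I(X;T)$; supplying the joint distributions directly as dicts is a simplification of this sketch.

```python
import math

def mutual_info(pxy):
    """I(X;Y) in bits for a discrete joint pmf given as a dict (x, y) -> prob."""
    px, py = {}, {}
    for (x, y), p in pxy.items():          # accumulate the two marginals
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0.0)

def ib_lagrangian(pxt, pty, beta):
    """The IB Lagrangian I(Y;T) - beta * I(X;T), given joints p(x,t) and p(t,y)."""
    return mutual_info(pty) - beta * mutual_info(pxt)
```

When $T$ copies a fair bit $X = Y$ perfectly, as in the deterministic setting the abstract analyses, both mutual informations equal 1 bit and the Lagrangian degenerates to $1 - \beta$, illustrating why sweeping $\beta$ cannot trace out the IB curve there.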
Title: Monte Carlo Simulation of Charge Transport in Graphene (Simulazione Monte Carlo per il trasporto di cariche nel grafene),
Abstract: Simulations of charge transport in graphene are presented by implementing a
recent method published in the paper: V. Romano, A. Majorana, M. Coco, "DSMC
method consistent with the Pauli exclusion principle and comparison with
deterministic solutions for charge transport in graphene", Journal of
Computational Physics 302 (2015) 267-284. After an overview of the most
important aspects of the semiclassical transport model for the dynamics of
electrons in monolayer graphene, a comparison of computational time between the
MATLAB and Fortran implementations of the algorithm is made. The case of
graphene on a substrate is then studied, and original results are produced by
introducing models for the distribution of distances between graphene's atoms
and the impurities. Finally, simulations are carried out for different kinds of
substrates. | [
1,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Embedded-Graph Theory,
Abstract: In this paper, we propose a new type of graph, denoted as "embedded-graph",
and its theory, which employs a distributed representation to describe the
relations on the graph edges. Embedded-graphs can express linguistic and
complicated relations, which cannot be expressed by the existing edge-graphs or
weighted-graphs. We introduce the mathematical definition of embedded-graph,
translation, edge distance, and graph similarity. We can transform an
embedded-graph into a weighted-graph and a weighted-graph into an edge-graph by
the translation method and by threshold calculation, respectively. The edge
distance of an embedded-graph is a distance based on the components of a target
vector, and it is calculated through cosine similarity with the target vector.
The graph similarity is obtained considering the relations with linguistic
complexity. In addition, we provide some examples and data structures for
embedded-graphs in this paper. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
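A minimal, hypothetical sketch of two operations mentioned in the abstract above (the edge distance via cosine similarity against a target vector, and the threshold translation of a weighted-graph into an edge-graph); the data layout and names are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def edge_distance(edge_vec, target_vec):
    # Hypothetical sketch: take the edge distance of an embedded edge as
    # 1 - cosine similarity between its embedding vector and a target vector.
    cos = np.dot(edge_vec, target_vec) / (
        np.linalg.norm(edge_vec) * np.linalg.norm(target_vec))
    return 1.0 - cos

def to_edge_graph(weighted_edges, threshold):
    # Translate a weighted-graph into a plain edge-graph by thresholding.
    return [(u, v) for (u, v, w) in weighted_edges if w >= threshold]
```

Identical directions give distance 0; thresholding keeps only edges at or above the cutoff.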
Title: An Equation-By-Equation Method for Solving the Multidimensional Moment Constrained Maximum Entropy Problem,
Abstract: An equation-by-equation (EBE) method is proposed to solve a system of
nonlinear equations arising from the moment constrained maximum entropy problem
of multidimensional variables. The design of the EBE method combines ideas from
homotopy continuation and Newton's iterative methods. Theoretically, we
establish the local convergence under appropriate conditions and show that the
proposed method, geometrically, finds the solution by searching along the
surface corresponding to one component of the nonlinear problem. We will
demonstrate the robustness of the method on various numerical examples,
including: (1) A six-moment one-dimensional entropy problem with an explicit
solution that contains components of order $10^0-10^3$ in magnitude; (2)
Four-moment multidimensional entropy problems with explicit solutions where the
resulting systems to be solved range from 70 to 310 equations; (3) Four- to
eight-moment two-dimensional entropy problems, whose solutions correspond
to the densities of the two leading EOFs of the wind stress-driven large-scale
oceanic model. In this case, we find that the EBE method is more accurate
compared to the classical Newton's method, the MATLAB generic solver, and the
previously developed BFGS-based method, which was also tested on this problem.
(4) Four-moment constrained entropy problems of up to five dimensions, whose
solutions correspond to multidimensional densities of the components of the
solutions of the Kuramoto-Sivashinsky equation. For the higher dimensional
cases of this example, the EBE method is superior because it automatically
selects a subset of the prescribed moment constraints from which the maximum
entropy solution can be estimated within the desired tolerance. This selection
feature is particularly important since the moment constrained maximum entropy
problems do not necessarily have solutions in general. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Statistics"
] |
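The moment-matching system in the abstract above can also be attacked with plain Newton's iteration on the convex dual; a minimal 1-D sketch, discretized on a grid (this is the classical baseline the EBE method is compared against, not the EBE method itself; names are assumptions):

```python
import numpy as np

def maxent_newton(mu, xs, iters=50):
    # Newton on the dual of the 1-D moment-constrained maximum entropy
    # problem, with density model p(x) ~ exp(sum_k lam[k] * x**k), k=1..K.
    # Gradient of the dual is (current moments - target moments); the
    # Hessian is the moment covariance under the current density.
    K = len(mu)
    Phi = np.vstack([xs**(k + 1) for k in range(K)])   # K x N feature grid
    lam = np.zeros(K)
    for _ in range(iters):
        w = np.exp(lam @ Phi)
        w /= w.sum()                                   # discretized density
        m = Phi @ w                                    # current moments
        H = (Phi * w) @ Phi.T - np.outer(m, m)         # moment covariance
        lam -= np.linalg.solve(H, m - mu)
    return lam
```

With target moments generated from a known exponential-family density on the same grid, the iteration recovers the generating parameters.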
Title: Exploring the Space of Black-box Attacks on Deep Neural Networks,
Abstract: Existing black-box attacks on deep neural networks (DNNs) so far have largely
focused on transferability, where an adversarial instance generated for a
locally trained model can "transfer" to attack other learning models. In this
paper, we propose novel Gradient Estimation black-box attacks for adversaries
with query access to the target model's class probabilities, which do not rely
on transferability. We also propose strategies to decouple the number of
queries required to generate each adversarial sample from the dimensionality of
the input. An iterative variant of our attack achieves close to 100%
adversarial success rates for both targeted and untargeted attacks on DNNs. We
carry out extensive experiments for a thorough comparative evaluation of
black-box attacks and show that the proposed Gradient Estimation attacks
outperform all transferability based black-box attacks we tested on both MNIST
and CIFAR-10 datasets, achieving adversarial success rates similar to well
known, state-of-the-art white-box attacks. We also apply the Gradient
Estimation attacks successfully against a real-world Content Moderation
classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks
against state-of-the-art defenses. We show that the Gradient Estimation attacks
are very effective even against these defenses. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
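A minimal sketch of query-based gradient estimation in the spirit of the abstract above (two-sided finite differences on a scalar loss computed from the target model's class probabilities; the paper's query-reduction strategies are not reproduced, and names are illustrative):

```python
import numpy as np

def estimate_gradient(loss, x, delta=1e-4):
    # Finite-difference gradient estimation using only query access to the
    # model's output (no backprop): 2*d queries for a d-dimensional input.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        grad[i] = (loss(x + e) - loss(x - e)) / (2 * delta)
    return grad

def fgsm_like_step(loss, x, eps):
    # One untargeted perturbation step in the sign direction of the
    # estimated gradient, as in FGSM but without white-box gradients.
    return x + eps * np.sign(estimate_gradient(loss, x))
```

On a quadratic loss the two-sided difference is exact up to rounding, which makes the sketch easy to check.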
Title: Nonvanishing of central $L$-values of Maass forms,
Abstract: With the method of moments and the mollification method, we study the central
$L$-values of GL(2) Maass forms of weight $0$ and level $1$ and establish a
positive-proportional nonvanishing result of such values in the aspect of large
spectral parameter in short intervals, which is qualitatively optimal in view
of Weyl's law. As an application of this result and a formula of Katok--Sarnak,
we give a nonvanishing result on the first Fourier coefficients of Maass forms
of weight $\frac{1}{2}$ and level $4$ in the Kohnen plus space. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Universally Optimal Multistage Accelerated Stochastic Gradient Method,
Abstract: We study the problem of minimizing a strongly convex and smooth function when
we have noisy estimates of its gradient. We propose a novel multistage
accelerated algorithm that is universally optimal in the sense that it achieves
the optimal rate both in the deterministic and stochastic case and operates
without knowledge of noise characteristics. The algorithm consists of stages
that use a stochastic version of Nesterov's accelerated algorithm with a
specific restart and parameters selected to achieve the fastest reduction in
the bias-variance terms in the convergence rate bounds. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
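A heavily simplified sketch of the multistage idea from the abstract above (restarted Nesterov iterations with a per-stage step-size reduction; the stage lengths and parameters here are illustrative placeholders, not the paper's optimal schedule derived from bias-variance bounds):

```python
import numpy as np

def multistage_agd(grad, x0, stages=3, iters=100, lr0=0.1):
    # Each stage runs Nesterov's accelerated method with restarted
    # momentum and a geometrically shrunk step size; grad may be a
    # noisy gradient oracle in the stochastic setting.
    x = x0.copy()
    for s in range(stages):
        lr = lr0 / (2 ** s)
        y, x_prev = x.copy(), x.copy()      # restart momentum each stage
        for k in range(iters):
            x_next = y - lr * grad(y)
            y = x_next + (k / (k + 3)) * (x_next - x_prev)
            x_prev = x_next
        x = x_prev
    return x
```

On a deterministic strongly convex quadratic the iterates contract to the minimizer across stages.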
Title: Radio Observation of Venus at Meter Wavelengths using the GMRT,
Abstract: The Venusian surface has been studied by measuring radar reflections and
thermal radio emission over a wide spectral region of several centimeters to
meter wavelengths from Earth-based as well as orbiter platforms. The
radiometric observations, in the decimeter (dcm) wavelength regime showed a
decreasing trend in the observed brightness temperature (Tb) with increasing
wavelength. The thermal emission models available at present have not been able
to explain the radiometric observations at longer wavelength (dcm) to a
satisfactory level. This paper reports the first interferometric imaging
observations of Venus below 620 MHz. They were carried out at 606, 332.9 and
239.9 MHz using the Giant Meterwave Radio Telescope (GMRT). The Tb values
derived at the respective frequencies are 526 K, 409 K and < 426 K, with errors
of ~7%, which are generally consistent with the reported Tb values at 608 MHz
and 430 MHz by previous investigators, but are much lower than those derived
from high-frequency observations at 1.38-22.46 GHz using the VLA. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Vision-based Autonomous Landing in Catastrophe-Struck Environments,
Abstract: Unmanned Aerial Vehicles (UAVs) equipped with bioradars are a life-saving
technology that can enable identification of survivors under collapsed
buildings in the aftermath of natural disasters such as earthquakes or gas
explosions. However, these UAVs have to be able to autonomously land on debris
piles in order to accurately locate the survivors. This problem is extremely
challenging as the structure of these debris piles is often unknown and no
prior knowledge can be leveraged. In this work, we propose a computationally
efficient system that is able to reliably identify safe landing sites and
autonomously perform the landing maneuver. Specifically, our algorithm computes
costmaps based on several hazard factors including terrain flatness, steepness,
depth accuracy and energy consumption information. We first estimate dense
candidate landing sites from the resulting costmap and then employ clustering
to group neighboring sites into a safe landing region. Finally, a minimum-jerk
trajectory is computed for landing considering the surrounding obstacles and
the UAV dynamics. We demonstrate the efficacy of our system using experiments
from a city scale hyperrealistic simulation environment and in real-world
scenarios with collapsed buildings. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Data-Driven Sparse Sensor Placement for Reconstruction,
Abstract: Optimal sensor placement is a central challenge in the design, prediction,
estimation, and control of high-dimensional systems. High-dimensional states
can often leverage a latent low-dimensional representation, and this inherent
compressibility enables sparse sensing. This article explores optimized sensor
placement for signal reconstruction based on a tailored library of features
extracted from training data. Sparse point sensors are discovered using the
singular value decomposition and QR pivoting, which are two ubiquitous matrix
computations that underpin modern linear dimensionality reduction. Sparse
sensing in a tailored basis is contrasted with compressed sensing, a universal
signal recovery method in which an unknown signal is reconstructed via a sparse
representation in a universal basis. Although compressed sensing can recover a
wider class of signals, we demonstrate the benefits of exploiting known
patterns in data with optimized sensing. In particular, drastic reductions in
the required number of sensors and improved reconstruction are observed in
examples ranging from facial images to fluid vorticity fields. Principled
sensor placement may be critically enabling when sensors are costly and
provides faster state estimation for low-latency, high-bandwidth control.
MATLAB code is provided for all examples. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
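The SVD/QR-pivoting pipeline described above can be sketched in a few lines (a minimal illustration in the spirit of the abstract; variable names and the least-squares reconstruction step are assumptions, and Python stands in for the provided MATLAB code):

```python
import numpy as np
from scipy.linalg import qr

def place_sensors(X, r):
    # Learn a tailored basis: leading r left singular vectors of the
    # training data X (rows = state dimensions, cols = snapshots), then
    # pick sensor rows by column-pivoted QR of the transposed basis.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Psi = U[:, :r]
    _, _, piv = qr(Psi.T, pivoting=True)
    return np.sort(piv[:r]), Psi

def reconstruct(Psi, sensors, y):
    # Least-squares estimate of the full state from r point measurements y.
    a, *_ = np.linalg.lstsq(Psi[sensors, :], y, rcond=None)
    return Psi @ a
```

For exactly low-rank data, measurements at the r pivoted sensor locations recover each snapshot to machine precision.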
Title: Fast trimers in one-dimensional extended Fermi-Hubbard model,
Abstract: We consider a one-dimensional two component extended Fermi-Hubbard model with
nearest neighbor interactions and mass imbalance between the two species. We
study the stability of trimers, various observables for detecting them, and
expansion dynamics. We generalize the definition of the trimer gap to include
the formation of different types of clusters originating from nearest neighbor
interactions. Expansion dynamics reveal rapidly propagating trimers, with
speeds exceeding doublon propagation in the strongly interacting regime. We present
a simple model for understanding this unique feature of the movement of the
trimers, and we discuss the potential for experimental realization. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A Geometric Approach for Real-time Monitoring of Dynamic Large Scale Graphs: AS-level graphs illustrated,
Abstract: The monitoring of large dynamic networks is a major chal- lenge for a wide
range of application. The complexity stems from properties of the underlying
graphs, in which slight local changes can lead to sizable variations of global
prop- erties, e.g., under certain conditions, a single link cut that may be
overlooked during monitoring can result in splitting the graph into two
disconnected components. Moreover, it is often difficult to determine whether a
change will propagate globally or remain local. Traditional graph theory
measure such as the centrality or the assortativity of the graph are not
satisfying to characterize global properties of the graph. In this paper, we
tackle the problem of real-time monitoring of dynamic large scale graphs by
developing a geometric approach that leverages notions of geometric curvature
and recent development in graph embeddings using Ollivier-Ricci curvature [47].
We illustrate the use of our method by consid- ering the practical case of
monitoring dynamic variations of global Internet using topology changes
information provided by combining several BGP feeds. In particular, we use our
method to detect major events and changes via the geometry of the embedding of
the graph. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Cluster decomposition of full configuration interaction wave functions: a tool for chemical interpretation of systems with strong correlation,
Abstract: Approximate full configuration interaction (FCI) calculations have recently
become tractable for systems of unforeseen size thanks to stochastic and
adaptive approximations to the exponentially scaling FCI problem. The result of
an FCI calculation is a weighted set of electronic configurations, which can
also be expressed in terms of excitations from a reference configuration. The
excitation amplitudes contain information on the complexity of the electronic
wave function, but this information is contaminated by contributions from
disconnected excitations, i.e. those excitations that are just products of
independent lower-level excitations. The unwanted contributions can be removed
via a cluster decomposition procedure, making it possible to examine the
importance of connected excitations in complicated multireference molecules
which are outside the reach of conventional algorithms. We present an
implementation of the cluster decomposition analysis and apply it to both true
FCI wave functions, as well as wave functions generated from the adaptive
sampling CI (ASCI) algorithm. The cluster decomposition is useful for
interpreting calculations in chemical studies, as a diagnostic for the
convergence of various excitation manifolds, as well as a guidepost for
polynomially scaling electronic structure models. Applications are presented
for (i) the double dissociation of water, (ii) the carbon dimer, (iii) the
{\pi} space of polyacenes, as well as (iv) the chromium dimer. While the
cluster amplitudes exhibit rapid decay with increasing rank for the first three
systems, even connected octuple excitations still appear important in Cr$_2$,
suggesting that spin-restricted single-reference coupled-cluster approaches may
not be tractable for some problems in transition metal chemistry. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Chemistry"
] |
Title: Comment on Jackson's analysis of electric charge quantization due to interaction with Dirac's magnetic monopole,
Abstract: In J.D. Jackson's Classical Electrodynamics textbook, the analysis of Dirac's
charge quantization condition in the presence of a magnetic monopole has a
mathematical omission and an all too brief physical argument that might mislead
some students. This paper presents a detailed derivation of Jackson's main
result, explains the significance of the missing term, and highlights the close
connection between Jackson's findings and Dirac's original argument. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: On the coefficients of the Alekseev Torossian associator,
Abstract: This paper explains a method to calculate the coefficients of the
Alekseev-Torossian associator as linear combinations of iterated integrals of
Kontsevich weight forms of Lie graphs. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Exact spectral decomposition of a time-dependent one-particle reduced density matrix,
Abstract: We determine the exact time-dependent non-idempotent one-particle reduced
density matrix and its spectral decomposition for a harmonically confined
two-particle correlated one-dimensional system when the interaction terms in
the Schrödinger Hamiltonian are changed abruptly. Based on this matrix in
coordinate space we derive a precise condition for the equivalence of the purity
and the overlap-square of the correlated and non-correlated wave functions as
the system evolves in time. This equivalence holds only if the interparticle
interactions are affected, while the confinement terms are unaffected within
the stability range of the system. Under this condition we also analyze various
time-dependent measures of entanglement and demonstrate that, depending on the
magnitude of the changes made in the Schrödinger Hamiltonian, periodic,
logarithmically increasing or constant-value behavior of the von Neumann entropy
can occur. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: A short variational proof of equivalence between policy gradients and soft Q learning,
Abstract: Two main families of reinforcement learning algorithms, Q-learning and policy
gradients, have recently been proven to be equivalent when using a softmax
relaxation on one part, and an entropic regularization on the other. We relate
this result to the well-known convex duality of Shannon entropy and the softmax
function. Such a result is also known as the Donsker-Varadhan formula. This
provides a short proof of the equivalence. We then interpret this duality
further, and use ideas of convex analysis to prove a new policy inequality
relative to soft Q-learning. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
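The duality invoked in the abstract above is the standard log-sum-exp/entropy conjugacy (the Donsker-Varadhan formula in this finite setting); with temperature $\tau$, entropy $\mathcal{H}$, and action simplex $\Delta(\mathcal{A})$ (notation assumed for illustration):

```latex
\tau \log \sum_{a \in \mathcal{A}} \exp\!\big(Q(s,a)/\tau\big)
  \;=\; \max_{\pi \in \Delta(\mathcal{A})}
  \Big( \mathbb{E}_{a \sim \pi}\big[Q(s,a)\big] + \tau\,\mathcal{H}(\pi) \Big),
\qquad
\pi^{*}(a) \propto \exp\!\big(Q(s,a)/\tau\big).
```

The left side is the soft value function of soft Q-learning; the maximizer on the right is the softmax policy, which is the link between the two algorithm families.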
Title: Non-parametric Message Important Measure: Storage Code Design and Transmission Planning for Big Data,
Abstract: Storage and transmission in big data are discussed in this paper, where
message importance is taken into account. Similar to Shannon Entropy and Renyi
Entropy, we define non-parametric message important measure (NMIM) as a measure
for the message importance in the scenario of big data, which can characterize
the uncertainty of random events. It is proved that the proposed NMIM can
sufficiently describe two key characteristics of big data: rare event finding and
large diversity of events. Based on NMIM, we first propose an effective
compressed encoding mode for data storage, and then discuss the channel
transmission over some typical channel models. Numerical simulation results
show that using our proposed strategy occupies less storage space without
losing too much message importance, and that there are a growth region and a
saturation region for the maximum transmission, which contributes to the design
of better practical communication systems. | [
0,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: A Versatile Approach to Evaluating and Testing Automated Vehicles based on Kernel Methods,
Abstract: Evaluation and validation of complicated control systems are crucial to
guarantee usability and safety. Usually, failure happens in some very rarely
encountered situations, but once triggered, the consequence is disastrous.
Accelerated Evaluation is a methodology that efficiently tests those
rarely-occurring yet critical failures via smartly-sampled test cases. The
distribution used in sampling is pivotal to the performance of the method, but
building a suitable distribution requires case-by-case analysis. This paper
proposes a versatile approach for constructing sampling distribution using
kernel method. The approach uses statistical learning tools to approximate the
critical event sets and constructs distributions based on the unique properties
of Gaussian distributions. We applied the method to evaluate the automated
vehicles. Numerical experiments show that the proposed approach can robustly identify
the rare failures and significantly reduce the evaluation time. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Closure structures parameterized by systems of isotone Galois connections,
Abstract: We study properties of classes of closure operators and closure systems
parameterized by systems of isotone Galois connections. The parameterizations
express stronger requirements on idempotency and monotony conditions of closure
operators. The present approach extends previous approaches to fuzzy closure
operators which appeared in analysis of object-attribute data with graded
attributes and reasoning with if-then rules in graded setting and is also
related to analogous results developed in linear temporal logic. In the paper,
we present foundations of the operators and include examples of general
problems in data analysis where such operators appear. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: High Isolation Improvement in a Compact UWB MIMO Antenna,
Abstract: A compact multiple-input-multiple-output (MIMO) antenna with very high
isolation is proposed for ultra-wideband (UWB) applications. The antenna with a
compact size of 30.1 x 20.5 mm^2 (0.31$\lambda_0$ x 0.21$\lambda_0$) consists
of two planar-monopole antenna elements. It is found that isolation of more
than 25 dB can be achieved between two parallel monopole antenna elements. For
the low-frequency isolation, an efficient technique of bending the feed-line
and applying a new protruded ground is introduced. To increase isolation, a
design based on suppressing surface wave, near-field, and far-field coupling is
applied. The simulation and measurement results of the proposed antenna, which
are in good agreement, are presented and show a bandwidth with S11 < -10 dB and
S12 < -25 dB over the range 3.1 to 10.6 GHz, making the proposed antenna a good
candidate for UWB MIMO systems.
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Identifying and Alleviating Concept Drift in Streaming Tensor Decomposition,
Abstract: Tensor decompositions are used in various data mining applications from
social network to medical applications and are extremely useful in discovering
latent structures or concepts in the data. Many real-world applications are
dynamic in nature and so are their data. To deal with this dynamic nature of
data, there exist a variety of online tensor decomposition algorithms. A
central assumption in all those algorithms is that the number of latent
concepts remains fixed throughout the entire stream. However, this need not be
the case. Every incoming batch in the stream may have a different number of
latent concepts, and the difference in latent concepts from one tensor batch to
another can provide insights into how our findings in a particular application
behave and deviate over time. In this paper, we define "concept" and "concept
drift" in the context of streaming tensor decomposition, as the manifestation
of the variability of latent concepts throughout the stream. Furthermore, we
introduce SeekAndDestroy, an algorithm that detects concept drift in streaming
tensor decomposition and is able to produce results robust to that drift. To
the best of our knowledge, this is the first work that investigates concept
drift in streaming tensor decomposition. We extensively evaluate SeekAndDestroy
on synthetic datasets, which exhibit a wide variety of realistic drift. Our
experiments demonstrate the effectiveness of SeekAndDestroy, both in the
detection of concept drift and in the alleviation of its effects, producing
results with similar quality to decomposing the entire tensor in one shot.
Additionally, in real datasets, SeekAndDestroy outperforms other streaming
baselines, while discovering novel useful components. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Unconstrained inverse quadratic programming problem,
Abstract: The paper covers a formulation of the inverse quadratic programming problem
in terms of unconstrained optimization where it is required to find the unknown
parameters (the matrix of the quadratic form and the vector of the quasi-linear
part of the quadratic form) provided that approximate estimates of the optimal
solution of the direct problem and those of the target function to be minimized
in the form of pairs of values lying in the corresponding neighborhoods are
only known. The formulation of the inverse problem and its solution are based
on the least squares method. The inverse problem solution has been derived in
explicit form as a system of linear equations. The parameters
obtained can be used for reconstruction of the direct quadratic programming
problem and determination of the optimal solution and the extreme value of the
target function, which were not known previously. This approach may also open
new avenues in other applications, for example in neurocomputing and quadric
surface fitting. Simple numerical examples are demonstrated. A script in the
Octave/MATLAB programming language is provided for practical implementation of
the method. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
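Because the model $f(x)=\tfrac12 x^\top Q x + c^\top x$ is linear in the entries of $Q$ and $c$, the least-squares recovery described above reduces to one linear system; a minimal sketch (Python rather than the paper's Octave/MATLAB script; names are assumptions):

```python
import numpy as np

def fit_quadratic(xs, fs):
    # Least-squares recovery of a symmetric Q and vector c from samples
    # f_i ~ 0.5 * x_i^T Q x_i + c^T x_i. Each sample contributes one row
    # that is linear in the upper-triangular entries of Q and in c.
    n = xs.shape[1]
    rows = []
    for x in xs:
        quad = [0.5 * x[i] * x[j] * (2.0 if i != j else 1.0)
                for i in range(n) for j in range(i, n)]
        rows.append(quad + list(x))
    A = np.array(rows)
    theta, *_ = np.linalg.lstsq(A, fs, rcond=None)
    Q = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            Q[i, j] = Q[j, i] = theta[k]
            k += 1
    return Q, theta[-n:]
```

With noise-free samples from a known quadratic, the fit recovers the generating parameters exactly.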
Title: Families of Thue equations associated with a rank one subgroup of the unit group of a number field,
Abstract: Twisting a binary form $F_0(X,Y)\in{\mathbb{Z}}[X,Y]$ of degree $d\ge 3$ by
powers $\upsilon^a$ ($a\in{\mathbb{Z}}$) of an algebraic unit $\upsilon$ gives
rise to a binary form $F_a(X,Y)\in{\mathbb{Z}}[X,Y]$. More precisely, when $K$
is a number field of degree $d$, $\sigma_1,\sigma_2,\dots,\sigma_d$ the
embeddings of $K$ into $\mathbb{C}$, $\alpha$ a nonzero element in $K$,
$a_0\in{\mathbb{Z}}$, $a_0>0$ and $$ F_0(X,Y)=a_0\displaystyle\prod_{i=1}^d
(X-\sigma_i(\alpha) Y), $$ then for $a\in{\mathbb{Z}}$ we set $$
F_a(X,Y)=\displaystyle a_0\prod_{i=1}^d (X-\sigma_i(\alpha\upsilon^a) Y). $$
Given $m\ge 0$, our main result is an effective upper bound for the solutions
$(x,y,a)\in{\mathbb{Z}}^3$ of the Diophantine inequalities $$ 0<|F_a(x,y)|\le m
$$ for which $xy\not=0$ and ${\mathbb{Q}}(\alpha \upsilon^a)=K$. Our estimate
involves an effectively computable constant depending only on $d$; it is
explicit in terms of $m$, in terms of the heights of $F_0$ and of $\upsilon$,
and in terms of the regulator of the number field $K$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: On Defects Between Gapped Boundaries in Two-Dimensional Topological Phases of Matter,
Abstract: Defects between gapped boundaries provide a possible physical realization of
projective non-abelian braid statistics. A notable example is the projective
Majorana/parafermion braid statistics of boundary defects in fractional quantum
Hall/topological insulator and superconductor heterostructures. In this paper,
we develop general theories to analyze the topological properties and
projective braiding of boundary defects of topological phases of matter in two
spatial dimensions. We present commuting Hamiltonians to realize defects
between gapped boundaries in any $(2+1)D$ untwisted Dijkgraaf-Witten theory,
and use these to describe their topological properties such as their quantum
dimension. By modeling the algebraic structure of boundary defects through
multi-fusion categories, we establish a bulk-edge correspondence between
certain boundary defects and symmetry defects in the bulk. Even though it is
not clear how to physically braid the defects, this correspondence elucidates
the projective braid statistics for many classes of boundary defects, both
amongst themselves and with bulk anyons. Specifically, three such classes of
importance to condensed matter physics/topological quantum computation are
studied in detail: (1) A boundary defect version of Majorana and parafermion
zero modes, (2) a similar version of genons in bilayer theories, and (3)
boundary defects in $\mathfrak{D}(S_3)$. | [
0,
1,
1,
0,
0,
0
] | [
"Physics"
] |