title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Doubly autoparallel structure on the probability simplex | On the probability simplex, we can consider the standard information
geometric structure with the e- and m-affine connections mutually dual with
respect to the Fisher metric. The geometry naturally defines submanifolds
simultaneously autoparallel for both affine connections, which we call {\em
doubly autoparallel submanifolds}.
In this note we discuss several of their interesting common properties. Further,
we algebraically characterize doubly autoparallel submanifolds on the
probability simplex and give their classification.
| 0 | 0 | 1 | 0 | 0 | 0 |
Probing Hidden Spin Order with Interpretable Machine Learning | The search for unconventional magnetic and non-magnetic states is a major
topic in the study of frustrated magnetism. Canonical examples of those states
include various spin liquids and spin nematics. However, discerning their
existence and the correct characterization is usually challenging. Here we
introduce a machine-learning protocol that can identify general nematic orders
and their order parameters from seemingly featureless spin configurations, thus
providing comprehensive insight into the presence or absence of hidden orders. We
demonstrate the capabilities of our method by extracting the analytical form of
nematic order parameter tensors up to rank 6. This may prove useful in the
search for novel spin states and for ruling out spurious spin liquid
candidates.
| 0 | 0 | 0 | 1 | 0 | 0 |
Maximum a Posteriori Joint State Path and Parameter Estimation in Stochastic Differential Equations | A wide variety of phenomena of engineering and scientific interest are of a
continuous-time nature and can be modeled by stochastic differential equations
(SDEs), which represent the evolution of the uncertainty in the states of a
system. For systems of this class, some parameters of the SDE might be unknown
and the measured data often includes noise, so state and parameter estimators
are needed to perform inference and further analysis using the system state
path. The distributions of SDEs which are nonlinear or subject to non-Gaussian
measurement noise do not admit tractable analytic expressions, so state and
parameter estimators for these systems are often approximations based on
heuristics, such as the extended and unscented Kalman smoothers, or the
prediction error method using nonlinear Kalman filters. However, the
Onsager--Machlup functional can be used to obtain fictitious densities for the
parameters and state-paths of SDEs with analytic expressions. In this thesis,
we provide a unified theoretical framework for maximum a posteriori (MAP)
estimation of general random variables, possibly infinite-dimensional, and show
how the Onsager--Machlup functional can be used to construct the joint MAP
state-path and parameter estimator for SDEs. We also prove that the minimum
energy estimator, which is often thought to be the MAP state-path estimator,
actually gives the state paths associated with the MAP noise paths. Furthermore,
we prove that the discretized MAP state-path and parameter estimators, which
have emerged recently as powerful alternatives to nonlinear Kalman smoothers,
converge hypographically as the discretization step vanishes. Their
hypographical limit, however, is the MAP estimator for SDEs when the
trapezoidal discretization is used and the minimum energy estimator when the
Euler discretization is used, associating different interpretations to each
discretized estimate.
| 0 | 0 | 1 | 1 | 0 | 0 |
Spreading of an infectious disease between different locations | The endogenous adaptation of agents, who may adjust their local contact
network in response to the risk of being infected, can have the perverse effect
of increasing the overall systemic infectiveness of a disease. We study a
dynamical model over two geographically distinct but interacting locations, to
better understand theoretically the mechanism at play. Moreover, we provide
empirical motivation from the Italian National Bovine Database, for the period
2006-2013.
| 0 | 0 | 0 | 0 | 0 | 1 |
Observation and calculation of the quasi-bound rovibrational levels of the electronic ground state of H$_2^+$ | Although the existence of quasi-bound rotational levels of the $X^+ \
^2\Sigma_g^+$ ground state of H$_2^+$ was predicted long ago, these
states have never been observed. Calculated positions and widths of quasi-bound
rotational levels located close to the top of the centrifugal barriers have not
been reported either. Given the role that such states play in the recombination
of H(1s) and H$^+$ to form H$_2^+$, this lack of data may be regarded as one of
the largest unknown aspects of this otherwise accurately known fundamental
molecular cation. We present measurements of the positions and widths of the
lowest-lying quasi-bound rotational levels of H$_2^+$ and compare the
experimental results with the positions and widths we calculate using a
potential model for the $X^+$ state of H$_2^+$ which includes adiabatic,
nonadiabatic, relativistic and radiative corrections to the Born-Oppenheimer
approximation.
| 0 | 1 | 0 | 0 | 0 | 0 |
A case study of hurdle and generalized additive models in astronomy: the escape of ionizing radiation | The dark ages of the Universe end with the formation of the first generation
of stars residing in primeval galaxies. These objects were the first to produce
ultraviolet ionizing photons in a period when the cosmic gas changed from a
neutral state to an ionized one, known as the Epoch of Reionization (EoR). A
pivotal aspect to comprehend the EoR is to probe the intertwined relationship
between the fraction of ionizing photons capable of escaping dark haloes, also
known as the escape fraction ($f_{esc}$), and the physical properties of the
galaxy. This work develops a sound statistical model suitable to account for
such non-linear relationships and the non-Gaussian nature of $f_{esc}$. This
model simultaneously estimates the probability that a given primordial galaxy
starts producing ionizing photons and the mean level of $f_{esc}$ once
production is triggered. The model was employed in the First Billion
Years simulation suite, from which we show that the baryonic fraction and the
rate of ionizing photons appear to have a larger impact on $f_{esc}$ than
previously thought. A naive univariate analysis of the same problem would
suggest smaller effects for these properties and a much larger impact for the
specific star formation rate, which is lessened after accounting for other
galaxy properties and non-linearities in the statistical model.
| 0 | 0 | 0 | 1 | 0 | 0 |
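The two-part logic of such a hurdle model, first estimate the probability that the process is triggered, then the conditional mean once it is, can be illustrated with a minimal intercept-only Python sketch (no covariates or GAM smooth terms, unlike the statistical model above; the data are invented):

```python
def hurdle_fit(y):
    """Intercept-only hurdle model: the probability of a nonzero response
    and the mean response conditional on it being nonzero.
    Assumes at least one positive observation."""
    nonzero = [v for v in y if v > 0]
    p_start = len(nonzero) / len(y)                  # Bernoulli ("hurdle") part
    mean_given_start = sum(nonzero) / len(nonzero)   # positive part
    return p_start, mean_given_start

# The unconditional mean combines the two parts:
# E[y] = p_start * mean_given_start
```

The paper's model replaces both constants with functions of galaxy properties, which is what makes the multivariate, non-linear analysis possible.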
Out-of-focus: Learning Depth from Image Bokeh for Robotic Perception | In this project, we propose a novel approach for estimating depth from RGB
images. Traditionally, most work uses a single RGB image to estimate depth,
which is inherently difficult and generally results in poor performance, even
with thousands of data examples. In this work, we instead use multiple
RGB images that were captured while changing the focus of the camera's lens.
This method leverages the natural depth information correlated to the different
patterns of clarity/blur in the sequence of focal images, which helps
distinguish objects at different depths. Since no such data set exists for
learning this mapping, we collect our own data set using customized hardware.
We then use a convolutional neural network for learning the depth from the
stacked focal images. Comparative studies were conducted on both a standard
RGBD data set and our own data set (learning from both single and multiple
images), and results verified that stacked focal images yield better depth
estimation than using just a single RGB image.
| 1 | 0 | 0 | 0 | 0 | 0 |
Iterative Machine Teaching | In this paper, we consider the problem of machine teaching, the inverse
problem of machine learning. Different from traditional machine teaching which
views the learners as batch algorithms, we study a new paradigm where the
learner uses an iterative algorithm and a teacher can feed examples
sequentially and intelligently based on the current performance of the learner.
We show that the teaching complexity in the iterative case is very different
from that in the batch case. Instead of constructing a minimal training set for
learners, our iterative machine teaching focuses on achieving fast convergence
in the learner model. Depending on the level of information the teacher has
from the learner model, we design teaching algorithms which can provably reduce
the number of teaching examples and achieve faster convergence than learning
without teachers. We also validate our theoretical findings with extensive
experiments on different data distributions and real image datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
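The gap between batch and iterative teaching can be illustrated with a toy omniscient teacher for a linear least-squares learner: at every step the teacher simulates each candidate example's gradient update and feeds the one that moves the learner closest to the target weights. The pool, step size, and target weights below are invented for illustration and are not from the paper:

```python
def teach(pool, w_star, eta=0.5, steps=40):
    """Omniscient teacher for a linear learner y = <w, x> with squared
    loss: each round, feed the pool example whose gradient step moves
    the learner closest to the target weights w_star."""
    w = [0.0] * len(w_star)
    for _ in range(steps):
        best = None
        for x in pool:
            y = sum(wi * xi for wi, xi in zip(w_star, x))    # teacher's label
            err = sum(wi * xi for wi, xi in zip(w, x)) - y   # learner's error
            cand = [wi - eta * err * xi for wi, xi in zip(w, x)]
            dist = sum((ci - wi) ** 2 for ci, wi in zip(cand, w_star))
            if best is None or dist < best[0]:
                best = (dist, cand)
        w = best[1]
    return w
```

Because the teacher always picks the most informative example for the learner's current state, the distance to `w_star` shrinks geometrically, much faster than learning from randomly ordered examples.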
GibbsNet: Iterative Adversarial Inference for Deep Graphical Models | Directed latent variable models that formulate the joint distribution as
$p(x,z) = p(z) p(x \mid z)$ have the advantage of fast and exact sampling.
However, these models have the weakness of needing to specify $p(z)$, often
with a simple fixed prior that limits the expressiveness of the model.
Undirected latent variable models discard the requirement that $p(z)$ be
specified with a prior, yet sampling from them generally requires an iterative
procedure such as blocked Gibbs sampling that may require many steps to draw
samples from the joint distribution $p(x, z)$. We propose a novel approach to
learning the joint distribution between the data and a latent code which uses
an adversarially learned iterative procedure to gradually refine the joint
distribution, $p(x, z)$, to better match with the data distribution on each
step. GibbsNet is the best of both worlds, both in theory and in practice.
Achieving the speed and simplicity of a directed latent variable model, it is
guaranteed (assuming the adversarial game reaches the virtual training criteria
global minimum) to produce samples from $p(x, z)$ with only a few sampling
iterations. Achieving the expressiveness and flexibility of an undirected
latent variable model, GibbsNet does away with the need for an explicit $p(z)$
and has the ability to do attribute prediction, class-conditional generation,
and joint image-attribute modeling in a single model which is not trained for
any of these specific tasks. We show empirically that GibbsNet is able to learn
a more complex $p(z)$ and show that this leads to improved inpainting and
iterative refinement of $p(x, z)$ for dozens of steps and stable generation
without collapse for thousands of steps, despite being trained on only a few
steps.
| 1 | 0 | 0 | 1 | 0 | 0 |
Characterization of 1-Tough Graphs using Factors | For a graph $G$, let $odd(G)$ and $\omega(G)$ denote the number of odd
components and the number of components of $G$, respectively. Then it is
well-known that $G$ has a 1-factor if and only if $odd(G-S)\le |S|$ for all
$S\subset V(G)$. Also it is clear that $odd(G-S) \le \omega(G-S)$. In this
paper we characterize a 1-tough graph $G$, which satisfies $\omega(G-S) \le
|S|$ for all $\emptyset \ne S \subset V(G)$, using an $H$-factor of a
set-valued function $H:V(G) \to \{ \{1\}, \{0,2\} \}$. Moreover, we generalize
this characterization to a graph that satisfies $\omega(G-S) \le f(S)$ for all
$\emptyset \ne S \subset V(G)$, where $f:V(G) \to \{1,3,5, \ldots\}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
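For small graphs, the 1-toughness condition $\omega(G-S) \le |S|$ quoted above can be verified by brute force over all vertex subsets. A Python sketch (exponential in the number of vertices, for illustration only; the helper names are ours):

```python
from itertools import combinations

def comps(verts, edges):
    """Connected components of the subgraph induced on `verts` (union-find)."""
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:
            parent[find(u)] = find(v)
    groups = {}
    for v in verts:
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

def is_one_tough(verts, edges):
    """Brute-force check of omega(G-S) <= |S| for every nonempty proper S."""
    for k in range(1, len(verts)):
        for S in combinations(verts, k):
            rest = [v for v in verts if v not in S]
            if rest and len(comps(rest, edges)) > k:
                return False
    return True
```

For example, the 4-cycle is 1-tough, while a path on three vertices is not (removing the middle vertex leaves two components).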
Optimization by gradient boosting | Gradient boosting is a state-of-the-art prediction technique that
sequentially produces a model in the form of linear combinations of simple
predictors---typically decision trees---by solving an infinite-dimensional
convex optimization problem. We provide in the present paper a thorough
analysis of two widespread versions of gradient boosting, and introduce a
general framework for studying these algorithms from the point of view of
functional optimization. We prove their convergence as the number of iterations
tends to infinity and highlight the importance of having a strongly convex risk
functional to minimize. We also present a reasonable statistical context
ensuring consistency properties of the boosting predictors as the sample size
grows. In our approach, the optimization procedures are run forever (that is,
without resorting to an early stopping strategy), and statistical
regularization is basically achieved via an appropriate $L^2$ penalization of
the loss and strong convexity arguments.
| 1 | 0 | 1 | 1 | 0 | 0 |
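The functional-optimization view can be made concrete in a toy squared-loss example, where the negative functional gradient at the current fit is just the residual vector and the simple predictors are one-split stumps. The data, shrinkage $\nu$, and number of rounds below are illustrative choices, not the paper's setting:

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump for squared loss.
    Assumes at least two distinct x values."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def gradient_boost(xs, ys, rounds=50, nu=0.1):
    """Squared-loss gradient boosting: each round fits a stump to the
    negative functional gradient (the residuals) and takes a damped step nu."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        h = fit_stump(xs, residuals)
        stumps.append(h)
        pred = [p + nu * h(x) for p, x in zip(pred, xs)]
    return lambda x: sum(nu * h(x) for h in stumps)
```

With strong convexity of the squared loss, running more rounds keeps shrinking the training risk, which mirrors the "run forever, regularize via the loss" regime the abstract analyzes.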
RDV: Register, Deposit, Vote: Secure and Decentralized Consensus Mechanism for Blockchain Networks | A decentralized payment system is not secure if transactions are transferred
directly between clients. In such a situation it is not possible to prevent a
client from redeeming some coins twice in separate transactions, i.e., a
double-spending attack. Bitcoin uses a simple method to prevent this attack:
all transactions are published in a single log (the blockchain). This approach
requires global consensus on the blockchain, which, because of the significant
latency of transaction confirmation, is vulnerable to double-spending. The
solution is to accelerate confirmations. In this paper, we introduce an
alternative to proof of work (PoW), whose major problems erode the
decentralization of Bitcoin, even though a fully decentralized payment system
is the main goal of the Bitcoin idea. As the network grows larger day by day,
Bitcoin approaches this risk. The method we introduce is based on a
distributed voting process: RDV: Register, Deposit, Vote.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Rees algebra of a two-Borel ideal is Koszul | Let $M$ and $N$ be two monomials of the same degree, and let $I$ be the
smallest Borel ideal containing $M$ and $N$. We show that the toric ring of $I$
is Koszul by constructing a quadratic Gröbner basis for the associated toric
ideal. Our proofs use the construction of graphs corresponding to fibers of the
toric map. As a consequence, we conclude that the Rees algebra is also Koszul.
| 0 | 0 | 1 | 0 | 0 | 0 |
A forward--backward random process for the spectrum of 1D Anderson operators | We give a new expression for the law of the eigenvalues of the discrete
Anderson model on the finite interval $[0,N]$, in terms of two random processes
starting at both ends of the interval. Using this formula, we deduce that the
tail of the eigenvectors behaves approximately like $\exp(\sigma
B_{|n-k|}-\gamma\frac{|n-k|}{4})$, where $B_{s}$ is the Brownian motion and
$k$ is uniformly chosen in $[0,N]$ independently of $B_{s}$. A similar result
has recently been shown by B. Rifkind and B. Virag in the critical case, that
is, when the random potential is multiplied by a factor $\frac{1}{\sqrt{N}}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
From Curves to Tropical Jacobians and Back | Given a curve defined over an algebraically closed field which is complete
with respect to a nontrivial valuation, we study its tropical Jacobian. This is
done by first tropicalizing the curve, and then computing the Jacobian of the
resulting weighted metric graph. In general, it is not known how to find the
abstract tropicalization of a curve defined by polynomial equations, since an
embedded tropicalization may not be faithful, and there is no known algorithm
for carrying out semistable reduction in practice. We solve this problem in the
case of hyperelliptic curves by studying admissible covers. We also describe
how to take a weighted metric graph and compute its period matrix, which gives
its tropical Jacobian and tropical theta divisor. Lastly, we describe the
present status of reversing this process, namely how to compute a curve which
has a given matrix as its period matrix.
| 0 | 0 | 1 | 0 | 0 | 0 |
Importance sampling the union of rare events with an application to power systems analysis | We consider importance sampling to estimate the probability $\mu$ of a union
of $J$ rare events $H_j$ defined by a random variable $\boldsymbol{x}$. The
sampler we study has been used in spatial statistics, genomics and
combinatorics going back at least to Karp and Luby (1983). It works by sampling
one event at random, then sampling $\boldsymbol{x}$ conditionally on that event
happening and it constructs an unbiased estimate of $\mu$ by multiplying an
inverse moment of the number of occurring events by the union bound. We prove
some variance bounds for this sampler. For a sample size of $n$, it has a
variance no larger than $\mu(\bar\mu-\mu)/n$ where $\bar\mu$ is the union
bound. It also has a coefficient of variation no larger than
$\sqrt{(J+J^{-1}-2)/(4n)}$ regardless of the overlap pattern among the $J$
events. Our motivating problem comes from power system reliability, where the
phase differences between connected nodes have a joint Gaussian distribution
and the $J$ rare events arise from unacceptably large phase differences. In
these grid reliability problems, even events defined by $5772$ constraints in
$326$ dimensions, with probability below $10^{-22}$, are estimated with a
coefficient of variation of about $0.0024$ with only $n=10{,}000$ sample
values.
| 1 | 0 | 0 | 1 | 0 | 0 |
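The sampler has a short implementation. The sketch below instantiates it for the toy case of a uniform variable on $[0,1]$ whose rare events are intervals (the events and base measure are invented for illustration; the power-systems events above are Gaussian constraint violations):

```python
import random

def karp_luby_union(intervals, n, rng=random):
    """Estimate mu = P(x in union of intervals) for x ~ Uniform[0,1]
    with the union-of-rare-events importance sampler described above."""
    lengths = [b - a for a, b in intervals]   # P(H_j) for each event
    mu_bar = sum(lengths)                     # the union bound
    total = 0.0
    for _ in range(n):
        # pick event j with probability P(H_j)/mu_bar ...
        r = rng.random() * mu_bar
        j = 0
        while j < len(intervals) - 1 and r >= lengths[j]:
            r -= lengths[j]
            j += 1
        # ... then sample x conditionally on H_j happening
        a, b = intervals[j]
        x = a + rng.random() * (b - a)
        # S(x) = number of events that occur (>= 1 by construction)
        s = sum(1 for (lo, hi) in intervals if lo <= x < hi)
        total += 1.0 / s
    return mu_bar * total / n
```

Each point $x$ of the union is proposed with density proportional to $S(x)$, the number of events containing it, so weighting by $1/S(x)$ and multiplying by the union bound $\bar\mu$ gives an unbiased estimate of $\mu$.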
Estimating the sensitivity of centrality measures w.r.t. measurement errors | Most network studies rely on an observed network that differs from the
underlying network which is obfuscated by measurement errors. It is well known
that such errors can have a severe impact on the reliability of network
metrics, especially on centrality measures: a more central node in the observed
network might be less central in the underlying network.
We introduce a metric for the reliability of centrality measures -- called
sensitivity. Given two randomly chosen nodes, the sensitivity is the
probability that the more central node in the observed network is also more
central in the underlying network. The sensitivity concept relies on the
underlying network, which is usually not accessible. Therefore, we propose two
methods to approximate the sensitivity: the iterative method, which simulates
possible underlying networks for the estimation, and the imputation method,
which uses the sensitivity of the observed network as the estimate. Both
methods rely on the observed network and assumptions about the underlying type
of measurement error (e.g., the percentage of missing edges or nodes).
Our experiments on real-world networks and random graphs show that the
iterative method performs well in many cases. In contrast, the imputation
method does not yield useful estimations for networks other than
Erdős-Rényi graphs.
| 1 | 1 | 0 | 0 | 0 | 0 |
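Given access to both networks, the sensitivity itself reduces to a pairwise comparison over nodes. A minimal Python sketch, assuming the centrality values (e.g. degree) have already been computed for each node:

```python
from itertools import combinations

def sensitivity(observed, underlying):
    """Fraction of node pairs whose centrality ordering in the observed
    network matches the underlying network (tied pairs are skipped)."""
    agree = comparable = 0
    for u, v in combinations(observed, 2):
        if observed[u] == observed[v] or underlying[u] == underlying[v]:
            continue
        comparable += 1
        if (observed[u] > observed[v]) == (underlying[u] > underlying[v]):
            agree += 1
    return agree / comparable if comparable else float("nan")
```

The approximation methods described in the text stand in for the inaccessible `underlying` argument with simulated or imputed networks.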
Matrix product moments in normal variables | Let ${\cal X }=XX^{\prime}$ be a random matrix associated with a centered
$r$-column Gaussian vector $X$ with a covariance matrix $P$. In this
article we compute expectations of matrix-products of the form $\prod_{1\leq
i\leq n}({\cal X } P^{v_i})$ for any $n\geq 1$ and any multi-index parameters
$v_i\in\mathbb{N}$. We derive closed form formulae and a simple sequential
algorithm to compute these matrices w.r.t. the parameter $n$. The second part
of the article is dedicated to a noncommutative binomial formula for the
central matrix-moments $\mathbb{E}\left(\left[{\cal X }-P\right]^n\right)$. The
matrix product moments discussed in this study are expressed in terms of
polynomial formulae w.r.t. the powers of the covariance matrix, with
coefficients depending on the trace of these matrices. We also derive a series
of estimates w.r.t. the Loewner order on quadratic forms. For instance we shall
prove the rather crude estimate $\mathbb{E}\left(\left[{\cal X
}-P\right]^n\right)\leq \mathbb{E}\left({\cal X }^n-P^n\right)$, for any $n\geq
1$.
| 0 | 0 | 1 | 1 | 0 | 0 |
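As a sanity check of the kind of formula described, polynomial in powers of $P$ with trace coefficients, the $n=2$ case follows from Isserlis' theorem: $\mathbb{E}[{\cal X}^2] = 2P^2 + \mathrm{tr}(P)\,P$. The Monte Carlo comparison below (our check, not the paper's sequential algorithm) verifies this for a hand-picked $2\times 2$ covariance:

```python
import math
import random

def mc_second_moment(P, n, rng):
    """Monte Carlo estimate of E[(XX')^2] for X ~ N(0, P), P a 2x2 matrix,
    using (XX')^2 = (X'X) XX' since XX' has rank one."""
    l11 = math.sqrt(P[0][0])                 # hand-rolled 2x2 Cholesky of P
    l21 = P[1][0] / l11
    l22 = math.sqrt(P[1][1] - l21 * l21)
    acc = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = (l11 * z1, l21 * z1 + l22 * z2)  # X ~ N(0, P)
        q = x[0] * x[0] + x[1] * x[1]        # X'X = tr(XX')
        for i in range(2):
            for j in range(2):
                acc[i][j] += q * x[i] * x[j]
    return [[a / n for a in row] for row in acc]

def closed_form(P):
    """2 P^2 + tr(P) P: the n = 2 instance of the polynomial formulae."""
    tr = P[0][0] + P[1][1]
    P2 = [[sum(P[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[2 * P2[i][j] + tr * P[i][j] for j in range(2)] for i in range(2)]
```

The trace coefficient appearing in `closed_form` is exactly the pattern the abstract describes for general $n$.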
Asymptotics and Optimal Bandwidth Selection for Nonparametric Estimation of Density Level Sets | Bandwidth selection is crucial in the kernel estimation of density level
sets. Risk based on the symmetric difference between the estimated and true
level sets is usually used to measure their proximity. In this paper we provide
an asymptotic $L^p$ approximation to this risk, where $p$ is characterized by
the weight function in the risk. In particular the excess risk corresponds to
an $L^2$ type of risk, and is adopted in an optimal bandwidth selection rule
for nonparametric level set estimation of $d$-dimensional density functions
($d\geq 1$).
| 0 | 0 | 1 | 1 | 0 | 0 |
Population-specific design of de-immunized protein biotherapeutics | Immunogenicity is a major problem during the development of biotherapeutics
since it can lead to rapid clearance of the drug and adverse reactions. The
challenge for biotherapeutic design is therefore to identify mutants of the
protein sequence that minimize immunogenicity in a target population whilst
retaining pharmaceutical activity and protein function. Current approaches are
moderately successful in designing sequences with reduced immunogenicity, but
do not account for the varying frequencies of different human leucocyte antigen
alleles in a specific population; in addition, since many designs are
non-functional, they require costly experimental post-screening. Here we report a
new method for de-immunization design using multi-objective combinatorial
optimization that maximizes the likelihood of a functional protein sequence
while minimizing its immunogenicity, tailored to a target population. We
bypass the need for three-dimensional protein structure
or molecular simulations to identify functional designs by automatically
generating sequences using probabilistic models that have been used previously
for mutation effect prediction and structure prediction. As proof-of-principle
we designed sequences of the C2 domain of Factor VIII and tested them
experimentally, resulting in a good correlation with the predicted
immunogenicity of our model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Linearized Binary Regression | Probit regression was first proposed by Bliss in 1934 to study mortality
rates of insects. Since then, an extensive body of work has analyzed and used
probit or related binary regression methods (such as logistic regression) in
numerous applications and fields. This paper provides a fresh angle to such
well-established binary regression methods. Concretely, we demonstrate that
linearizing the probit model in combination with linear estimators performs on
par with state-of-the-art nonlinear regression methods, such as posterior mean
or maximum a posteriori estimation, for a broad range of real-world regression
problems. We derive exact, closed-form, and nonasymptotic expressions for the
mean-squared error of our linearized estimators, which clearly separates them
from nonlinear regression methods that are typically difficult to analyze. We
showcase the efficacy of our methods and results for a number of synthetic and
real-world datasets, which demonstrates that linearized binary regression finds
potential use in a variety of inference, estimation, signal processing, and
machine learning applications that deal with binary-valued observations or
measurements.
| 0 | 0 | 0 | 1 | 0 | 0 |
Arithmetic properties of polynomials | In this paper, first, we prove that the Diophantine system
\[f(z)=f(x)+f(y)=f(u)-f(v)=f(p)f(q)\] has infinitely many integer solutions for
$f(X)=X(X+a)$ with nonzero integers $a\equiv 0,1,4\pmod{5}$. Second, we show
that the above Diophantine system has an integer parametric solution for
$f(X)=X(X+a)$ with nonzero integers $a$, if there are integers $m,n,k$ such
that \[\begin{cases} \begin{split} (n^2-m^2) (4mnk(k+a+1) + a(m^2+2mn-n^2))
&\equiv0\pmod{(m^2+n^2)^2},\\ (m^2+2mn-n^2) ((m^2-2mn-n^2)k(k+a+1) - 2amn)
&\equiv0 \pmod{(m^2+n^2)^2}, \end{split} \end{cases}\] where $k\equiv0\pmod{4}$
when $a$ is even, and $k\equiv2\pmod{4}$ when $a$ is odd. Third, we get that
the Diophantine system \[f(z)=f(x)+f(y)=f(u)-f(v)=f(p)f(q)=\frac{f(r)}{f(s)}\]
has a five-parameter rational solution for $f(X)=X(X+a)$ with nonzero rational
number $a$ and infinitely many nontrivial rational parametric solutions for
$f(X)=X(X+a)(X+b)$ with nonzero integers $a,b$ and $a\neq b$. At last, we raise
some related questions.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Graph Analytics Framework for Ranking Authors, Papers and Venues | A large number of scientific works are published in different areas of science,
technology, engineering and mathematics. It is not easy, even for experts, to
judge the quality of authors, papers and venues (conferences and journals). An
objective measure to assign scores to these entities and to rank them is very
useful. Although several metrics and indexes have been proposed earlier, they
suffer from various problems. In this paper, we propose a graph-based analytics
framework to assign scores and to rank authors, papers and venues. Our
algorithm considers only the link structures of the underlying graphs. It does
not take into account other aspects, such as the associated texts and the
reputation of these entities. In the limit of a large number of iterations, the
solution of the iterative equations gives the unique entity scores. This
framework can be easily extended to other interdependent networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
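A link-structure-only, mutually reinforcing score of this flavor can be sketched on a toy author-paper bipartite graph. The HITS-like update rule below is an illustrative stand-in, not necessarily the paper's iterative equations:

```python
def bipartite_scores(edges, iters=100):
    """Link-only mutual-reinforcement scoring on an author-paper bipartite
    graph: a paper accumulates the scores of its authors, an author the
    scores of their papers, normalised each round until a fixed point."""
    authors = sorted({a for a, p in edges})
    papers = sorted({p for a, p in edges})
    a_score = {a: 1.0 for a in authors}
    p_score = {p: 1.0 for p in papers}
    for _ in range(iters):
        p_score = {p: sum(a_score[a] for a, q in edges if q == p)
                   for p in papers}
        a_score = {a: sum(p_score[p] for b, p in edges if b == a)
                   for a in authors}
        norm = sum(a_score.values()) or 1.0
        a_score = {a: s / norm for a, s in a_score.items()}
        norm = sum(p_score.values()) or 1.0
        p_score = {p: s / norm for p, s in p_score.items()}
    return a_score, p_score
```

Iterated to convergence, the scores are the principal singular vectors of the author-paper incidence matrix, which is the "unique entity scores in the limit" behavior the abstract describes.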
Inner Cohomology of the General Linear Group | The main theorem is incorrectly stated.
| 0 | 0 | 1 | 0 | 0 | 0 |
Particle-hole symmetry and composite fermions in fractional quantum Hall states | We study fractional quantum Hall states at filling fractions in the Jain
sequences using the framework of composite Dirac fermions. Synthesizing
previous work, we write down an effective field theory consistent with all
symmetry requirements, including Galilean invariance and particle-hole
symmetry. Employing a Fermi liquid description, we demonstrate the appearance
of the Girvin--MacDonald--Platzman algebra and compute the dispersion relation
of neutral excitations and various response functions. Our results satisfy
requirements of particle-hole symmetry. We show that while the dispersion
relation obtained from the HLR theory is particle-hole symmetric, correlation
functions obtained from HLR are not. The results of the Dirac theory are shown
to be consistent with the Haldane bound on the projected structure factor,
while those of the HLR theory violate it.
| 0 | 1 | 0 | 0 | 0 | 0 |
Large-type Artin groups are systolic | We prove that Artin groups from a class containing all large-type Artin
groups are systolic. This provides a concise yet precise description of their
geometry. Immediate consequences are new results concerning large-type Artin
groups: biautomaticity; existence of $EZ$-boundaries; the Novikov conjecture;
descriptions of finitely presented subgroups, of virtually solvable subgroups,
and of centralizers for infinite order elements; the Burghelea conjecture and
the Bass conjecture; existence of low-dimensional models for classifying spaces
for some families of subgroups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Gradient Sensing via Cell Communication | The chemotactic dynamics of cells and organisms that have no specialized
gradient sensing organelles is not well understood. In fact, chemotaxis of this
sort of organism is especially challenging to explain when the external
chemical gradient is so small as to make variations of concentrations minute
over the length of each of the organisms. Experimental evidence lends support
to the conjecture that chemotactic behavior of chains of cells can be achieved
via cell-to-cell communication. This is the chemotactic basis for the Local
Excitation, Global Inhibition (LEGI) model.
A generalization of the communication component of the LEGI model is
proposed. Doing so permits us to study in detail how gradient sensing
changes as a function of the structure of the communication term. The key
findings of this study are: an accounting of how gradient sensing is affected
by the competition between communication and diffusive processes; the
determination of the scale dependence of the model outcomes; and the
sensitivity of communication to parameters in the model. Together with an
essential analysis of the dynamics
of the model, these findings can prove useful in suggesting experiments aimed
at determining the viability of a communication mechanism in chemotactic
dynamics of chains and networks of cells exposed to a chemical concentration
gradient.
| 0 | 0 | 0 | 0 | 1 | 0 |
Nichols Algebras and Quantum Principal Bundles | A general procedure for constructing Yetter-Drinfeld modules from quantum
principal bundles is introduced. As an application a Yetter-Drinfeld structure
is put on the cotangent space of the Heckenberger-Kolb calculi of the quantum
Grassmannians. For the special case of quantum projective space the associated
braiding is shown to be non-diagonal and of Hecke type. Moreover, its Nichols
algebra is shown to be finite-dimensional and equal to the anti-holomorphic
part of the total differential calculus.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inference of signals with unknown correlation structure from nonlinear measurements | We present a method to reconstruct autocorrelated signals together with their
autocorrelation structure from nonlinear, noisy measurements for an arbitrary
monotonic nonlinear instrument response. In the presented formulation the
algorithm provides a significant speedup compared to prior implementations,
allowing for a wider range of application. The nonlinearity can be used to
model instrument characteristics or to enforce properties on the underlying
signal, such as positivity. Uncertainties on any posterior quantities can be
provided via independent samples from an approximate posterior distribution.
We demonstrate the method's applicability via simulated and real measurements,
using different measurement instruments, nonlinearities and dimensionality.
| 0 | 1 | 0 | 1 | 0 | 0 |
An optimization approach for dynamical Tucker tensor approximation | An optimization-based approach for the Tucker tensor approximation of
parameter-dependent data tensors and solutions of tensor differential equations
with low Tucker rank is presented. The problem of updating the tensor
decomposition is reformulated as a fitting problem subject to the tangent space
without relying on an orthogonality gauge condition. A discrete Euler scheme is
established in an alternating least squares framework, where the quadratic
subproblems reduce to trace optimization problems that are shown to be
explicitly solvable and accessible using an SVD of small size. In the presence of
small singular values, instability for larger ranks is reduced, since the
method does not need the (pseudo) inverse of matricizations of the core tensor.
Regularization of Tikhonov type can be used to compensate for the lack of
uniqueness in the tangent space. The method is validated numerically and shown
to be stable also for larger ranks in the case of small singular values of the
core unfoldings. Higher order explicit integrators of Runge-Kutta type can be
composed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optical and structural study of the pressure-induced phase transition of CdWO$_4$ | The optical absorption of CdWO$_4$ is reported at high pressures up to 23
GPa. The onset of a phase transition was detected at 19.5 GPa, in good
agreement with a previous Raman spectroscopy study. The crystal structure of
the high-pressure phase of CdWO$_4$ was solved at 22 GPa employing
single-crystal synchrotron x-ray diffraction. The symmetry changes from space
group $P$2/$c$ in the low-pressure wolframite phase to $P2_1/c$ in the
high-pressure post-wolframite phase accompanied by a doubling of the unit-cell
volume. The octahedral oxygen coordination of the tungsten and cadmium ions is
increased to [7]-fold and [6+1]-fold, respectively, at the phase transition.
The compressibility of the low-pressure phase of CdWO$_4$ has been reevaluated
with powder x-ray diffraction up to 15 GPa, yielding a bulk modulus of $B_0$ =
123 GPa. The direct band gap of the low-pressure phase increases with
compression up to 16.9 GPa at 12 meV/GPa. At this point an indirect band gap
crosses the direct band gap and decreases at -2 meV/GPa up to 19.5 GPa where
the phase transition starts. At the phase transition the band gap collapses by
0.7 eV and another direct band gap decreases at -50 meV/GPa up to the maximum
measured pressure. The structural stability of the post-wolframite structure is
confirmed by \textit{ab initio} calculations finding the post-wolframite-type
phase to be more stable than the wolframite at 18 GPa. Lattice dynamic
calculations based on space group $P2_1/c$ explain well the Raman-active modes
previously measured in the high-pressure post-wolframite phase. The
pressure-induced band gap crossing in the wolframite phase as well as the
pressure dependence of the direct band gap in the high-pressure phase are
further discussed with respect to the calculations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Context-Sensitive Convolutional Filters for Text Processing | Convolutional neural networks (CNNs) have recently emerged as a popular
building block for natural language processing (NLP). Despite their success,
most existing CNN models employed in NLP share the same learned (and static)
set of filters for all input sentences. In this paper, we consider an approach
of using a small meta network to learn context-sensitive convolutional filters
for text processing. The role of the meta network is to abstract the contextual
information of a sentence or document into a set of input-aware filters. We
further generalize this framework to model sentence pairs, where a
bidirectional filter generation mechanism is introduced to encapsulate
co-dependent sentence representations. In our benchmarks on four different
tasks, including ontology classification, sentiment analysis, answer sentence
selection, and paraphrase identification, our proposed model, a modified CNN
with context-sensitive filters, consistently outperforms the standard CNN and
attention-based CNN baselines. By visualizing the learned context-sensitive
filters, we further validate and rationalize the effectiveness of the proposed
framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
On right $S$-Noetherian rings and $S$-Noetherian modules | In this paper we study right $S$-Noetherian rings and modules, extending
notions introduced by Anderson and Dumitrescu in commutative algebra to
noncommutative rings. Two characterizations of right $S$-Noetherian rings are
given in terms of completely prime right ideals and point annihilator sets. We
also prove an existence result for completely prime point annihilators of
certain $S$-Noetherian modules with the following consequence in commutative
algebra: If a module $M$ over a commutative ring is $S$-Noetherian with respect
to a multiplicative set $S$ that contains no zero-divisors for $M$, then $M$
has an associated prime.
| 0 | 0 | 1 | 0 | 0 | 0 |
Reconfiguration of Brain Network between Resting-state and Oddball Paradigm | The oddball paradigm is widely applied to the investigation of multiple
cognitive functions. Prior studies have explored how cortical oscillations and
power spectra differ between the resting-state condition and the oddball
paradigm, but whether brain networks also show significant differences is still
unclear. Our study addressed how the brain reconfigures its architecture from a
resting-state condition (i.e., baseline) to the P300 stimulus task in the visual
oddball paradigm. In this study, electroencephalogram (EEG) datasets were
collected from 24 postgraduate students, who were required to mentally count
the number of target stimuli; afterwards, the functional EEG networks
constructed in different frequency bands were compared between baseline and
oddball task conditions to evaluate the reconfiguration of functional network
in the brain. Compared to the baseline, our results showed significantly (p <
0.05) enhanced delta/theta EEG connectivity and a decreased alpha default mode
network as the brain reconfigured for the P300 task. Furthermore,
the reconfigured coupling strengths were demonstrated to relate to P300
amplitudes, which were then regarded as input features to train a classifier to
differentiate the high and low P300 amplitudes groups with an accuracy of
77.78%. The findings of our study help us to understand the changes in
functional brain connectivity from the resting state to the oddball stimulus
task, and the reconfigured network pattern has potential for the selection of
good subjects for a P300-based brain-computer interface.
| 0 | 0 | 0 | 0 | 1 | 0 |
Approximate Ranking from Pairwise Comparisons | A common problem in machine learning is to rank a set of n items based on
pairwise comparisons. Here ranking refers to partitioning the items into sets
of pre-specified sizes according to their scores, which includes identification
of the top-k items as the most prominent special case. The score of a given
item is defined as the probability that it beats a randomly chosen other item.
Finding an exact ranking typically requires a prohibitively large number of
comparisons, but in practice, approximate rankings are often adequate.
Accordingly, we study the problem of finding approximate rankings from pairwise
comparisons. We analyze an active ranking algorithm that counts the number of
comparisons won, and decides whether to stop or which pair of items to compare
next, based on confidence intervals computed from the data collected in
previous steps. We show that this algorithm succeeds in recovering approximate
rankings using a number of comparisons that is close to optimal up to
logarithmic factors. We also present numerical results, showing that in
practice, approximation can drastically reduce the number of comparisons
required to estimate a ranking.
| 0 | 0 | 0 | 1 | 0 | 0 |
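The win-counting scheme with confidence intervals described in this abstract can be sketched in a few lines of Python. This is a toy illustration under our own assumptions (a shared Hoeffding-style confidence radius and a fixed failure probability `delta`), not the paper's exact algorithm:

```python
import math
import random

def approx_top_k(items, beats, k, delta=0.05, max_rounds=2000):
    """Toy active ranking: compare each item against a random opponent,
    track empirical win rates, and stop once Hoeffding-style confidence
    intervals separate the top-k items from the rest.
    `beats(i, j)` returns True if item i wins the comparison."""
    wins = {i: 0 for i in items}
    for t in range(1, max_rounds + 1):
        for i in items:
            j = random.choice([x for x in items if x != i])
            wins[i] += beats(i, j)
        # confidence radius shrinks as comparisons accumulate
        rad = math.sqrt(math.log(2 * len(items) * t / delta) / (2 * t))
        ranked = sorted(items, key=lambda i: wins[i], reverse=True)
        top, rest = ranked[:k], ranked[k:]
        if min(wins[i] / t for i in top) - rad > max(wins[i] / t for i in rest) + rad:
            return set(top)  # intervals separated: approximate top-k found
    return set(sorted(items, key=lambda i: wins[i], reverse=True)[:k])
```

With a deterministic comparison oracle such as `lambda i, j: i < j`, the loop stops as soon as the intervals of the current top-$k$ and the remaining items separate, typically long before `max_rounds`.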
Optimised information gathering in smartphone users | Human activities from hunting to emailing are performed in a fractal-like
scale invariant pattern. These patterns are considered efficient for hunting or
foraging, but are they efficient for gathering information? Here we link the
scale invariant pattern of inter-touch intervals on the smartphone to optimal
strategies for information gathering. We recorded touchscreen touches in 65
individuals for a month and categorized the activity into checking for
information vs. sharing content. For both categories, the inter-touch intervals
were well described by power-law fits spanning 5 orders of magnitude, from 1 s
to several hours. The power-law exponent typically found for checking was 1.5
and for generating it was 1.3. Next, by using computer simulations we addressed
whether the checking pattern was efficient - in terms of minimizing futile
attempts yielding no new information. We find that the best-performing
power-law exponent depends on the duration of the assessment, and the exponent
of 1.5 was the most efficient in the short term, i.e., in the range of a few
minutes. Finally, we addressed whether the way people generated and shared
content was in tune with the checking pattern. We assumed that unchecked posts
must be minimized for maximal efficiency, and according to our analysis the
most efficient temporal pattern for sharing content had an exponent of 1.3 - which was
also the pattern displayed by the smartphone users. The behavioral organization
for content generation is different from content consumption across time
scales. We propose that this difference is a signature of optimal behavior and
of the short-term assessments used in modern human actions.
| 1 | 1 | 0 | 0 | 0 | 0 |
On Recoverable and Two-Stage Robust Selection Problems with Budgeted Uncertainty | In this paper the problem of selecting $p$ out of $n$ available items is
discussed, such that their total cost is minimized. We assume that costs are
not known exactly, but stem from a set of possible outcomes.
Robust recoverable and two-stage models of this selection problem are
analyzed. In the two-stage problem, up to $p$ items are chosen in the first
stage, and the solution is completed once the scenario is revealed in the
second stage. In the recoverable problem, a set of $p$ items is selected in the
first stage and can be modified by exchanging up to $k$ items in the second
stage, after a scenario is revealed.
We assume that uncertain costs are modeled through bounded uncertainty sets,
i.e., the interval uncertainty sets with an additional linear (budget)
constraint, in their discrete and continuous variants. We construct polynomial
algorithms for the recoverable and two-stage selection problems with continuous
bounded uncertainty, and compact mixed-integer formulations in the case of
discrete bounded uncertainty.
| 1 | 0 | 1 | 0 | 0 | 0 |
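For a fixed selection under continuous budgeted uncertainty, the adversary's best response is immediate: spend the deviation budget on the selected items. The following brute-force min-max sketch is purely illustrative (the paper constructs polynomial algorithms, not enumeration), assuming item costs lie in intervals $[c_i, c_i + d_i]$ with total deviation bounded by a budget $\Gamma$:

```python
from itertools import combinations

def worst_case_cost(selected, base, dev, budget):
    """Adversary's best response for a fixed selection: raise each selected
    item's cost by at most dev[i], spending at most `budget` in total."""
    extra = min(budget, sum(dev[i] for i in selected))
    return sum(base[i] for i in selected) + extra

def robust_select(base, dev, p, budget):
    """Brute-force min-max selection of p items (exponential; for intuition
    only -- the paper gives polynomial algorithms for this setting)."""
    return min(combinations(range(len(base)), p),
               key=lambda s: worst_case_cost(s, base, dev, budget))
```

Note that a cheap nominal item with a large deviation (here item 0) can be beaten by nominally costlier items whose worst case is better, which is exactly what makes the robust problem nontrivial.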
Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy | We study anisotropic undersampling schemes like those used in
multi-dimensional NMR spectroscopy and MR imaging, which sample exhaustively in
certain time dimensions and randomly in others.
Our analysis shows that anisotropic undersampling schemes are equivalent to
certain block-diagonal measurement systems. We develop novel exact formulas for
the sparsity/undersampling tradeoffs in such measurement systems. Our formulas
predict finite-N phase transition behavior differing substantially from the
well known asymptotic phase transitions for classical Gaussian undersampling.
Extensive empirical work shows that our formulas accurately describe observed
finite-N behavior, while the usual formulas based on universality are
substantially inaccurate.
We also vary the anisotropy, keeping the total number of samples fixed, and
for each variation we determine the precise sparsity/undersampling tradeoff
(phase transition). We show that, other things being equal, the ability to
recover a sparse object decreases with an increasing number of
exhaustively-sampled dimensions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-way sparsest cut problem on trees with a control on the number of parts and outliers | Given a graph, the sparsest cut problem asks for a subset of vertices whose
edge expansion (the normalized cut given by the subset) is minimized. In this
paper, we study a generalization of this problem that seeks $k$ disjoint
subsets of vertices (clusters) whose edge expansions are all small and,
furthermore, for which the number of vertices remaining outside the subsets
(outliers) is also small. We prove that although this problem is $NP$-hard for
trees, it can be solved in polynomial time for all weighted trees, provided
that we restrict the search space to subsets which induce connected subgraphs.
The proposed algorithm is based on dynamic programming and runs in the worst
case in $O(k^2 n^3)$ time, where $n$ is the number of vertices and $k$ is the
number of clusters. It also runs in linear time when the number of clusters and
the number of outliers are bounded by a constant.
| 1 | 0 | 0 | 0 | 0 | 0 |
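As a concrete reference point for the objective discussed above, the edge expansion of a single vertex subset can be computed directly. A minimal sketch (we use the common normalization cut-weight divided by $|S|$; the paper's exact normalized cut may differ):

```python
def edge_expansion(adj, S):
    """Edge expansion of a vertex subset S: total weight of edges leaving S
    divided by |S| (one common normalization; the paper's exact normalized
    cut may differ). `adj` maps each vertex to a {neighbor: weight} dict."""
    S = set(S)
    boundary = sum(w for u in S for v, w in adj[u].items() if v not in S)
    return boundary / len(S)
```

On a unit-weight path 0-1-2-3, the subset {0, 1} has one boundary edge, so its expansion is 1/2.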
Gallucci's axiom revisited | In this paper we propose a well-justified synthetic approach to projective
space. We define the concepts of plane and space of incidence, and adopt
Gallucci's axiom as an axiom of our classical projective space. To this
purpose we prove, from our space axioms, the theorems of Desargues and Pappus,
the fundamental theorem of projectivities, and the fundamental theorem of
central-axial collineations. Our construction does not use any notions from
analytic projective geometry, such as the cross-ratio or the homogeneous
coordinates of points.
| 0 | 0 | 1 | 0 | 0 | 0 |
Effect of the non-thermal Sunyaev-Zel'dovich Effect on the temperature determination of galaxy clusters | A recent stacking analysis of Planck HFI data of galaxy clusters (Hurier
2016) allowed the cluster temperatures to be derived by using the relativistic
corrections to the Sunyaev-Zel'dovich effect (SZE). However, the temperatures
of high-temperature clusters derived from this analysis turned out to be
systematically higher than the temperatures derived from X-ray measurements, at
a moderate statistical significance of $1.5\sigma$. This discrepancy has been
attributed by Hurier (2016) to calibration issues. In this paper we discuss an
alternative explanation for this discrepancy in terms of a non-thermal SZE
astrophysical component. We find that this explanation can work if non-thermal
electrons in galaxy clusters have a low value of their minimum momentum
($p_1\sim0.5-1$), and if their pressure is of the order of $20-30\%$ of the
thermal gas pressure. Both these conditions are hard to obtain if the
non-thermal electrons are mixed with the hot gas in the intra cluster medium,
but can be possibly obtained if the non-thermal electrons are mainly confined
in bubbles with high content of non-thermal plasma and low content of thermal
plasma, or in giant radio lobes/relics located in the outskirts of clusters. In
order to derive more precise results on the properties of non-thermal electrons
in clusters, and in view of more solid detections of a discrepancy between
X-ray- and SZE-derived cluster temperatures that cannot be explained in other
ways, it would be necessary to reproduce the full analysis done by Hurier
(2016) while systematically adding the non-thermal component of the SZE.
| 0 | 1 | 0 | 0 | 0 | 0 |
ModelFactory: A Matlab/Octave based toolbox to create human body models | Background: Model-based analysis of movements can help better understand
human motor control. Here, the models represent the human body as an
articulated multi-body system that reflects the characteristics of the human
being studied.
Results: We present an open-source toolbox that allows for the creation of
human models with easy-to-setup, customizable configurations. The toolbox
scripts are written in Matlab/Octave and provide a command-based interface as
well as a graphical interface to construct, visualize and export models.
Built-in software modules provide functionalities such as automatic scaling of
models based on subject height and weight, custom scaling of segment lengths,
mass and inertia, addition of body landmarks, and addition of motion capture
markers. Users can set up custom definitions of joints, segments and other body
properties using the many included examples as templates. In addition to the
human, any number of objects (e.g. exoskeletons, orthoses, prostheses, boxes)
can be added to the modeling environment.
Conclusions: The ModelFactory toolbox is published as open-source software
under the permissive zLib license. The toolbox fulfills an important function
by making it easier to create human models, and should be of interest to human
movement researchers.
This document is the author's version of this article.
| 1 | 0 | 0 | 0 | 1 | 0 |
Dimensionality reduction with missing values imputation | In this study, we propose a new statistical approach for the dimensionality
reduction of high-dimensional heterogeneous data that limits the curse of
dimensionality and deals with missing values. To handle the latter, we propose
to use the Random Forest imputation method. The main purpose here is to extract
useful information and thus reduce the search space to facilitate the data
exploration process. Several illustrative numerical examples, using data from
publicly available machine learning repositories, are also included. The
experimental component of the study shows the efficiency of the proposed
analytical approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
An effective formalism for testing extensions to General Relativity with gravitational waves | The recent direct observation of gravitational waves (GW) from merging black
holes opens up the possibility of exploring the theory of gravity in the strong
regime at an unprecedented level. It is therefore interesting to explore which
extensions to General Relativity (GR) could be detected. We construct an
Effective Field Theory (EFT) satisfying the following requirements. It is
testable with GW observations; it is consistent with other experiments,
including short distance tests of GR; it agrees with widely accepted principles
of physics, such as locality, causality and unitarity; and it does not involve
new light degrees of freedom. The most general theory satisfying these
requirements corresponds to adding to the GR Lagrangian operators constructed
out of powers of the Riemann tensor, suppressed by a scale comparable to the
curvature of the observed merging binaries. The presence of these operators
modifies the gravitational potential between the compact objects, as well as
their effective mass and current quadrupoles, ultimately correcting the
waveform of the emitted GW.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Wiener-Hopf method for surface plasmons: Diffraction from semi-infinite metamaterial sheet | By formally invoking the Wiener-Hopf method, we explicitly solve a
one-dimensional, singular integral equation for the excitation of a slowly
decaying electromagnetic wave, called surface plasmon-polariton (SPP), of small
wavelength on a semi-infinite, flat conducting sheet irradiated by a plane wave
in two spatial dimensions. This setting is germane to wave diffraction by edges
of large sheets of single-layer graphene. Our analytical approach includes: (i)
formulation of a functional equation in the Fourier domain; (ii) evaluation of
a split function, which is expressed by a contour integral and is a key
ingredient of the Wiener-Hopf factorization; and (iii) extraction of the SPP as
a simple-pole residue of a Fourier integral. Our analytical solution is in good
agreement with a finite-element numerical computation.
| 0 | 0 | 1 | 0 | 0 | 0 |
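Step (ii), the split function entering the Wiener-Hopf factorization, has a standard generic form which we sketch below in illustrative textbook notation (a Cauchy-integral decomposition, not the paper's exact kernel):

```latex
% Generic Wiener-Hopf factorization sketch (notation is illustrative):
% split the Fourier-domain kernel into factors analytic in the upper (+)
% and lower (-) half planes,
\[
  \hat K(\xi) = \hat K_+(\xi)\, \hat K_-(\xi),
\]
% with the split functions given by Cauchy-type contour integrals
\[
  \log \hat K_\pm(\xi)
  = \pm \frac{1}{2\pi i} \int_{-\infty}^{\infty}
    \frac{\log \hat K(\zeta)}{\zeta - \xi}\, d\zeta ,
  \qquad \pm\,\mathrm{Im}\,\xi > 0 .
\]
```

Here $\hat K_+$ and $\hat K_-$ are analytic in the upper and lower half planes, respectively; once the factorized solution is inverted, the SPP emerges as a simple-pole residue of the resulting Fourier integral, as in step (iii) of the abstract.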
Sim2Real View Invariant Visual Servoing by Recurrent Control | Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
The homotopy Lie algebra of symplectomorphism groups of 3-fold blow-ups of $(S^2 \times S^2, σ_{std} \oplus σ_{std}) $ | We consider the 3-point blow-up of the manifold $ (S^2 \times S^2, \sigma
\oplus \sigma)$ where $\sigma$ is the standard symplectic form which gives area
1 to the sphere $S^2$, and study its group of symplectomorphisms $\rm{Symp} (
S^2 \times S^2 \#\, 3\overline{\mathbb C\mathbb P}\,\!^2, \omega)$. So far, the
monotone case was studied by J. Evans and he proved that this group is
contractible. Moreover, J. Li, T. J. Li and W. Wu showed that the group
Symp$_{h}(S^2 \times S^2 \#\, 3\overline{ \mathbb C\mathbb P}\,\!^2,\omega) $
of symplectomorphisms that act trivially on homology is always connected and
recently they also computed its fundamental group. We describe, in full detail,
the rational homotopy Lie algebra of this group.
We show that some particular circle actions contained in the
symplectomorphism group generate its full topology. More precisely, they give
the generators of the homotopy graded Lie algebra of $\rm{Symp} (S^2 \times S^2
\#\, 3\overline{ \mathbb C\mathbb P}\,\!^2, \omega)$. Our study depends on
Karshon's classification of Hamiltonian circle actions and the inflation
technique introduced by Lalonde-McDuff. As an application, we deduce the rank
of the homotopy groups of $\rm{Symp}({\mathbb C\mathbb P}^2 \#\,
5\overline{\mathbb C\mathbb P}\,\!^2, \tilde \omega)$, in the case of small
blow-ups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pressure Drop and Flow development in the Entrance Region of Micro-Channels with Second Order Slip Boundary Conditions and the Requirement for Development Length | In the present investigation, the development of axial velocity profile, the
requirement for development length ($L^*_{fd}=L/D_{h}$) and the pressure drop
in the entrance region of circular and parallel plate micro-channels have been
critically analysed for a large range of operating conditions ($10^{-2}\le
Re\le 10^{4}$, $10^{-4}\le Kn\le 0.2$ and $0\le C_2\le 0.5$). For this purpose,
the conventional Navier-Stokes equations have been numerically solved using the
finite volume method on non-staggered grid, while employing the second-order
velocity slip condition at the wall with $C_1=1$. The results indicate that
although the magnitude of local velocity slip at the wall is always greater
than that for the fully-developed section, the local wall shear stress,
particularly for higher $Kn$ and $C_2$, could be considerably lower than its
fully-developed value. This effect, which is more prominent for lower $Re$,
significantly affects the local and the fully-developed incremental pressure
drop number $K(x)$ and $K_{fd}$, respectively. As a result, depending upon the
operating condition, $K_{fd}$, as well as $K(x)$, could assume negative values.
This hitherto unreported observation implies that in the presence of enhanced
velocity slip at the wall, the pressure gradient in the developing region could
even be less than that in the fully-developed section. From simulated data, it
has been observed that both $L^*_{fd}$ and $K_{fd}$ are characterised by the
low and the high $Re$ asymptotes, using which, extremely accurate correlations
for them have been proposed for both geometries. Owing to its complex nature,
no correlation could be derived for $K(x)$, even though an exact knowledge of
$K(x)$ is necessary for evaluating the actual pressure drop for a duct length
$L^*<L^*_{fd}$; a method has therefore been proposed that provides a
conservative estimate of the pressure drop for both $K_{fd}>0$ and $K_{fd}\le0$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Household poverty classification in data-scarce environments: a machine learning approach | We describe a method to identify poor households in data-scarce countries by
leveraging information contained in nationally representative household
surveys. It employs standard statistical learning techniques---cross-validation
and parameter regularization---which together reduce the extent to which the
model is over-fitted to match the idiosyncrasies of observed survey data. The
automated framework satisfies three important constraints of this development
setting: i) The prediction model uses at most ten questions, which limits the
costs of data collection; ii) No computation beyond simple arithmetic is needed
to calculate the probability that a given household is poor, immediately after
data on the ten indicators is collected; and iii) One specification of the
model (i.e. one scorecard) is used to predict poverty throughout a country that
may be characterized by significant sub-national differences. Using survey data
from Zambia, the model's out-of-sample predictions distinguish poor households
from non-poor households using information contained in ten questions.
| 0 | 0 | 0 | 1 | 0 | 0 |
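Constraint (ii) above, that no computation beyond simple arithmetic be needed, is exactly what a scorecard provides. A hypothetical sketch in Python (the point values and probability bands below are invented for illustration and are not taken from the Zambia model):

```python
def scorecard_probability(answers, points, bands):
    """Score ten yes/no answers with integer points and map the total to a
    poverty probability via a lookup table, so a field worker only adds.
    `bands` is a list of (max_score, probability) pairs in ascending order;
    all point values and bands here are hypothetical."""
    score = sum(pt for ans, pt in zip(answers, points) if ans)
    for max_score, prob in bands:
        if score <= max_score:
            return prob
    return bands[-1][1]
```

A field worker computes the score by addition alone; the band lookup replaces any probability formula.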
Linguistic Matrix Theory | Recent research in computational linguistics has developed algorithms which
associate matrices with adjectives and verbs, based on the distribution of
words in a corpus of text. These matrices are linear operators on a vector
space of context words. They are used to construct the meaning of composite
expressions from that of the elementary constituents, forming part of a
compositional distributional approach to semantics. We propose a Matrix Theory
approach to this data, based on permutation symmetry along with Gaussian
weights and their perturbations. A simple Gaussian model is tested against word
matrices created from a large corpus of text. We characterize the cubic and
quartic departures from the model, which we propose, alongside the Gaussian
parameters, as signatures for comparison of linguistic corpora. We propose that
perturbed Gaussian models with permutation symmetry provide a promising
framework for characterizing the nature of universality in the statistical
properties of word matrices. The matrix theory framework developed here
exploits the view of statistics as zero dimensional perturbative quantum field
theory. It perceives language as a physical system realizing a universality
class of matrix statistics characterized by permutation symmetry.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dissecting Ponzi schemes on Ethereum: identification, analysis, and impact | Ponzi schemes are financial frauds where, under the promise of high profits,
users put in their money, recovering their investment and interest only if
enough users after them continue to invest money. Having originated in the
offline world 150 years ago, Ponzi schemes have since migrated to the digital
world, appearing first on the Web and more recently on cryptocurrencies like
Bitcoin. Smart contract platforms like Ethereum have provided a new
opportunity for scammers, who have now the possibility of creating
"trustworthy" frauds that still make users lose money, but at least are
guaranteed to execute "correctly". We present a comprehensive survey of Ponzi
schemes on Ethereum, analysing their behaviour and their impact from various
viewpoints. Perhaps surprisingly, we identify a remarkably high number of Ponzi
schemes, despite the fact that the hosting platform has been operating for
less than two years.
| 1 | 0 | 0 | 0 | 0 | 0 |
On orthogonality and learning recurrent networks with long term dependencies | It is well known that it is challenging to train deep neural networks and
recurrent neural networks for tasks that exhibit long term dependencies. The
vanishing or exploding gradient problem is a well known issue associated with
these challenges. One approach to addressing vanishing and exploding gradients
is to use either soft or hard constraints on weight matrices so as to encourage
or enforce orthogonality. Orthogonal matrices preserve the gradient norm during
backpropagation and may therefore be desirable.
issues with optimization convergence, speed and gradient stability when
encouraging or enforcing orthogonality. To perform this analysis, we propose a
weight matrix factorization and parameterization strategy through which we can
bound matrix norms and therein control the degree of expansivity induced during
backpropagation. We find that hard constraints on orthogonality can negatively
affect the speed of convergence and model performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
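The claim that orthogonal matrices preserve norms under repeated multiplication, while any contraction decays exponentially, is easy to verify numerically. A minimal 2-D sketch using a plane rotation as the orthogonal matrix (our illustrative choice):

```python
import math

def rotation(theta):
    """A 2x2 orthogonal matrix (plane rotation)."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def matvec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

v, w = [1.0, 2.0], [1.0, 2.0]
R = rotation(0.3)
for _ in range(50):
    v = matvec(R, v)                      # orthogonal step: norm preserved
    w = [0.9 * x for x in matvec(R, w)]   # contractive step: norm vanishes
```

After 50 steps the rotated vector still has norm $\sqrt{5}$, while the contracted copy has shrunk by a factor of $0.9^{50} \approx 5 \times 10^{-3}$, a toy analogue of the vanishing-gradient effect.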
Chunk-Based Bi-Scale Decoder for Neural Machine Translation | In typical neural machine translation (NMT), the decoder generates a sentence
word by word, packing all linguistic granularities into the same RNN
time-scale. In this paper, we propose a new type of decoder for NMT, which
splits the decoder state into two parts and updates them on two different
time-scales.
Specifically, we first predict a chunk time-scale state for phrasal modeling,
on top of which multiple word time-scale states are generated. In this way, the
target sentence is translated hierarchically from chunks to words, with
information in different granularities being leveraged. Experiments show that
our proposed model significantly improves the translation performance over the
state-of-the-art NMT model.
| 1 | 0 | 0 | 0 | 0 | 0 |
IL-Net: Using Expert Knowledge to Guide the Design of Furcated Neural Networks | Deep neural networks (DNN) excel at extracting patterns. Through
representation learning and automated feature engineering on large datasets,
such models have been highly successful in computer vision and natural language
applications. Designing optimal network architectures from a principled or
rational approach, however, has been less successful, with the best current
approaches utilizing an additional machine learning algorithm to tune the
network hyperparameters. Yet in many technical fields, there exists established
domain knowledge and understanding about the subject matter.
In this work, we develop a novel furcated neural network architecture that
utilizes domain knowledge as high-level design principles of the network. We
demonstrate proof-of-concept by developing IL-Net, a furcated network for
predicting the properties of ionic liquids, a class of complex multi-chemical
entities. Compared to existing state-of-the-art approaches, we
show that furcated networks can improve model accuracy by approximately 20-35%,
without using additional labeled data. Lastly, we distill two key design
principles for furcated networks that can be adapted to other domains.
| 0 | 0 | 0 | 1 | 0 | 0 |
Fast Spectral Clustering Using Autoencoders and Landmarks | In this paper, we introduce an algorithm for performing spectral clustering
efficiently. Spectral clustering is a powerful clustering algorithm that
suffers from high computational complexity, due to eigen decomposition. In this
work, we first build the adjacency matrix of the corresponding graph of the
dataset. To build this matrix, we only consider a limited number of points,
called landmarks, and compute the similarity of all data points with the
landmarks. Then, we present a definition of the Laplacian matrix of the graph
that enables us to perform eigen decomposition efficiently, using a deep
autoencoder. The overall complexity of the algorithm for eigen decomposition is
$O(np)$, where $n$ is the number of data points and $p$ is the number of
landmarks. Finally, we evaluate the performance of the algorithm in different
experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
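The landmark step described above is inexpensive to sketch: only the $n \times p$ similarities to the landmarks are computed, rather than the full $n \times n$ adjacency matrix. A toy Python version (the Gaussian kernel and uniform-random landmark selection are our illustrative assumptions; the eigen decomposition via a deep autoencoder is not shown):

```python
import math
import random

def landmark_affinity(points, p, sigma=1.0, seed=0):
    """Build the n-by-p matrix of Gaussian-kernel similarities between all
    data points and p randomly chosen landmark points."""
    landmarks = random.Random(seed).sample(points, p)
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [[math.exp(-sqdist(x, l) / (2 * sigma ** 2)) for l in landmarks]
            for x in points]
```

The resulting matrix has one row per data point and one column per landmark, with entries in $(0, 1]$; downstream steps would build the graph Laplacian from it.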
Sufficient Markov Decision Processes with Alternating Deep Neural Networks | Advances in mobile computing technologies have made it possible to monitor
and apply data-driven interventions across complex systems in real time. Markov
decision processes (MDPs) are the primary model for sequential decision
problems with a large or indefinite time horizon. Choosing a representation of
the underlying decision process that is both Markov and low-dimensional is
non-trivial. We propose a method for constructing a low-dimensional
representation of the original decision process for which: 1. the MDP model
holds; 2. a decision strategy that maximizes mean utility when applied to the
low-dimensional representation also maximizes mean utility when applied to the
original process. We use a deep neural network to define a class of potential
process representations and estimate the process of lowest dimension within
this class. The method is illustrated using data from a mobile study on heavy
drinking and smoking among college students.
| 0 | 0 | 1 | 1 | 0 | 0 |
Gate-controlled magnonic-assisted switching of magnetization in ferroelectric/ferromagnetic junctions | Interfacing a ferromagnet with a polarized ferroelectric gate generates a
non-uniform, interfacial spin density coupled to the ferroelectric polarization
thus allowing electric-field control of the effective transverse field acting
on the magnetization. Here we study the dynamic magnetization switching behavior of
such a multilayer system based on the Landau-Lifshitz-Baryakhtar equation,
demonstrating that interfacial magnetoelectric coupling is utilizable as a
highly localized and efficient tool for manipulating magnetism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Three-Dimensional Electronic Structure of type-II Weyl Semimetal WTe$_2$ | By combining bulk-sensitive soft-X-ray angle-resolved photoemission
spectroscopy and accurate first-principles calculations we explored the bulk
electronic properties of WTe$_2$, a candidate type-II Weyl semimetal featuring
a large non-saturating magnetoresistance. Despite the layered geometry
suggesting a two-dimensional electronic structure, we find a three-dimensional
electronic dispersion. We report an evident band dispersion in the reciprocal
direction perpendicular to the layers, implying that electrons can also travel
coherently when crossing from one layer to the other. The measured Fermi
surface is characterized by two well-separated electron and hole pockets at
either side of the $\Gamma$ point, in contrast to previous, more surface-sensitive
ARPES experiments that additionally found a significant quasiparticle
weight at the zone center. Moreover, we observe a significant sensitivity of
the bulk electronic structure of WTe$_2$ around the Fermi level to electronic
correlations and renormalizations due to self-energy effects, previously
neglected in first-principles descriptions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Decentralized Online Learning with Kernels | We consider multi-agent stochastic optimization problems over reproducing
kernel Hilbert spaces (RKHS). In this setting, a network of interconnected
agents aims to learn decision functions, i.e., nonlinear statistical models,
that are optimal in terms of a global convex functional that aggregates data
across the network, with only access to locally and sequentially observed
samples. We propose solving this problem by allowing each agent to learn a
local regression function while enforcing consensus constraints. We use a
penalized variant of functional stochastic gradient descent operating
simultaneously with low-dimensional subspace projections. These subspaces are
constructed greedily by applying orthogonal matching pursuit to the sequence of
kernel dictionaries and weights. By tuning the projection-induced bias, we
propose an algorithm that allows each individual agent to learn, based upon
its locally observed data stream and message passing with its neighbors only, a
regression function that is close to the globally optimal regression function.
That is, we establish that with constant step-size selections agents' functions
converge to a neighborhood of the globally optimal one while satisfying the
consensus constraints as the penalty parameter is increased. Moreover, the
complexity of the learned regression functions is guaranteed to remain finite.
On both multi-class kernel logistic regression and multi-class kernel support
vector classification with data generated from class-dependent Gaussian mixture
models, we observe stable function estimation and state-of-the-art performance
for distributed online multi-class classification. Experiments on the Brodatz
textures further substantiate the empirical validity of this approach.
| 1 | 0 | 1 | 1 | 0 | 0 |
Enumeration of complementary-dual cyclic $\mathbb{F}_{q}$-linear $\mathbb{F}_{q^t}$-codes | Let $\mathbb{F}_q$ denote the finite field of order $q,$ $n$ be a positive
integer coprime to $q$ and $t \geq 2$ be an integer. In this paper, we
enumerate all the complementary-dual cyclic $\mathbb{F}_q$-linear
$\mathbb{F}_{q^t}$-codes of length $n$ by placing $\ast$, ordinary and
Hermitian trace bilinear forms on $\mathbb{F}_{q^t}^n.$
| 0 | 0 | 1 | 0 | 0 | 0 |
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Bilinear models provide an appealing framework for mixing and merging
information in Visual Question Answering (VQA) tasks. They help to learn high
level associations between question meaning and visual concepts in the image,
but they suffer from huge dimensionality issues. We introduce MUTAN, a
multimodal tensor-based Tucker decomposition to efficiently parametrize
bilinear interactions between visual and textual representations. In addition
to the Tucker framework, we design a low-rank matrix-based decomposition to
explicitly constrain the interaction rank. With MUTAN, we control the
complexity of the merging scheme while keeping nice interpretable fusion
relations. We show how our MUTAN model generalizes some of the latest VQA
architectures, providing state-of-the-art results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nucleosynthesis Predictions and High-Precision Deuterium Measurements | Two new high-precision measurements of the deuterium abundance from absorbers
along the line of sight to the quasar PKS1937--1009 were presented. The
absorbers have lower neutral hydrogen column densities ($\log N(\mathrm{HI})/\mathrm{cm}^{-2}
\approx 18$) than for previous high-precision measurements, boding well for
further extensions of the sample due to the plenitude of low column density
absorbers. The total high-precision sample now consists of 12 measurements with
a weighted average deuterium abundance of D/H = $2.55\pm0.02\times10^{-5}$. The
sample does not favour a dipole similar to the one detected for the fine
structure constant. The increased precision also calls for improved
nucleosynthesis predictions. For that purpose we have updated the public
AlterBBN code including new reactions, updated nuclear reaction rates, and the
possibility of adding new physics such as dark matter. The standard Big Bang
Nucleosynthesis prediction of D/H = $2.456\pm0.057\times10^{-5}$ is consistent
with the observed value within 1.7 standard deviations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nearly-Linear Time Spectral Graph Reduction for Scalable Graph Partitioning and Data Visualization | This paper proposes a scalable algorithmic framework for spectral reduction
of large undirected graphs. The proposed method allows computing much smaller
graphs while preserving the key spectral (structural) properties of the
original graph. Our framework is built upon the following two key components: a
spectrum-preserving node aggregation (reduction) scheme, as well as a spectral
graph sparsification framework with iterative edge weight scaling. We show that
the resulting spectrally-reduced graphs can robustly preserve the first few
nontrivial eigenvalues and eigenvectors of the original graph Laplacian. In
addition, the spectral graph reduction method has been leveraged to develop
much faster algorithms for multilevel spectral graph partitioning as well as
t-distributed Stochastic Neighbor Embedding (t-SNE) of large data sets. We
conducted extensive experiments using a variety of large graphs and data sets,
and obtained very promising results. For instance, we are able to reduce the
"coPapersCiteseer" graph with 0.43 million nodes and 16 million edges to a much
smaller graph with only 13K (32X fewer) nodes and 17K (950X fewer) edges in
about 16 seconds; the spectrally-reduced graphs also allow us to achieve up to
1100X speedup for spectral graph partitioning and up to 60X speedup for t-SNE
visualization of large data sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Text Indexing and Searching in Sublinear Time | We introduce the first index that can be built in $o(n)$ time for a text of
length $n$, and also queried in $o(m)$ time for a pattern of length $m$. On a
constant-size alphabet, for example, our index uses
$O(n\log^{1/2+\varepsilon}n)$ bits, is built in $O(n/\log^{1/2-\varepsilon} n)$
deterministic time, and finds the $\mathrm{occ}$ pattern occurrences in time
$O(m/\log n + \sqrt{\log n}\log\log n + \mathrm{occ})$, where $\varepsilon>0$
is an arbitrarily small constant. As a comparison, the most recent classical
text index uses $O(n\log n)$ bits, is built in $O(n)$ time, and searches in
time $O(m/\log n + \log\log n + \mathrm{occ})$. We build on a novel text
sampling based on difference covers, whose properties allow us to compute
longest common prefixes in constant time. We extend our
results to the secondary memory model as well, where we give the first
construction in $o(Sort(n))$ time of a data structure with suffix array
functionality, which can search for patterns in almost optimal time, with
an additive penalty of $O(\sqrt{\log_{M/B} n}\log\log n)$, where $M$ is the
size of main memory available and $B$ is the disk block size.
| 1 | 0 | 0 | 0 | 0 | 0 |
Temperature dependence of the bulk Rashba splitting in the bismuth tellurohalides | We study the temperature dependence of the Rashba-split bands in the bismuth
tellurohalides BiTe$X$ $(X=$ I, Br, Cl) from first principles. We find that
increasing temperature reduces the Rashba splitting, with the largest effect
observed in BiTeI with a reduction of the Rashba parameter of $40$% when
temperature increases from $0$ K to $300$ K. These results highlight the
inadequacy of previous interpretations of the observed Rashba splitting in
terms of static-lattice calculations alone. Notably, we find the opposite
trend, a strengthening of the Rashba splitting with rising temperature, in the
pressure-stabilized topological-insulator phase of BiTeI. We propose that the
opposite trends with temperature on either side of the topological phase
transition could be an experimental signature for identifying it. The predicted
temperature dependence is consistent with optical conductivity measurements,
and should also be observable using photoemission spectroscopy, which could
provide further insights into the nature of spin splitting and topology in the
bismuth tellurohalides.
| 0 | 1 | 0 | 0 | 0 | 0 |
Viscosity solutions and the minimal surface system | We give a definition of viscosity solution for the minimal surface system and
prove a version of the Allard regularity theorem in this setting.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ray-tracing semiclassical low frequency acoustic modeling with local and extended reaction boundaries | The recently introduced acoustic ray-tracing semiclassical (RTS) method is
validated for a set of practically relevant boundary conditions. RTS is a
frequency domain geometrical method which directly reproduces the acoustic
Green's function. As previously demonstrated for a rectangular room and weakly
absorbing boundaries with a real and frequency-independent impedance, RTS is
capable of modeling even the lowest modes of such a room, which makes it a
useful method for low frequency sound field modeling in enclosures. In
practice, rooms are furnished with diverse types of materials and acoustic
elements, resulting in a frequency-dependent, phase-modifying
absorption/reflection. In a realistic setting, we test the RTS method with two
additional boundary conditions: a local reaction boundary simulating a
resonating membrane absorber and an extended reaction boundary representing a
porous layer backed by a rigid boundary described within the Delany-Bazley-Miki
model, as well as a combination thereof. The RTS-modeled spatially dependent
pressure response and octave band decay curves with the corresponding
reverberation times are compared to those obtained by the finite element
method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Understanding the Evolution of the WWW Conference | The World Wide Web conference is a well-established and mature venue with an
already long history. Over the years it has been attracting papers reporting
many important research achievements centered around the Web. In this work we
aim at understanding the evolution of WWW conference series by detecting
crucial years and important topics. We propose a simple yet novel approach
based on tracking the classification errors of the conference papers according
to their predicted publication years.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hierarchical State Abstractions for Decision-Making Problems with Computational Constraints | In this semi-tutorial paper, we first review the information-theoretic
approach to account for the computational costs incurred during the search for
optimal actions in a sequential decision-making problem. The traditional (MDP)
framework ignores computational limitations while searching for optimal
policies, essentially assuming that the acting agent is perfectly rational and
aims for exact optimality. Using the free-energy, a variational principle is
introduced that accounts not only for the value of a policy alone, but also
considers the cost of finding this optimal policy. The solution of the
variational equations arising from this formulation can be obtained using
familiar Bellman-like value iterations from dynamic programming (DP) and the
Blahut-Arimoto (BA) algorithm from rate distortion theory. Finally, we
demonstrate the utility of the approach for generating hierarchies of state
abstractions that can be used to best exploit the available computational
resources. A numerical example showcases these concepts for a path-planning
problem in a grid world environment.
| 1 | 0 | 0 | 1 | 0 | 0 |
The generalized Milne problem in gas-dusty atmosphere | We consider the generalized Milne problem in non-conservative plane-parallel
optically thick atmosphere consisting of two components - the free electrons
and small dust particles. Recall, that the traditional Milne problem describes
the propagation of radiation through the conservative (without absorption)
optically thick atmosphere when the source of thermal radiation located far
below the surface. In such case, the flux of propagating light is the same at
every distance in an atmosphere. In the generalized Milne problem, the flux
changes inside the atmosphere. The solutions of the both Milne problems give
the angular distribution and polarization degree of emerging radiation. The
considered problem depends on two dimensionless parameters W and (a+b), which
depend on three parameters: $\eta$ - the ratio of optical depth due to free
electrons to optical depth due to small dust grains; the absorption factor
$\varepsilon$ of dust grains and two coefficients - $\bar b_1$ and $\bar b_2$,
describing the averaged anisotropic dust grains. These coefficients obey the
relation $\bar b_1+3\bar b_2=1$. The goal of the paper is to study the
dependence of the radiation angular distribution and degree of polarization of
emerging light on these parameters. Here we consider only continuum radiation.
| 0 | 1 | 0 | 0 | 0 | 0 |
h-multigrid agglomeration based solution strategies for discontinuous Galerkin discretizations of incompressible flow problems | In this work we exploit agglomeration based $h$-multigrid preconditioners to
speed-up the iterative solution of discontinuous Galerkin discretizations of
the Stokes and Navier-Stokes equations. As a distinctive feature $h$-coarsened
mesh sequences are generated by recursive agglomeration of a fine grid,
admitting arbitrarily unstructured grids of complex domains, and agglomeration
based discontinuous Galerkin discretizations are employed to deal with
agglomerated elements of coarse levels. Both the expense of building coarse
grid operators and the performance of the resulting multigrid iteration are
investigated. For the sake of efficiency, coarse grid operators are inherited
through element-by-element $L^2$ projections, avoiding the cost of numerical
integration over agglomerated elements. Specific care is devoted to the
projection of viscous terms discretized by means of the BR2 dG method. We
demonstrate that enforcing the correct amount of stabilization on coarse grid
levels is mandatory for achieving uniform convergence with respect to the
number of levels. The numerical solution of steady and unsteady, linear and
non-linear problems is considered tackling challenging 2D test cases and 3D
real life computations on parallel architectures. Significant execution time
gains are documented.
| 0 | 1 | 0 | 0 | 0 | 0 |
The content correlation of multiple streaming edges | We study how to detect clusters in a graph defined by a stream of edges,
without storing the entire graph. We extend the approach to dynamic graphs
defined by the most recent edges of the stream and to several streams. The {\em
content correlation }of two streams $\rho(t)$ is the Jaccard similarity of
their clusters in the windows before time $t$. We propose a simple and
efficient method to approximate this correlation online and show that for
dynamic random graphs which follow a power law degree distribution, we can
guarantee a good approximation. As an application, we follow Twitter streams
and compute their content correlations online. We then propose a {\em search by
correlation} where answers to sets of keywords are entirely based on the small
correlations of the streams. Answers are ordered by the correlations, and
explanations can be traced with the stored clusters.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fundamental solutions for second order parabolic systems with drift terms | We construct fundamental solutions of second-order parabolic systems of
divergence form with bounded and measurable leading coefficients and divergence
free first-order coefficients in the class of $BMO^{-1}_x$, under the
assumption that weak solutions of the system satisfy a certain local
boundedness estimate. We also establish a Gaussian upper bound for such
fundamental solutions under the same conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
CMB in the river frame and gauge invariance at second order | GAUGE INVARIANCE: The Sachs-Wolfe formula describing the Cosmic Microwave
Background (CMB) temperature anisotropies is one of the most important
relations in cosmology. Despite its importance, the gauge invariance of this
formula has only been discussed at first order. Here we discuss the subtle
issue of second-order gauge transformations on the CMB. By introducing two
rules (needed to handle the subtle issues), we prove the gauge invariance of
the second-order Sachs-Wolfe formula and provide several compact expressions
which can be useful for the study of gauge transformations in cosmology. Our
results go beyond a simple technicality: we discuss from a physical point of
view several aspects that improve our understanding of the CMB. We also
elucidate how crucial it is to understand gauge transformations on the CMB in
order to avoid errors and/or misconceptions as occurred in the past. THE RIVER
FRAME: We introduce a cosmological frame which we call the river frame. In this
frame, photons (and any other objects) can be thought of as fish swimming in
the river, and relations are easily expressed in either the metric or the covariant
formalism, thus ensuring a transparent geometric meaning. Finally, our results
show that the river frame is useful to make perturbative and non-perturbative
analysis. In particular, it was already used to obtain the fully nonlinear
generalization of the Sachs-Wolfe formula and is used here to describe
second-order perturbations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Active matrix completion with uncertainty quantification | The noisy matrix completion problem, which aims to recover a low-rank matrix
$\mathbf{X}$ from a partial, noisy observation of its entries, arises in many
statistical, machine learning, and engineering applications. In this paper, we
present a new, information-theoretic approach for active sampling (or
designing) of matrix entries for noisy matrix completion, based on the maximum
entropy design principle. One novelty of our method is that it implicitly makes
use of uncertainty quantification (UQ) -- a measure of uncertainty for
unobserved matrix entries -- to guide the active sampling procedure. The
proposed framework reveals several novel insights on the role of compressive
sensing (e.g., coherence) and coding design (e.g., Latin squares) on the
sampling performance and UQ for noisy matrix completion. Using such insights,
we develop an efficient posterior sampler for UQ, which is then used to guide a
closed-form sampling scheme for matrix entries. Finally, we illustrate the
effectiveness of this integrated sampling / UQ methodology in simulation
studies and two applications to collaborative filtering.
| 0 | 0 | 0 | 1 | 0 | 0 |
An upper bound for the number of friable values of a polynomial | For $Q$ a polynomial with integer coefficients and $x, y \geq 2$, we prove
upper bounds for the quantity $\Psi_Q(x, y) = |\{n\leq x: p\mid Q(n)\Rightarrow
p\leq y\}|$.
We apply our results to a problem of De Koninck, Doyon and Luca on integers
divisible by the square of their largest prime factor. As a corollary to our
arguments, we improve the known level of distribution of the set $\{n^2-D\}$
for well-factorable moduli, previously due to Iwaniec. We also consider the
Chebyshev problem of estimating $\max\{P^+(n^2-D), n\leq x\}$ and make
explicit, in Deshouillers-Iwaniec's state-of-the-art result, the dependence on
the Selberg eigenvalue conjecture.
| 0 | 0 | 1 | 0 | 0 | 0 |
General analytical solution for the electromagnetic grating diffraction problem | Implementing the modal method in the electromagnetic grating diffraction
problem delivered by the curvilinear coordinate transformation yields a general
analytical solution to the 1D grating diffraction problem in the form of a
T-matrix. Simultaneously, it is shown that the validity of the Rayleigh
expansion is defined by the validity of the modal expansion in a transformed
medium delivered by the coordinate transformation.
| 0 | 1 | 1 | 0 | 0 | 0 |
Synthetic geometry of differential equations: I. Jets and comonad structure | We give an abstract formulation of the formal theory of partial differential
equations (PDEs) in synthetic differential geometry, one that would seamlessly
generalize the traditional theory to a range of enhanced contexts, such as
super-geometry, higher (stacky) differential geometry, or even a combination of
both. A motivation for such a level of generality is the eventual goal of
solving the open problem of covariant geometric pre-quantization of locally
variational field theories, which may include fermions and (higher) gauge
fields. (abridged)
| 0 | 0 | 1 | 0 | 0 | 0 |
Precision Prediction for the Cosmological Density Distribution | The distribution of matter in the universe is, to first order, lognormal.
Improving this approximation requires characterization of the third moment
(skewness) of the log density field. Thus, using Millennium Simulation
phenomenology and building on previous work, we present analytic fits for the
mean, variance, and skewness of the log density field $A$. We further show that
a Generalized Extreme Value (GEV) distribution accurately models $A$; we submit
that this GEV behavior is the result of strong intrapixel correlations, without
which the smoothed distribution would tend (by the Central Limit Theorem)
toward a Gaussian. Our GEV model yields cumulative distribution functions
accurate to within 1.7 per cent for near-concordance cosmologies, over a range
of redshifts and smoothing scales.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hamiltonian analogs of combustion engines: a systematic exception to adiabatic decoupling | Workhorse theories throughout all of physics derive effective Hamiltonians to
describe slow time evolution, even though low-frequency modes are actually
coupled to high-frequency modes. Such effective Hamiltonians are accurate
because of \textit{adiabatic decoupling}: the high-frequency modes `dress' the
low-frequency modes, and renormalize their Hamiltonian, but they do not
steadily inject energy into the low-frequency sector. Here, however, we
identify a broad class of dynamical systems in which adiabatic decoupling fails
to hold, and steady energy transfer across a large gap in natural frequency
(`steady downconversion') instead becomes possible, through nonlinear
resonances of a certain form. Instead of adiabatic decoupling, the special
features of multiple time scale dynamics lead in these cases to efficiency
constraints that somewhat resemble thermodynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Arbitrary Noise Augmentation - Deep Learning for Sampling from Arbitrary Probability Distributions | Accurate noise modelling is important for training of deep learning
reconstruction algorithms. While noise models are well known for traditional
imaging techniques, the noise distribution of a novel sensor may be difficult
to determine a priori. Therefore, we propose learning arbitrary noise
distributions. To do so, this paper proposes a fully connected neural network
model to map samples from a uniform distribution to samples of any explicitly
known probability density function. During the training, the Jensen-Shannon
divergence between the distribution of the model's output and the target
distribution is minimized. We experimentally demonstrate that our model
converges towards the desired state. It provides an alternative to existing
sampling methods such as inversion sampling, rejection sampling, Gaussian
mixture models and Markov-Chain-Monte-Carlo. Our model has high sampling
efficiency and is easily applied to any probability distribution, without the
need of further analytical or numerical calculations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Unbiased Simulation for Optimizing Stochastic Function Compositions | In this paper, we introduce an unbiased gradient simulation algorithm for
solving convex optimization problems with stochastic function compositions. We
show that the unbiased gradient generated from the algorithm has finite
variance and finite expected computation cost. We then combine the unbiased
gradient simulation with two variance-reduced algorithms (namely SVRG and SCSG)
and show that the proposed optimization algorithms based on unbiased gradient
simulations exhibit satisfactory convergence properties. Specifically, in the
SVRG case, the algorithm with simulated gradient can be shown to converge
linearly to optima in expectation and almost surely under strong convexity.
Finally, in the numerical experiments, we apply the algorithms to two
important cases of stochastic function composition optimization: maximizing
the Cox's partial likelihood model and training conditional random fields.
| 0 | 0 | 0 | 1 | 0 | 0 |
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context | A robot's ability to understand or ground natural language instructions is
fundamentally tied to its knowledge about the surrounding world. We present an
approach to grounding natural language utterances in the context of factual
information gathered through natural-language interactions and past visual
observations. A probabilistic model estimates, from a natural language
utterance, the objects, relations, and actions that the utterance refers to, the
objectives for future robotic actions it implies, and generates a plan to
execute those actions while updating a state representation to include newly
acquired knowledge from the visual-linguistic context. Grounding a command
necessitates a representation for past observations and interactions; however,
maintaining the full context consisting of all possible observed objects,
attributes, spatial relations, actions, etc., over time is intractable.
Instead, our model, Temporal Grounding Graphs, maintains a learned state
representation for a belief over factual groundings, those derived from
natural-language interactions, and lazily infers new groundings from visual
observations using the context implied by the utterance. This work
significantly expands the range of language that a robot can understand by
incorporating factual knowledge and observations of its workspace in its
inference about the meaning and grounding of natural-language utterances.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonconvex generalizations of ADMM for nonlinear equality constrained problems | The growing demand on efficient and distributed optimization algorithms for
large-scale data stimulates the popularity of the Alternating Direction Method of
Multipliers (ADMM) in numerous areas, such as compressive sensing, matrix
completion, and sparse feature learning. While linear equality constrained
problems have been extensively explored with ADMM, there is no generic
framework for ADMM to solve problems with nonlinear equality
constraints, which are common in practical applications (e.g., orthogonality
constraints). To address this problem, in this paper we propose a new generic
ADMM framework for handling nonlinear equality constraints, called neADMM.
First, we propose the generalized problem formulation and systematically
provide the sufficient condition for the convergence of neADMM. Second, we
prove a sublinear convergence rate based on variational inequality framework
and also provide a novel acceleration strategy for the update of the penalty
parameter. In addition, several practical applications under the generic
framework of neADMM are provided. Experimental results on several applications
demonstrate the usefulness of our neADMM.
| 1 | 0 | 1 | 0 | 0 | 0 |
Interpretable LSTMs For Whole-Brain Neuroimaging Analyses | The analysis of neuroimaging data poses several strong challenges, in
particular, due to its high dimensionality, its strong spatio-temporal
correlation and the comparably small sample sizes of the respective datasets.
To address these challenges, conventional decoding approaches such as the
searchlight reduce the complexity of the decoding problem by considering local
clusters of voxels only, thereby neglecting the distributed spatial patterns
of brain activity underlying many cognitive states. In this work, we introduce
the DLight framework, which overcomes these challenges by utilizing a long
short-term memory unit (LSTM) based deep neural network architecture to analyze
the spatial dependency structure of whole-brain fMRI data. In order to maintain
interpretability of the neuroimaging data, we adapt the layer-wise relevance
propagation (LRP) method. Thereby, we enable the neuroscientist user to study
the learned association of the LSTM between the data and the cognitive state of
the individual. We demonstrate the versatility of DLight by applying it to a
large fMRI dataset of the Human Connectome Project. We show that the decoding
performance of our method scales better with large datasets, and moreover
outperforms conventional decoding approaches, while still detecting
physiologically appropriate brain areas for the cognitive states classified. We
also demonstrate that DLight is able to detect these areas on several levels of
data granularity (i.e., group, subject, trial, time point).
| 0 | 0 | 0 | 0 | 1 | 0 |
Effect of Isopropanol on Gold Assisted Chemical Etching of Silicon Microstructures | Wet etching is an essential and complex step in semiconductor device
processing. Metal-Assisted Chemical Etching (MacEtch) is fundamentally a wet
but anisotropic etching method. In the MacEtch technique, there are still a
number of unresolved challenges preventing the optimal fabrication of
high-aspect-ratio semiconductor micro- and nanostructures, such as undesired
etching, uncontrolled catalyst movement, non-uniformity and micro-porosity in
the metal-free areas. Here, an optimized MacEtch process using a
nanostructured Au catalyst is proposed for the fabrication of high-aspect-ratio Si
microstructures. The addition of isopropanol as surfactant in the HF-H2O2 water
solution improves the uniformity and the control of the H2 gas release. An
additional KOH etch eventually removes the unwanted nanowires left by the
MacEtch through the nanoporous catalyst film. We demonstrate the benefits of
the isopropanol addition for reducing the etching rate and the nanoporosity of
etched structures, with a monotonic decrease as a function of the isopropanol
concentration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Integrating Runtime Values with Source Code to Facilitate Program Comprehension | An inherently abstract nature of source code makes programs difficult to
understand. In our research, we designed three techniques utilizing concrete
values of variables and other expressions during program execution.
RuntimeSearch is a debugger extension searching for a given string in all
expressions at runtime. DynamiDoc generates documentation sentences containing
examples of arguments, return values and state changes. RuntimeSamp augments
source code lines in the IDE (integrated development environment) with sample
variable values. In this post-doctoral article, we briefly describe these three
approaches and related motivational studies, surveys and evaluations. We also
reflect on the PhD study, providing advice for current students. Finally,
short-term and long-term future work is described.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nearest Embedded and Embedding Self-Nested Trees | Self-nested trees present a systematic form of redundancy in their subtrees
and thus achieve optimal compression rates by DAG compression. A method for
quantifying the degree of self-similarity of plants through self-nested trees
was introduced by Godin and Ferraro in 2010. The procedure consists of
computing a self-nested approximation, called the nearest embedding self-nested
tree, that both embeds the plant and is the closest to it. In this paper, we
propose a new algorithm that computes the nearest embedding self-nested tree
with a smaller overall complexity, but also the nearest embedded self-nested
tree. We show from simulations that the latter is mostly the closest to the
initial data, which suggests that this better approximation should be used as a
privileged measure of the degree of self-similarity of plants.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computing the Lusztig--Vogan Bijection | Let $G$ be a connected complex reductive algebraic group with Lie algebra
$\mathfrak{g}$. The Lusztig--Vogan bijection relates two bases for the bounded
derived category of $G$-equivariant coherent sheaves on the nilpotent cone
$\mathcal{N}$ of $\mathfrak{g}$. One basis is indexed by $\Lambda^+$, the set
of dominant weights of $G$, and the other by $\Omega$, the set of pairs
$(\mathcal{O}, \mathcal{E})$ consisting of a nilpotent orbit $\mathcal{O}
\subset \mathcal{N}$ and an irreducible $G$-equivariant vector bundle
$\mathcal{E} \rightarrow \mathcal{O}$. The existence of the Lusztig--Vogan
bijection $\gamma \colon \Omega \rightarrow \Lambda^+$ was proven by
Bezrukavnikov, and an algorithm computing $\gamma$ in type $A$ was given by
Achar. Herein we present a combinatorial description of $\gamma$ in type $A$
that subsumes and dramatically simplifies Achar's algorithm.
| 0 | 0 | 1 | 0 | 0 | 0 |
Divide and Conquer: Variable Set Separation in Hybrid Systems Reachability Analysis | In this paper we propose an improvement for flowpipe-construction-based
reachability analysis techniques for hybrid systems. Such methods apply
iterative successor computations to pave the reachable region of the state
space by state sets in an over-approximative manner. As the computational costs
steeply increase with the dimension, in this work we analyse the possibilities
for improving scalability by dividing the search space into sub-spaces and
executing reachability computations in the sub-spaces instead of the global
space. We formalise such an algorithm and provide experimental evaluations to
compare the efficiency as well as the precision of our sub-space search to the
original search in the global space.
| 1 | 0 | 0 | 0 | 0 | 0 |
The bottom of the spectrum of time-changed processes and the maximum principle of Schrödinger operators | We give a necessary and sufficient condition for the maximum principle of
Schrödinger operators in terms of the bottom of the spectrum of
time-changed processes. As a corollary, we obtain a sufficient condition for
the Liouville property of Schrödinger operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
Autocorrelation and Lower Bound on the 2-Adic Complexity of LSB Sequence of $p$-ary $m$-Sequence | In modern stream ciphers, many algorithms, such as ZUC, the LTE
encryption algorithm and the LTE integrity algorithm, use bit-component sequences
of $p$-ary $m$-sequences as their input. Therefore, analyzing
the statistical properties (for example, autocorrelation, linear complexity and
2-adic complexity) of bit-component sequences of $p$-ary $m$-sequences is
becoming an important research topic. In this paper, we first derive some
autocorrelation properties of LSB (Least Significant Bit) sequences of $p$-ary
$m$-sequences, i.e., we convert the problem of computing autocorrelations of
LSB sequences of period $p^n-1$ for any positive $n\geq2$ to the problem of
determining autocorrelations of LSB sequence of period $p-1$. Then, based on
this property and computer calculation, we list some autocorrelation
distributions of LSB sequences of $p$-ary $m$-sequences with order $n$ for some
small primes $p$'s, such as $p=3,5,7,11,17,31$. Additionally, using their
autocorrelation distributions and the method inspired by Hu, we give the lower
bounds on the 2-adic complexities of these LSB sequences. Our results show that
the main parts of all the lower bounds on the 2-adic complexity of these LSB
sequences are larger than $\frac{N}{2}$, where $N$ is the period of these
sequences. Therefore, these bounds are large enough to resist the analysis of
RAA (Rational Approximation Algorithm) for FCSR (Feedback with Carry Shift
Register). Especially, for a Mersenne prime $p=2^k-1$, since all its
bit-component sequences of a $p$-ary $m$-sequence are shift equivalent, our
results hold for all its bit-component sequences.
| 1 | 0 | 0 | 0 | 0 | 0 |
Integration of Machine Learning Techniques to Evaluate Dynamic Customer Segmentation Analysis for Mobile Customers | The telecommunications industry is highly competitive, which means that
mobile providers need a business intelligence model that can be used to achieve
an optimal level of churn, as well as a minimal level of cost in marketing
activities. Machine learning applications can be used to provide guidance on
marketing strategies. Furthermore, data mining techniques can be used in the
process of customer segmentation. The purpose of this paper is to provide a
detailed analysis of the C.5 algorithm, within naive Bayesian modelling, for the
task of segmenting telecommunication customers by behavioural profiling according
to their billing and socio-demographic aspects. The results have been
validated experimentally.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Convex Cycle-based Degradation Model for Battery Energy Storage Planning and Operation | A vital aspect in energy storage planning and operation is to accurately
model its operational cost, which mainly comes from the battery cell
degradation. Battery degradation can be viewed as a complex material fatigue
process based on stress cycles. The rainflow algorithm is a popular method for
cycle identification in material fatigue processes, and has been extensively used
in battery degradation assessment. However, the rainflow algorithm does not
have a closed form, which makes it difficult to include in
optimization. In this paper, we prove that the rainflow cycle-based cost is convex.
Convexity enables the proposed degradation model to be incorporated in
different battery optimization problems and guarantees the solution quality. We
provide a subgradient algorithm to solve the problem. A case study on PJM
regulation market demonstrates the effectiveness of the proposed degradation
model in maximizing the battery operating profits as well as extending its
lifetime.
| 1 | 0 | 1 | 0 | 0 | 0 |
Demonstration of cascaded modulator-chicane micro-bunching of a relativistic electron beam | We present results of an experiment showing the first successful
demonstration of a cascaded micro-bunching scheme. Two modulator-chicane
pre-bunchers arranged in series and a high power mid-IR laser seed are used to
modulate a 52 MeV electron beam into a train of sharp microbunches phase-locked
to the external drive laser. This configuration allows us to increase the fraction
of electrons trapped in a strongly tapered inverse free electron laser (IFEL)
undulator to 96\%, with up to 78\% of the particles accelerated to the final
design energy yielding a significant improvement compared to the classical
single buncher scheme. These results represent a critical advance in
laser-based longitudinal phase space manipulations and find application both in
high gradient advanced acceleration and in high peak and average power
coherent radiation sources.
| 0 | 1 | 0 | 0 | 0 | 0 |
Grothendieck rigidity of 3-manifold groups | We show that fundamental groups of compact, orientable, irreducible
3-manifolds with toroidal boundary are Grothendieck rigid.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sobczyk's simplicial calculus does not have a proper foundation | The pseudoscalars in Garret Sobczyk's paper \emph{Simplicial Calculus with
Geometric Algebra} are not well defined. Therefore his calculus does not have a
proper foundation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hierarchy of Information Scrambling, Thermalization, and Hydrodynamic Flow in Graphene | We determine the information scrambling rate $\lambda_{L}$ due to
electron-electron Coulomb interaction in graphene. $\lambda_{L}$ characterizes
the growth of chaos and has been argued to give information about the
thermalization and hydrodynamic transport coefficients of a many-body system.
We demonstrate that for strong coupling, $\lambda_{L}$ behaves similarly to
transport and energy relaxation rates. A weak coupling analysis, however,
reveals that scrambling is related to dephasing or single particle relaxation.
Furthermore, $\lambda_{L}$ is found to be parametrically larger than the
collision rate relevant for hydrodynamic processes, such as electrical
conduction or viscous flow, and the rate of energy relaxation, relevant for
thermalization. Thus, while scrambling is obviously necessary for
thermalization and quantum transport, it generically does not set the time
scale for these processes. In addition we derive a quantum kinetic theory for
information scrambling that resembles the celebrated Boltzmann equation and
offers a physically transparent insight into quantum chaos in many-body
systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Neural Collaborative Autoencoder | In recent years, deep neural networks have yielded state-of-the-art
performance on several tasks. Although some recent works have focused on
combining deep learning with recommendation, we highlight three issues of
existing models. First, these models cannot work on both explicit and implicit
feedback, since the network structures are specially designed for one
particular case. Second, due to the difficulty of training deep neural
networks, existing explicit models do not fully exploit the expressive
potential of deep learning. Third, neural network models are more prone to
overfitting in the implicit setting than shallow models. To tackle these issues, we present
a generic recommender framework called Neural Collaborative Autoencoder (NCAE)
to perform collaborative filtering, which works well for both explicit feedback
and implicit feedback. NCAE can effectively capture the subtle hidden
relationships between interactions via a non-linear matrix factorization
process. To optimize the deep architecture of NCAE, we develop a three-stage
pre-training mechanism that combines supervised and unsupervised feature
learning. Moreover, to prevent overfitting on the implicit setting, we propose
an error reweighting module and a sparsity-aware data-augmentation strategy.
Extensive experiments on three real-world datasets demonstrate that NCAE can
significantly advance the state-of-the-art.
| 1 | 0 | 0 | 1 | 0 | 0 |
Reexamination of Tolman's law and the Gibbs adsorption equation for curved interfaces | The influence of the surface curvature on the surface tension of small
droplets in equilibrium with a surrounding vapour, or small bubbles in
equilibrium with a surrounding liquid, can be expanded as $\gamma(R) = \gamma_0
+ c_1\gamma_0/R + O(1/R^2)$, where $R = R_\gamma$ is the radius of the surface
of tension and $\gamma_0$ is the surface tension of the planar interface,
corresponding to zero curvature. According to Tolman's law, the first-order
coefficient in this expansion is assumed to be related to the planar limit
$\delta_0$ of the Tolman length, i.e., the difference $\delta = R_\rho -
R_\gamma$ between the equimolar radius and the radius of the surface of
tension, by $c_1 = -2\delta_0$.
We show here that the deduction of Tolman's law from interfacial
thermodynamics relies on an inaccurate application of the Gibbs adsorption
equation to dispersed phases (droplets or bubbles). A revision of the
underlying theory reveals that the adsorption equation needs to be employed in
an alternative manner to that suggested by Tolman. Accordingly, we develop a
generalized Gibbs adsorption equation which consistently takes the size
dependence of interfacial properties into account, and show that from this
equation, a relation between the Tolman length and the influence of the size of
the dispersed phase on the surface tension cannot be deduced, invalidating the
argument which was put forward by Tolman [J. Chem. Phys. 17 (1949) 333].
| 0 | 1 | 0 | 0 | 0 | 0 |