title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1)
---|---|---|---|---|---|---|---
A 588 Gbps LDPC Decoder Based on Finite-Alphabet Message Passing | An ultra-high-throughput low-density parity-check (LDPC) decoder with an unrolled full-parallel architecture is proposed, which achieves the highest decoding throughput among previously reported LDPC decoders in the literature. The decoder benefits from a serial message-transfer approach
between the decoding stages to alleviate the well-known routing congestion
problem in parallel LDPC decoders. Furthermore, a finite-alphabet message
passing algorithm is employed to replace the variable node update rule of the
standard min-sum decoder with look-up tables, which are designed in a way that
maximizes the mutual information between decoding messages. The proposed
algorithm results in an architecture with reduced bit-width messages, leading
to a significantly higher decoding throughput and to a lower area as compared
to a min-sum decoder when serial message-transfer is used. The architecture is
placed and routed for the standard min-sum reference decoder and for the
proposed finite-alphabet decoder using a custom pseudo-hierarchical backend
design strategy to further alleviate routing congestion and to handle the
large design. Post-layout results show that the finite-alphabet decoder with
the serial message-transfer architecture achieves a throughput as large as 588
Gbps with an area of 16.2 mm$^2$ and dissipates an average power of 22.7 pJ per
decoded bit in a 28 nm FD-SOI library. Compared to the reference min-sum
decoder, this corresponds to 3.1 times smaller area and 2 times better energy
efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
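The abstract above replaces the variable-node update of a standard min-sum decoder with mutual-information-maximizing look-up tables. As context, here is a minimal sketch of the conventional min-sum check-node update the decoder builds on; the LUT design itself is not shown, and this is not the paper's implementation:

```python
def minsum_check_update(msgs):
    """Standard min-sum check-node update: each outgoing message is the
    product of the signs of the other incoming messages times the
    minimum of their magnitudes."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

For incoming messages [2.0, -3.0, 1.5], the outgoing messages are [-1.5, 1.5, -2.0]: each output combines the sign product and minimum magnitude of the other inputs.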
On q-analogues of quadratic Euler sums | In this paper we define the generalized q-analogues of Euler sums and present
a new family of identities for q-analogues of Euler sums by using the method of
Jackson q-integral representations of series. We then apply it to obtain a
family of identities relating quadratic Euler sums to linear sums and
q-polylogarithms. Furthermore, we also use certain stuffle products to evaluate
several q-series with q-harmonic numbers. Some interesting new results and
illustrative examples are considered. Finally, we obtain some explicit relations for the classical Euler sums as q approaches 1.
| 0 | 0 | 1 | 0 | 0 | 0 |
An iterative aggregation and disaggregation approach to the calculation of steady-state distributions of continuous processes | A mapping of the process on a continuous configuration space to the symbolic
representation of the motion on a discrete state space will be combined with an
iterative aggregation and disaggregation (IAD) procedure to obtain steady state
distributions of the process. The IAD speeds up the convergence to the unit
eigenvector, which is the steady state distribution, by forming smaller
aggregated matrices whose unit eigenvector solutions are used to refine
approximations of the steady state vector until convergence is reached. This
method works very efficiently and can be used together with distributed or
parallel computing methods to obtain high resolution images of the steady state
distribution of complex atomistic or energy landscape type problems. The method
is illustrated in two numerical examples. In the first example the transition
matrix is assumed to be known. The second example represents an overdamped
Brownian motion process subject to a dichotomously changing external potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
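The IAD procedure above accelerates convergence to the unit eigenvector of a transition matrix. A sketch of the baseline it speeds up, plain power iteration for the stationary distribution of a hypothetical row-stochastic matrix (the aggregation/disaggregation refinement itself is not shown):

```python
def steady_state(P, tol=1e-12, max_iter=10000):
    """Power iteration for the stationary distribution pi = pi P of a
    row-stochastic transition matrix P, i.e. the unit-eigenvalue left
    eigenvector that IAD refines via smaller aggregated matrices."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        # one step of pi <- pi P
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

For the 2-state chain P = [[0.9, 0.1], [0.5, 0.5]], the stationary distribution is (5/6, 1/6).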
Reynolds number dependence of the structure functions in homogeneous turbulence | We compare the predictions of stochastic closure theory (SCT) with
experimental measurements of homogeneous turbulence made in the Variable
Density Turbulence Tunnel (VDTT) at the Max Planck Institute for Dynamics and Self-Organization in Göttingen. While the general form of SCT contains
infinitely many free parameters, the data permit us to reduce the number to
seven, only three of which are active over the entire inertial range. Of these
three, one parameter characterizes the variance of the mean field noise in SCT
and another characterizes the rate in the large deviations of the mean. The
third parameter is the decay exponent of the Fourier variables in the Fourier
expansion of the noise, which characterizes the smoothness of the turbulent
velocity. SCT compares favorably with velocity structure functions measured in
the experiment. We considered even-order structure functions ranging in order
from two to eight as well as the third-order structure functions at five
Taylor-Reynolds numbers ($R_\lambda$) between 110 and 1450. The comparisons highlight
several advantages of the SCT, which include explicit predictions for the
structure functions at any scale and for any Reynolds number. We observed that
finite-$R_\lambda$ corrections, for instance, are important even at the highest Reynolds
numbers produced in the experiments. SCT gives us the correct basis function to
express all the moments of the velocity differences in turbulence in Fourier
space. The SCT produces the coefficients of the series and so determines the
statistical quantities that characterize the small scales in turbulence. It
also characterizes the random force acting on the fluid in the stochastic
Navier-Stokes equation, as described in the paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
Power and Energy-efficiency Roofline Model for GPUs | Energy consumption has become a major concern in recent years, and developers need to take energy efficiency into account when they design algorithms. Their designs need to be energy-efficient and low-power while still achieving the attainable performance provided by the underlying hardware.
However, different optimization techniques have different effects on power and
energy-efficiency and a visual model would assist in the selection process.
In this paper, we extended the roofline model and provided a visual
representation of optimization strategies for power consumption. Our model is
composed of various ceilings regarding each strategy we included in our models.
One roofline model for computational performance and one for memory performance are introduced. We assembled our models from optimization strategies for two widespread GPUs from NVIDIA: the GeForce GTX 970 and the Tesla K80.
| 1 | 0 | 0 | 0 | 0 | 0 |
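The roofline model this abstract extends caps attainable performance at either the compute ceiling or memory bandwidth times arithmetic intensity (operations per byte moved). A minimal sketch with illustrative numbers, not the GTX 970 or K80 specifications:

```python
def roofline(peak_flops, mem_bandwidth, arithmetic_intensity):
    """Classic roofline: attainable performance (FLOP/s) is the lesser of
    the compute ceiling and bandwidth * arithmetic intensity (FLOP/byte)."""
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)
```

With a hypothetical 4000 GFLOP/s ceiling and 200 GB/s of bandwidth, a kernel at 10 FLOP/byte is memory-bound (2000 GFLOP/s attainable), while one at 30 FLOP/byte hits the compute ceiling.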
Equivalence of estimates on domain and its boundary | Let $\Omega$ be a pseudoconvex domain in $\mathbb C^n$ with smooth boundary
$b\Omega$. We define general estimates $(f\text{-}\mathcal M)^k_{\Omega}$ and
$(f\text{-}\mathcal M)^k_{b\Omega}$ on $k$-forms for the complex Laplacian
$\Box$ on $\Omega$ and the Kohn-Laplacian $\Box_b$ on $b\Omega$. For $1\le k\le
n-2$, we show that $(f\text{-}\mathcal M)^k_{b\Omega}$ holds if and only if
$(f\text{-}\mathcal M)^k_{\Omega}$ and $(f\text{-}\mathcal M)^{n-k-1}_{\Omega}$
hold. Our proof relies on Kohn's method in [Ann. of Math. (2), 156(1):213--248,
2002].
| 0 | 0 | 1 | 0 | 0 | 0 |
Waveform and Spectrum Management for Unmanned Aerial Systems Beyond 2025 | The application domains of civilian unmanned aerial systems (UASs) include
agriculture, exploration, transportation, and entertainment. The expected
growth of the UAS industry brings along new challenges: Unmanned aerial vehicle
(UAV) flight control signaling requires low throughput, but extremely high
reliability, whereas the data rate for payload data can be significant. This
paper develops UAV number projections and concludes that small and micro UAVs
will dominate the US airspace with accelerated growth between 2028 and 2032. We
analyze the orthogonal frequency division multiplexing (OFDM) waveform because
it can provide the much needed flexibility, spectral efficiency, and,
potentially, reliability and derive suitable OFDM waveform parameters as a
function of UAV flight characteristics. OFDM also lends itself to agile
spectrum access. Based on our UAV growth predictions, we conclude that dynamic
spectrum access is needed and discuss the applicability of spectrum sharing
techniques for future UAS communications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mining Public Opinion about Economic Issues: Twitter and the U.S. Presidential Election | Opinion polls have been the bridge between public opinion and politicians in
elections. However, developing surveys to capture people's feedback on economic issues is limited, expensive, and time-consuming. In recent years, social media platforms such as Twitter have enabled people to share their opinions regarding elections, providing a platform for collecting large amounts of social data. This paper proposes a computational public
opinion mining approach to explore the discussion of economic issues in social
media during an election. Current related studies use text mining methods
independently for election analysis and election prediction; this research
combines two text mining methods: sentiment analysis and topic modeling. The
proposed approach has effectively been deployed on millions of tweets to
analyze economic concerns of people during the 2012 US presidential election.
| 1 | 0 | 0 | 1 | 0 | 0 |
The igus Humanoid Open Platform: A Child-sized 3D Printed Open-Source Robot for Research | The use of standard robotic platforms can accelerate research and lower the
entry barrier for new research groups. There exist many affordable humanoid
standard platforms in the lower size ranges of up to 60 cm, but larger humanoid
robots quickly become less affordable and more difficult to operate, maintain
and modify. The igus Humanoid Open Platform is a new and affordable, fully
open-source humanoid platform. At 92 cm in height, the robot is capable of
interacting in an environment meant for humans, and is equipped with enough
sensors, actuators and computing power to support researchers in many fields.
The structure of the robot is entirely 3D printed, leading to a lightweight and
visually appealing design. The main features of the platform are described in
this article.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sports stars: analyzing the performance of astronomers at visualization-based discovery | In this data-rich era of astronomy, there is a growing reliance on automated
techniques to discover new knowledge. The role of the astronomer may change
from being a discoverer to being a confirmer. But what do astronomers actually
look at when they distinguish between "sources" and "noise?" What are the
differences between novice and expert astronomers when it comes to visual-based
discovery? Can we identify elite talent or coach astronomers to maximize their
potential for discovery? By looking to the field of sports performance
analysis, we consider an established, domain-wide approach, where the expertise
of the viewer (i.e. a member of the coaching team) plays a crucial role in
identifying and determining the subtle features of gameplay that provide a
winning advantage. As an initial case study, we investigate whether the
SportsCode performance analysis software can be used to understand and document
how an experienced HI astronomer makes discoveries in spectral data cubes. We
find that the process of timeline-based coding can be applied to spectral cube
data by mapping spectral channels to frames within a movie. SportsCode provides
a range of easy to use methods for annotation, including feature-based codes
and labels, text annotations associated with codes, and image-based drawing.
The outputs, including instance movies that are uniquely associated with coded
events, provide the basis for a training program or team-based analysis that
could be used in unison with discipline specific analysis software. In this
coordinated approach to visualization and analysis, SportsCode can act as a
visual notebook, recording the insight and decisions in partnership with
established analysis methods. Alternatively, in situ annotation and coding of
features would be a valuable addition to existing and future visualization and
analysis packages.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chimera states in complex networks: interplay of fractal topology and delay | Chimera states are an example of intriguing partial synchronization patterns
emerging in networks of identical oscillators. They consist of spatially
coexisting domains of coherent (synchronized) and incoherent (desynchronized)
dynamics. We analyze chimera states in networks of Van der Pol oscillators with
hierarchical connectivities, and elaborate the role of time delay introduced in
the coupling term. In the parameter plane of coupling strength and delay time
we find tongue-like regions of existence of chimera states alternating with
regions of existence of coherent travelling waves. We demonstrate that by
varying the time delay one can deliberately stabilize desired spatio-temporal
patterns in the system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dense 3D Facial Reconstruction from a Single Depth Image in Unconstrained Environment | With the increasing demands of virtual reality applications such as 3D films, virtual human-machine interaction and virtual agents, the analysis of 3D human faces is considered increasingly important as a fundamental step for those virtual reality tasks. Due to the information provided by the additional dimension, 3D facial reconstruction enables the aforementioned tasks to be achieved with higher accuracy than those based on 2D facial
analysis. The denser the 3D facial model is, the more information it could
provide. However, most existing dense 3D facial reconstruction methods require
complicated processing and high system cost. To this end, this paper presents a
novel method that simplifies the process of dense 3D facial reconstruction by
employing only one frame of depth data obtained with an off-the-shelf RGB-D
sensor. The experiments showed competitive results with real world data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Concentration phenomena for a fractional Schrödinger-Kirchhoff type equation | In this paper we deal with the multiplicity and concentration of positive
solutions for the following fractional Schrödinger-Kirchhoff type equation
\begin{equation*} M\left(\frac{1}{\varepsilon^{3-2s}}
\iint_{\mathbb{R}^{6}}\frac{|u(x)- u(y)|^{2}}{|x-y|^{3+2s}} dxdy +
\frac{1}{\varepsilon^{3}} \int_{\mathbb{R}^{3}} V(x)u^{2}
dx\right)[\varepsilon^{2s} (-\Delta)^{s}u+ V(x)u]= f(u) \, \mbox{in}
\mathbb{R}^{3} \end{equation*} where $\varepsilon>0$ is a small parameter,
$s\in (\frac{3}{4}, 1)$, $(-\Delta)^{s}$ is the fractional Laplacian, $M$ is a
Kirchhoff function, $V$ is a continuous positive potential and $f$ is a
superlinear continuous function with subcritical growth. By using penalization
techniques and Ljusternik-Schnirelmann theory, we investigate the relation between the number of positive solutions and the topology of the set where the potential attains its minimum.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inferactive data analysis | We describe inferactive data analysis, so-named to denote an interactive
approach to data analysis with an emphasis on inference after data analysis.
Our approach is a compromise between Tukey's exploratory (roughly speaking
"model free") and confirmatory data analysis (roughly speaking classical and
"model based"), also allowing for Bayesian data analysis. We view this approach
as close in spirit to current practice of applied statisticians and data
scientists while allowing frequentist guarantees for results to be reported in
the scientific literature, or Bayesian results where the data scientist may
choose the statistical model (and hence the prior) after some initial
exploratory analysis. While this approach to data analysis does not cover every scenario, nor every possible algorithm data scientists may use, we see it as a useful step toward providing concrete tools (with frequentist statistical guarantees) for current data scientists. The basis of inference we use is
selective inference [Lee et al., 2016, Fithian et al., 2014], in particular its
randomized form [Tian and Taylor, 2015a]. The randomized framework, besides
providing additional power and shorter confidence intervals, also provides
explicit forms for relevant reference distributions (up to normalization)
through the {\em selective sampler} of Tian et al. [2016]. The reference
distributions are constructed from a particular conditional distribution formed
from what we call a DAG-DAG -- a Data Analysis Generative DAG. As sampling
conditional distributions in DAGs is generally complex, the selective sampler
is crucial to any practical implementation of inferactive data analysis. Our
principal goal is in reviewing the recent developments in selective inference
as well as describing the general philosophy of selective inference.
| 0 | 0 | 1 | 1 | 0 | 0 |
Promising Accurate Prefix Boosting for sequence-to-sequence ASR | In this paper, we present promising accurate prefix boosting (PAPB), a
discriminative training technique for attention based sequence-to-sequence
(seq2seq) ASR. PAPB is devised to unify the training and testing scheme in an
effective manner. The training procedure involves maximizing the score of each
partial correct sequence obtained during beam search compared to other
hypotheses. The training objective also includes minimization of token
(character) error rate. PAPB shows its efficacy by achieving 10.8\% and 3.8\%
WER with and without RNNLM respectively on Wall Street Journal dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tunable GMM Kernels | The recently proposed "generalized min-max" (GMM) kernel can be efficiently
linearized, with direct applications in large-scale statistical learning and
fast near neighbor search. The linearized GMM kernel was extensively compared with the linearized radial basis function (RBF) kernel. On a large number of
classification tasks, the tuning-free GMM kernel performs (surprisingly) well
compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect
that the GMM kernel ought to be further improved if we introduce tuning
parameters.
In this paper, we study three simple constructions of tunable GMM kernels:
(i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM)
kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel
can still be efficiently linearized by modifying the original hashing procedure
for the GMM kernel. On about 60 publicly available classification datasets, we
verify that the proposed tunable GMM kernels typically improve over the
original GMM kernel. On some datasets, the improvements can be astonishingly
significant.
For example, on 11 popular datasets which were used for testing deep learning
algorithms and tree methods, our experiments show that the proposed tunable GMM
kernels are strong competitors to trees and deep nets. The previous studies
developed tree methods including "abc-robust-logitboost" and demonstrated the
excellent performance on those 11 datasets (and other datasets), by
establishing the second-order tree-split formula and new derivatives for
multi-class logistic loss. Compared to tree methods like
"abc-robust-logitboost" (which are slow and need substantial model sizes), the
tunable GMM kernels produce largely comparable results.
| 1 | 0 | 0 | 1 | 0 | 0 |
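For context, here is a sketch of the tuning-free GMM kernel, under the assumption that the pGMM variant raises each nonnegatively-transformed coordinate to a power p; the eGMM and epGMM constructions and the hashing-based linearization are not shown:

```python
def _split(u):
    # nonnegative transform: each coordinate becomes a (positive part,
    # negative part) pair, so min/max are taken over nonnegative values
    out = []
    for x in u:
        out.append(x if x > 0 else 0.0)
        out.append(-x if x < 0 else 0.0)
    return out

def gmm_kernel(u, v, p=1.0):
    """Generalized min-max kernel: sum of coordinatewise minima over sum
    of coordinatewise maxima of the transformed vectors. p != 1 gives a
    pGMM-style tuned variant (assumed form: power applied per coordinate)."""
    a = [x ** p for x in _split(u)]
    b = [x ** p for x in _split(v)]
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 0.0
```

For u = [1, -2] and v = [0.5, -1], the base kernel value is 1.5 / 3 = 0.5, and the p = 2 variant gives 1.25 / 5 = 0.25.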
Synchrotron radiation induced magnetization in magnetically-doped and pristine topological insulators | Quantum mechanics postulates that any measurement influences the state of the
investigated system. Here, by means of angle-, spin-, and time-resolved
photoemission experiments and ab initio calculations we demonstrate how
non-equal depopulation of the Dirac cone (DC) states with opposite momenta in
V-doped and pristine topological insulators (TIs) created by a photoexcitation
by linearly polarized synchrotron radiation (SR) is followed by the
hole-generated uncompensated spin accumulation and the SR-induced magnetization
via the spin-torque effect. We show that the photoexcitation of the DC is
asymmetric, that it varies with the photon energy, and that it practically does
not change during the relaxation. We find a relation between the
photoexcitation asymmetry, the generated spin accumulation and the induced spin
polarization of the DC and V 3d states. Experimentally the SR-generated
in-plane and out-of-plane magnetization is confirmed by the
$k_{\parallel}$-shift of the DC position and by the splitting of the states at
the Dirac point even above the Curie temperature. Theoretical predictions and
estimations of the measurable physical quantities substantiate the experimental
results.
| 0 | 1 | 0 | 0 | 0 | 0 |
The linear nature of pseudowords | Given a pseudoword over suitable pseudovarieties, we associate to it a
labeled linear order determined by the factorizations of the pseudoword. We
show that, in the case of the pseudovariety of aperiodic finite semigroups, the
pseudoword can be recovered from the labeled linear order.
| 0 | 0 | 1 | 0 | 0 | 0 |
Health Care Expenditures, Financial Stability, and Participation in the Supplemental Nutrition Assistance Program (SNAP) | This paper examines the association between household healthcare expenses and
participation in the Supplemental Nutrition Assistance Program (SNAP) when
moderated by factors associated with financial stability of households. Using a
large longitudinal panel encompassing eight years, this study finds that an
inter-temporal increase in out-of-pocket medical expenses increased the
likelihood of household SNAP participation in the current period. Financially
stable households with precautionary financial assets to cover at least six months' worth of household expenses were significantly less likely to participate in SNAP. Low-income households that recently experienced an increase in out-of-pocket medical expenses but had adequate precautionary savings were less likely to participate in SNAP than similar households without such savings. Implications for economists, policy makers, and
household finance professionals are discussed.
| 0 | 0 | 0 | 0 | 0 | 1 |
Improved estimates for polynomial Roth type theorems in finite fields | We prove that, under certain conditions on the function pair $\varphi_1$ and
$\varphi_2$, the bilinear average $p^{-1}\sum_{y\in
\mathbb{F}_p}f_1(x+\varphi_1(y)) f_2(x+\varphi_2(y))$ along the curve $(\varphi_1,
\varphi_2)$ satisfies a certain decay estimate. As a consequence, Roth type
theorems hold in the setting of finite fields. In particular, if
$\varphi_1,\varphi_2\in \mathbb{F}_p[X]$ with $\varphi_1(0)=\varphi_2(0)=0$ are
linearly independent polynomials, then for any $A\subset \mathbb{F}_p,
|A|=\delta p$ with $\delta>c p^{-\frac{1}{12}}$, there are $\gtrsim
\delta^3p^2$ triplets $x,x+\varphi_1(y), x+\varphi_2(y)\in A$. This extends a
recent result of Bourgain and Chang, who initiated this type of problem, and
strengthens the bound in a result of Peluse, who generalized Bourgain and
Chang's work. The proof uses discrete Fourier analysis and algebraic geometry.
| 0 | 0 | 1 | 0 | 0 | 0 |
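The triplet count in the abstract can be checked by brute force for tiny primes. A sketch (exhaustive, so only feasible for very small $p$; note it also counts degenerate $y$ with $\varphi_1(y)=\varphi_2(y)=0$, whereas the theorem is an asymptotic lower bound):

```python
def count_triplets(A, p, phi1, phi2):
    """Brute-force count of configurations x, x+phi1(y), x+phi2(y)
    all lying in A (a subset of F_p), over all x, y in F_p."""
    S = set(a % p for a in A)
    count = 0
    for x in range(p):
        if x not in S:
            continue
        for y in range(p):
            if (x + phi1(y)) % p in S and (x + phi2(y)) % p in S:
                count += 1
    return count
```

Taking A to be all of F_5 yields all 25 pairs (x, y); the sparse set A = {0, 1} with $\varphi_1(y)=y$, $\varphi_2(y)=2y$ yields only the two configurations with y = 0.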
Molecules cooled below the Doppler limit | The ability to cool atoms below the Doppler limit -- the minimum temperature
reachable by Doppler cooling -- has been essential to most experiments with
quantum degenerate gases, optical lattices and atomic fountains, among many
other applications. A broad set of new applications await ultracold molecules,
and the extension of laser cooling to molecules has begun. A molecular
magneto-optical trap has been demonstrated, where molecules approached the
Doppler limit. However, the sub-Doppler temperatures required for most
applications have not yet been reached. Here we cool molecules to 50 μK, well
below the Doppler limit, using a three-dimensional optical molasses. These
ultracold molecules could be loaded into optical tweezers to trap arbitrary
arrays for quantum simulation, launched into a molecular fountain for testing
fundamental physics, and used to study ultracold collisions and ultracold
chemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
On purely generated $α$-smashing weight structures and weight-exact localizations | This paper is dedicated to new methods of constructing weight structures and
weight-exact localizations; our arguments generalize their bounded versions
considered in previous papers of the authors. We start from a class of objects $P$ of a triangulated category $C$ that satisfies a certain negativity condition
(there are no $C$-extensions of positive degrees between elements of $P$; we
actually need a somewhat stronger condition of this sort) to obtain a weight
structure both "halves" of which are closed either with respect to
$C$-coproducts of less than $\alpha$ objects (for $\alpha$ being a fixed
regular cardinal) or with respect to all coproducts (provided that $C$ is
closed with respect to coproducts of this sort). This construction gives all
"reasonable" weight structures satisfying the latter condition. In particular,
we obtain certain weight structures on spectra (in $SH$) consisting of less
than $\alpha$ cells and on certain localizations of $SH$; these results are
new.
| 0 | 0 | 1 | 0 | 0 | 0 |
Evaluating Overfit and Underfit in Models of Network Community Structure | A common data mining task on networks is community detection, which seeks an
unsupervised decomposition of a network into structural groups based on
statistical regularities in the network's connectivity. Although many methods
exist, the No Free Lunch theorem for community detection implies that each
makes some kind of tradeoff, and no algorithm can be optimal on all inputs.
Thus, different algorithms will over or underfit on different inputs, finding
more, fewer, or just different communities than is optimal, and evaluation
methods that use a metadata partition as a ground truth will produce misleading
conclusions about general accuracy. Here, we present a broad evaluation of over
and underfitting in community detection, comparing the behavior of 16
state-of-the-art community detection algorithms on a novel and structurally
diverse corpus of 406 real-world networks. We find that (i) algorithms vary
widely both in the number of communities they find and in their corresponding
composition, given the same input, (ii) algorithms can be clustered into
distinct high-level groups based on similarities of their outputs on real-world
networks, and (iii) these differences induce wide variation in accuracy on link
prediction and link description tasks. We introduce a new diagnostic for
evaluating overfitting and underfitting in practice, and use it to roughly
divide community detection methods into general and specialized learning
algorithms. Across methods and inputs, Bayesian techniques based on the
stochastic block model and a minimum description length approach to
regularization represent the best general learning approach, but can be
outperformed under specific circumstances. These results introduce both a
theoretically principled approach to evaluate over and underfitting in models
of network community structure and a realistic benchmark by which new methods
may be evaluated and compared.
| 1 | 0 | 0 | 1 | 1 | 0 |
Named Entity Evolution Recognition on the Blogosphere | Advancements in technology and culture lead to changes in our language. These
changes create a gap between the language known by users and the language
stored in digital archives, affecting users' ability first to find content and then to interpret it. In previous work we introduced our approach for Named Entity Evolution Recognition~(NEER) in newspaper collections. Lately, increasing efforts in Web preservation have led to increased
availability of Web archives covering longer time spans. However, language on
the Web is more dynamic than in traditional media and many of the basic
assumptions from the newspaper domain do not hold for Web data. In this paper
we discuss the limitations of existing methodology for NEER. We approach these
by adapting an existing NEER method to work on noisy data like the Web and the
Blogosphere in particular. We develop novel filters that reduce the noise and
make use of Semantic Web resources to obtain more information about terms. Our
evaluation shows the potentials of the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Truncated Cramér-von Mises test of normality | A new test of normality based on a standardised empirical process is
introduced in this article.
The first step is to introduce a Cramér-von Mises type statistic with
weights equal to the inverse of the standard normal density function supported
on a symmetric interval $[-a_n,a_n]$ depending on the sample size $n.$ The
sequence of end points $a_n$ tends to infinity, and is chosen so that the
statistic goes to infinity at the speed of $\ln \ln n.$ After subtracting the
mean, a suitable test statistic is obtained, with the same asymptotic law as
the well-known Shapiro-Wilk statistic. The performance of the new test is
described and compared with three other well-known tests of normality, namely,
Shapiro-Wilk, Anderson-Darling and that of del Barrio-Matrán, Cuesta
Albertos, and Rodr\'{\i}guez Rodr\'{\i}guez, by means of power calculations
under many alternative hypotheses.
| 0 | 0 | 1 | 1 | 0 | 0 |
Bayesian fairness | We consider the problem of how decision making can be fair when the
underlying probabilistic model of the world is not known with certainty. We
argue that recent notions of fairness in machine learning need to explicitly
incorporate parameter uncertainty, hence we introduce the notion of {\em
Bayesian fairness} as a suitable candidate for fair decision rules. Using
balance, a definition of fairness introduced by Kleinberg et al (2016), we show
how a Bayesian perspective can lead to well-performing, fair decision rules
even under high uncertainty.
| 1 | 0 | 0 | 1 | 0 | 0 |
A boundary integral equation method for mode elimination and vibration confinement in thin plates with clamped points | We consider the bi-Laplacian eigenvalue problem for the modes of vibration of
a thin elastic plate with a discrete set of clamped points. A high-order
boundary integral equation method is developed for efficient numerical
determination of these modes in the presence of multiple localized defects for
a wide range of two-dimensional geometries. The defects result in
eigenfunctions with a weak singularity that is resolved by decomposing the
solution as a superposition of Green's functions plus a smooth regular part.
This method is applied to a variety of regular and irregular domains and two
key phenomena are observed. First, careful placement of clamping points can
entirely eliminate particular eigenvalues and suggests a strategy for
manipulating the vibrational characteristics of rigid bodies so that
undesirable frequencies are removed. Second, clamping of the plate can result
in partitioning of the domain so that vibrational modes are largely confined to
certain spatial regions. This numerical method gives a precision tool for
tuning the vibrational characteristics of thin elastic plates.
| 0 | 0 | 1 | 0 | 0 | 0 |
Noether's Problem on Semidirect Product Groups | Let $K$ be a field, $G$ a finite group. Let $G$ act on the function field $L
= K(x_{\sigma} : \sigma \in G)$ by $\tau \cdot x_{\sigma} = x_{\tau\sigma}$ for
any $\sigma, \tau \in G$. Denote the fixed field of the action by $K(G) = L^{G}
= \left\{ \frac{f}{g} \in L : \sigma(\frac{f}{g}) = \frac{f}{g}, \forall \sigma
\in G \right\}$. Noether's problem asks whether $K(G)$ is rational (purely
transcendental) over $K$. It is known that if $G = C_m \rtimes C_n$ is a
semidirect product of cyclic groups $C_m$ and $C_n$ with $\mathbb{Z}[\zeta_n]$
a unique factorization domain, and $K$ contains an $e$th primitive root of
unity, where $e$ is the exponent of $G$, then $K(G)$ is rational over $K$. In
this paper, we give another criterion to determine whether $K(C_m \rtimes C_n)$ is rational over $K$. In particular, if $p, q$ are prime numbers and there
exists $x \in \mathbb{Z}[\zeta_q]$ such that the norm
$N_{\mathbb{Q}(\zeta_q)/\mathbb{Q}}(x) = p$, then $\mathbb{C}(C_{p} \rtimes
C_{q})$ is rational over $\mathbb{C}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A note on the uniqueness of models in social abstract argumentation | Social abstract argumentation is a principled way to assign values to
conflicting (weighted) arguments. In this note we discuss the important
property of the uniqueness of the model.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Optimal Generalizability in Parametric Learning | We consider the parametric learning problem, where the objective of the
learner is determined by a parametric loss function. Employing empirical risk
minimization, possibly with regularization, the inferred parameter vector will
be biased toward the training samples. Such bias is measured by the cross
validation procedure in practice where the data set is partitioned into a
training set used for training and a validation set, which is not used in
training and is left to measure the out-of-sample performance. A classical
cross validation strategy is the leave-one-out cross validation (LOOCV) where
one sample is left out for validation and training is done on the rest of the
samples that are presented to the learner, and this process is repeated on all
of the samples. LOOCV is rarely used in practice due to the high computational
complexity. In this paper, we first develop a computationally efficient
approximate LOOCV (ALOOCV) and provide theoretical guarantees for its
performance. Then we use ALOOCV to provide an optimization algorithm for
finding the regularizer in the empirical risk minimization framework. In our
numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as
well as our proposed framework for the optimization of the regularizer.
| 1 | 0 | 0 | 1 | 0 | 0 |
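The LOOCV procedure that the abstract above calls computationally expensive has a well-known exact shortcut in the special case of ridge regression; the sketch below (illustrative only, not the paper's ALOOCV) contrasts the brute-force loop of n refits with the single-fit hat-matrix formula:

```python
import numpy as np

def ridge_coef(X, y, lam):
    """Ridge solution w = (X'X + lam*I)^{-1} X'y (no intercept)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loocv_brute(X, y, lam):
    """Leave-one-out residuals by refitting n times (the expensive route)."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w = ridge_coef(X[mask], y[mask], lam)
        errs[i] = y[i] - X[i] @ w
    return errs

def loocv_shortcut(X, y, lam):
    """Exact LOO residuals from one fit: e_i / (1 - H_ii)."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    return (y - H @ y) / (1.0 - np.diag(H))
```

For ridge the shortcut is exact (a Sherman-Morrison argument); general parametric losses admit no such closed form, which is what motivates approximations like the ALOOCV of the abstract.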
A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse | This study addresses the problem of identifying the meaning of unknown words
or entities in a discourse with respect to the word embedding approaches used
in neural language models. We propose a method for on-the-fly construction and
exploitation of word embeddings in both the input and output layers of a neural
model by tracking contexts. This extends the dynamic entity representation used
in Kobayashi et al. (2016) and incorporates a copy mechanism proposed
independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we
construct a new task and dataset called Anonymized Language Modeling for
evaluating the ability to capture word meanings while reading. Experiments
conducted using our novel dataset show that the proposed variant of the RNN
language model outperforms the baseline model. Furthermore, the experiments
also demonstrate that dynamic updates of an output layer help a model predict
reappearing entities, whereas those of an input layer are effective to predict
words following reappearing entities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bounds on the Size and Asymptotic Rate of Subblock-Constrained Codes | The study of subblock-constrained codes has recently gained attention due to
their application in diverse fields. We present bounds on the size and
asymptotic rate for two classes of subblock-constrained codes. The first class
is binary constant subblock-composition codes (CSCCs), where each codeword is
partitioned into equal sized subblocks, and every subblock has the same fixed
weight. The second class is binary subblock energy-constrained codes (SECCs),
where the weight of every subblock exceeds a given threshold. We present novel
upper and lower bounds on the code sizes and asymptotic rates for binary CSCCs
and SECCs. For a fixed subblock length and small relative distance, we show
that the asymptotic rate for CSCCs (resp. SECCs) is strictly lower than the
corresponding rate for constant weight codes (CWCs) (resp. heavy weight codes
(HWCs)). Further, for codes with high weight and low relative distance, we show
that the asymptotic rate for CSCCs is strictly lower than that of SECCs, in
contrast to the asymptotic rate for CWCs, which equals that of HWCs. We also
provide a correction to an earlier result by Chee et al. (2014) on the
asymptotic CSCC rate. Additionally, we present several numerical examples
comparing the rates for CSCCs and SECCs with those for constant weight codes
and heavy weight codes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Extreme values of the Riemann zeta function on the 1-line | We prove that there are arbitrarily large values of $t$ such that
$|\zeta(1+it)| \geq e^{\gamma} (\log_2 t + \log_3 t) + \mathcal{O}(1)$. This
essentially matches the prediction for the optimal lower bound in a conjecture
of Granville and Soundararajan. Our proof uses a new variant of the "long
resonator" method. While earlier implementations of this method crucially
relied on a "sparsification" technique to control the mean-square of the
resonator function, in the present paper we exploit certain self-similarity
properties of a specially designed resonator function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Robust two-qubit gates in a linear ion crystal using a frequency-modulated driving force | In an ion trap quantum computer, collective motional modes are used to
entangle two or more qubits in order to execute multi-qubit logical gates. Any
residual entanglement between the internal and motional states of the ions
results in loss of fidelity, especially when there are many spectator ions in
the crystal. We propose using a frequency-modulated (FM) driving force to
minimize such errors. In simulation, we obtained an optimized FM two-qubit gate
that can suppress errors to less than 0.01\% and is robust against frequency
drifts over $\pm$1 kHz. Experimentally, we have obtained a two-qubit gate
fidelity of $98.3(4)\%$, a state-of-the-art result for two-qubit gates with 5
ions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symbolic Versus Numerical Computation and Visualization of Parameter Regions for Multistationarity of Biological Networks | We investigate models of the mitogen-activated protein kinases (MAPK) network,
with the aim of determining where in parameter space there exist multiple
positive steady states. We build on recent progress which combines various
symbolic computation methods for mixed systems of equalities and inequalities.
We demonstrate that those techniques benefit tremendously from a newly
implemented graph theoretical symbolic preprocessing method. We compare
computation times and quality of results of numerical continuation methods with
our symbolic approach before and after the application of our preprocessing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computer Self-efficacy and Its Relationship with Web Portal Usage: Evidence from the University of the East | The University of the East Web Portal is an academic, web based system that
provides educational electronic materials and e-learning services. To fully
optimize its usage, it is imperative to determine the factors that relate to
its usage. Thus, this study, to determine the computer self-efficacy of the
faculty members of the University of the East and its relationship with their
web portal usage, was conceived. Using a validated questionnaire, the profile
of the respondents, their computer self-efficacy, and web portal usage were
gathered. Data showed that the respondents were relatively young (M = 40 years
old), the majority had a master's degree (f = 85, 72%), most had been using the web
portal for four semesters (f = 60, 51%), and the large part were intermediate
web portal users (f = 69, 59%). They were highly skilled in using the computer
(M = 4.29) and skilled in using the Internet (M = 4.28). E-learning services (M
= 3.29) and online library resources (M = 3.12) were only used occasionally.
Pearson correlation revealed that age was positively correlated with online
library resources (r = 0.267, p < 0.05) and a negative relationship existed
between perceived skill level in using the portal and online library resources
usage (r = -0.206, p < 0.05). A 2x2 chi square revealed that the highest
educational attainment had a significant relationship with online library
resources (chi square = 5.489, df = 1, p < 0.05). Basic computer (r = 0.196, p
< 0.05) and Internet skills (r = 0.303, p < 0.05) were significantly and
positively related with e-learning services usage but not with online library
resources usage. Other individual factors such as attitudes towards the web
portal and anxiety towards using the web portal can be investigated.
| 1 | 0 | 0 | 0 | 0 | 0 |
Normal forms of dispersive scalar Poisson brackets with two independent variables | We classify the dispersive Poisson brackets with one dependent variable and
two independent variables, with leading order of hydrodynamic type, up to Miura
transformations. We show that, in contrast to the case of a single independent
variable for which a well known triviality result exists, the Miura equivalence
classes are parametrised by an infinite number of constants, which we call
numerical invariants of the brackets. We obtain explicit formulas for the first
few numerical invariants.
| 0 | 1 | 1 | 0 | 0 | 0 |
A Liouville theorem for indefinite fractional diffusion equations and its application to existence of solutions | In this work we obtain a Liouville theorem for positive, bounded solutions of
the equation $$ (-\Delta)^s u= h(x_N)f(u) \quad \hbox{in }\mathbb{R}^{N} $$
where $(-\Delta)^s$ stands for the fractional Laplacian with $s\in (0,1)$, and
the functions $h$ and $f$ are nondecreasing. The main feature is that the
function $h$ changes sign in $\mathbb{R}$, therefore the problem is sometimes
termed as indefinite. As an application we obtain a priori bounds for positive
solutions of some boundary value problems, which give existence of such
solutions by means of bifurcation methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
Semiparametric Contextual Bandits | This paper studies semiparametric contextual bandits, a generalization of the
linear stochastic bandit problem where the reward for an action is modeled as a
linear function of known action features confounded by a non-linear
action-independent term. We design new algorithms that achieve
$\tilde{O}(d\sqrt{T})$ regret over $T$ rounds, when the linear function is
$d$-dimensional, which matches the best known bounds for the simpler
unconfounded case and improves on a recent result of Greenewald et al. (2017).
Via an empirical evaluation, we show that our algorithms outperform prior
approaches when there are non-linear confounding effects on the rewards.
Technically, our algorithms use a new reward estimator inspired by
doubly-robust approaches and our proofs require new concentration inequalities
for self-normalized martingales.
| 0 | 0 | 0 | 1 | 0 | 0 |
Robustness of Quantum-Enhanced Adaptive Phase Estimation | As all physical adaptive quantum-enhanced metrology schemes operate under
noisy conditions with only partially understood noise characteristics, a
practical control policy must be robust even to unknown noise. We aim to
devise a test to evaluate the robustness of adaptive quantum-enhanced metrology
(AQEM) policies and to assess the resources they use. The robustness test is
performed on quantum-enhanced adaptive phase estimation (QEAPE) by
simulating the scheme under four phase-noise models corresponding to
normal-distribution noise, random-telegraph noise, skew-normal-distribution
noise, and log-normal-distribution noise. Control policies are devised either
by an evolutionary algorithm under the same noisy conditions, albeit ignorant
of its properties, or a Bayesian-based feedback method that assumes no noise.
Our robustness test and resource-comparison method can be used to determine
the efficacy of candidate policies and to select a suitable one.
| 1 | 0 | 0 | 1 | 0 | 0 |
Six operations formalism for generalized operads | This paper shows that generalizations of operads equipped with their
respective bar/cobar dualities are related by a six operations formalism
analogous to that of classical contexts in algebraic geometry. As a consequence
of our constructions, we prove intertwining theorems which govern derived
Koszul duality of push-forwards and pull-backs.
| 0 | 0 | 1 | 0 | 0 | 0 |
How Complex is your classification problem? A survey on measuring classification complexity | Extracting characteristics from the training datasets of classification
problems has proven effective in a number of meta-analyses. Among them,
measures of classification complexity can estimate the difficulty in separating
the data points into their expected classes. Descriptors of the spatial
distribution of the data and estimates of the shape and size of the decision
boundary are among the existent measures for this characterization. This
information can support the formulation of new data-driven pre-processing and
pattern recognition techniques, which can in turn be focused on challenging
characteristics of the problems. This paper surveys and analyzes measures which
can be extracted from the training datasets in order to characterize the
complexity of the respective classification problems. Their use in recent
literature is also reviewed and discussed, allowing us to identify opportunities
for future work in the area. Finally, descriptions are given on an R package
named Extended Complexity Library (ECoL) that implements a set of complexity
measures and is made publicly available.
| 0 | 0 | 0 | 1 | 0 | 0 |
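As a concrete illustration of the kind of measure the survey above covers, the snippet below computes a maximum Fisher's discriminant ratio for a binary problem. The ECoL package itself is written in R, so this numpy version is only a hedged sketch of one standard measure, not the package's implementation:

```python
import numpy as np

def max_fisher_ratio(X, y):
    """Max over features of (mu0 - mu1)^2 / (var0 + var1).

    Larger values mean at least one feature separates the two classes
    well, i.e. the classification problem has lower complexity.
    """
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return float(np.max(num / den))
```

A well-separated dataset scores high, while two overlapping clouds score near zero, which is exactly the "difficulty in separating the data points" the abstract refers to.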
Constraining Dark Energy Dynamics in Extended Parameter Space | Dynamical dark energy has been recently suggested as a promising and physical
way to solve the 3.4 sigma tension on the value of the Hubble constant $H_0$
between the direct measurement of Riess et al. (2016) (R16, hereafter) and the
indirect constraint from Cosmic Microwave Anisotropies obtained by the Planck
satellite under the assumption of a $\Lambda$CDM model. In this paper, by
parameterizing dark energy evolution using the $w_0$-$w_a$ approach, and
considering a $12$ parameter extended scenario, we find that: a) the tension on
the Hubble constant can indeed be solved with dynamical dark energy, b) a
cosmological constant is ruled out at more than $95 \%$ c.l. by the Planck+R16
dataset, and c) all of the standard quintessence and half of the "downward
going" dark energy model space (characterized by an equation of state that
decreases with time) is also excluded at more than $95 \%$ c.l. These results
are further confirmed when cosmic shear, CMB lensing, or SN~Ia luminosity
distance data are also included. However, tension remains with the BAO dataset.
A cosmological constant and small portion of the freezing quintessence models
are still in agreement with the Planck+R16+BAO dataset at between 68\% and 95\%
c.l. Conversely, for Planck plus a phenomenological $H_0$ prior, both thawing
and freezing quintessence models prefer a Hubble constant of less than 70
km/s/Mpc. The general conclusions hold also when considering models with
non-zero spatial curvature.
| 0 | 1 | 0 | 0 | 0 | 0 |
Evaluation of matrix factorisation approaches for muscle synergy extraction | The muscle synergy concept provides a widely-accepted paradigm to break down
the complexity of motor control. In order to identify the synergies, different
matrix factorisation techniques have been used in a repertoire of fields such
as prosthesis control and biomechanical and clinical studies. However, the
relevance of these matrix factorisation techniques is still open for discussion
since there is no ground truth for the underlying synergies. Here, we evaluate
factorisation techniques and investigate the factors that affect the quality of
estimated synergies. We compared commonly used matrix factorisation methods:
Principal component analysis (PCA), Independent component analysis (ICA),
Non-negative matrix factorization (NMF) and second-order blind identification
(SOBI). Publicly available real data were used to assess the synergies
extracted by each factorisation method in the classification of wrist
movements. Synthetic datasets were utilised to explore the effect of muscle
synergy sparsity, level of noise and number of channels on the extracted
synergies. Results suggest that the sparse synergy model and a higher number of
channels would result in better-estimated synergies. Without dimensionality
reduction, SOBI showed better results than other factorisation methods. This
suggests that SOBI would be an alternative when a limited number of electrodes
is available but its performance was still poor in that case. Otherwise, NMF
had the best performance when the number of channels was higher than the number
of synergies. Therefore, NMF would be the best method for muscle synergy
extraction.
| 0 | 0 | 0 | 0 | 1 | 0 |
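Of the factorisation methods compared above, NMF is the simplest to sketch; the following is a minimal numpy implementation of the classic Lee-Seung multiplicative updates (illustrative only; the study's actual pipeline, channel counts, and preprocessing are not reproduced here):

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0, eps=1e-9):
    """Factor a nonnegative matrix V (channels x samples) as W @ H.

    W: channels x k synergy vectors; H: k x samples activations.
    Multiplicative updates keep both factors nonnegative and
    monotonically decrease the Frobenius reconstruction error.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The nonnegativity constraint is what makes NMF attractive for muscle synergies: both the synergy weights and their activations are physiologically interpretable as additive contributions.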
Comparison of two classifications of a class of ODE's in the case of general position | Two classifications of second order ODE's cubic with respect to the first
order derivative are compared in the case of general position, which is common
for both classifications. The correspondence of vectorial, pseudovectorial,
scalar, and pseudoscalar invariants is established.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning linear structural equation models in polynomial time and sample complexity | The problem of learning structural equation models (SEMs) from data is a
fundamental problem in causal inference. We develop a new algorithm --- which
is computationally and statistically efficient and works in the
high-dimensional regime --- for learning linear SEMs from purely observational
data with arbitrary noise distribution. We consider three aspects of the
problem: identifiability, computational efficiency, and statistical efficiency.
We show that when data is generated from a linear SEM over $p$ nodes and
maximum degree $d$, our algorithm recovers the directed acyclic graph (DAG)
structure of the SEM under an identifiability condition that is more general
than those considered in the literature, and without faithfulness assumptions.
In the population setting, our algorithm recovers the DAG structure in
$\mathcal{O}(p(d^2 + \log p))$ operations. In the finite sample setting, if the
estimated precision matrix is sparse, our algorithm has a smoothed complexity
of $\widetilde{\mathcal{O}}(p^3 + pd^7)$, while if the estimated precision
matrix is dense, our algorithm has a smoothed complexity of
$\widetilde{\mathcal{O}}(p^5)$. For sub-Gaussian noise, we show that our
algorithm has a sample complexity of $\mathcal{O}(\frac{d^8}{\varepsilon^2}
\log (\frac{p}{\sqrt{\delta}}))$ to achieve $\varepsilon$ element-wise additive
error with respect to the true autoregression matrix with probability at least
$1 - \delta$, while for noise with bounded $(4m)$-th moment, with $m$ being a
positive integer, our algorithm has a sample complexity of
$\mathcal{O}(\frac{d^8}{\varepsilon^2} (\frac{p^2}{\delta})^{1/m})$.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Bousfield's problem for solvable groups of finite Prüfer rank | For a group $G$ and $R=\mathbb Z,\mathbb Z/p,\mathbb Q$ we denote by $\hat
G_R$ the $R$-completion of $G.$ We study the map $H_n(G,K)\to H_n(\hat G_R,K),$
where $(R,K)=(\mathbb Z,\mathbb Z/p),(\mathbb Z/p,\mathbb Z/p),(\mathbb
Q,\mathbb Q).$ We prove that $H_2(G,K)\to H_2(\hat G_R,K)$ is an epimorphism
for a finitely generated solvable group $G$ of finite Prüfer rank. In
particular, Bousfield's $HK$-localisation of such groups coincides with the
$K$-completion for $K=\mathbb Z/p,\mathbb Q.$ Moreover, we prove that
$H_n(G,K)\to H_n(\hat G_R,K)$ is an epimorphism for any $n$ if $G$ is a
finitely presented group of the form $G=M\rtimes C,$ where $C$ is the infinite
cyclic group and $M$ is a $C$-module.
| 0 | 0 | 1 | 0 | 0 | 0 |
Approximate Analytical Solution of a Cancer Immunotherapy Model by the Application of Differential Transform and Adomian Decomposition Methods | Immunotherapy plays a major role in tumour treatment, in comparison with
other methods of dealing with cancer. The Kirschner-Panetta (KP) model of
cancer immunotherapy describes the interaction between tumour cells, effector
cells and interleukin-2 which are clinically utilized as medical treatment. The
model selects a rich concept of immune-tumour dynamics. In this paper,
approximate analytical solutions to KP model are represented by using the
differential transform and Adomian decomposition. The complicated nonlinearity
of the KP system causes the application of these two methods to require more
involved calculations. The approximate analytical solutions to the model are
compared with the results obtained by numerical fourth order Runge-Kutta
method.
| 0 | 0 | 0 | 0 | 1 | 0 |
Point-hyperplane frameworks, slider joints, and rigidity preserving transformations | A one-to-one correspondence between the infinitesimal motions of bar-joint
frameworks in $\mathbb{R}^d$ and those in $\mathbb{S}^d$ is a classical
observation by Pogorelov, and further connections among different rigidity
models in various different spaces have been extensively studied. In this
paper, we shall extend this line of research to include the infinitesimal
rigidity of frameworks consisting of points and hyperplanes. This enables us to
understand correspondences between point-hyperplane rigidity, classical
bar-joint rigidity, and scene analysis.
Among other results, we derive a combinatorial characterization of graphs
that can be realized as infinitesimally rigid frameworks in the plane with a
given set of points collinear. This extends a result by Jackson and Jordán,
which deals with the case when three points are collinear.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spectrum of signless 1-Laplacian on simplicial complexes | We first develop a general framework for signless 1-Laplacian defined in
terms of the combinatorial structure of a simplicial complex. The structure of
the eigenvectors and the complex feature of eigenvalues are studied. The
Courant nodal domain theorem for partial differential equation is extended to
the signless 1-Laplacian on complex. We also study the effects of a wedge sum
and a duplication of a motif on the spectrum of the signless 1-Laplacian, and
identify some of the combinatorial features of a simplicial complex that are
encoded in its spectrum. A notable result is that the independence number and
the clique cover number of a complex provide lower and upper bounds,
respectively, on the multiplicity of the largest eigenvalue of the signless
1-Laplacian, a property with no counterpart for the $p$-Laplacian for any $p>1$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inter-Area Oscillation Damping With Non-Synchronized Wide-Area Power System Stabilizer | One of the major issues in an interconnected power system is the low damping
of inter-area oscillations which significantly reduces the power transfer
capability. Advances in Wide-Area Measurement System (WAMS) makes it possible
to use the information from geographical distant location to improve power
system dynamics and performances. A speed deviation based Wide-Area Power
System Stabilizer (WAPSS) is known to be effective in damping inter-area modes.
However, the involvement of wide-area signals gives rise to the problem of
time-delay, which may degrade the system performance. In general, time-stamped
synchronized signals from Phasor Data Concentrator (PDC) are used for WAPSS, in
which delays are introduced in both local and remote signals. One can opt for a
feedback of remote signal only from PDC and uses the local signal as it is
available, without time synchronization. This paper utilizes configurations of
time-matched synchronized and nonsynchronized feedback and provides the
guidelines to design the controller. The controllers are synthesized using
$H_\infty$ control with regional pole placement for ensuring adequate dynamic
performance. To show the effectiveness of the proposed approach, two power
system models have been used for the simulations. It is shown that the
controllers designed based on the nonsynchronized signals are more robust to
time-delay variations than the controllers using synchronized signals.
| 1 | 0 | 0 | 0 | 0 | 0 |
Genetic Algorithms for Evolving Computer Chess Programs | This paper demonstrates the use of genetic algorithms for evolving: 1) a
grandmaster-level evaluation function, and 2) a search mechanism for a chess
program, the parameter values of which are initialized randomly. The evaluation
function of the program is evolved by learning from databases of (human)
grandmaster games. At first, the organisms are evolved to mimic the behavior of
human grandmasters, and then these organisms are further improved upon by means
of coevolution. The search mechanism is evolved by learning from tactical test
suites. Our results show that the evolved program outperforms a two-time world
computer chess champion and is on par with the other leading computer chess
programs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Low-cost Autonomous Navigation System Based on Optical Flow Classification | This work presents a low-cost robot, controlled by a Raspberry Pi, whose
navigation system is based on vision. The strategy used consisted of
identifying obstacles via optical flow pattern recognition. Its estimation was
done using the Lucas-Kanade algorithm, which can be executed by the Raspberry
Pi without harming its performance. Finally, an SVM-based classifier was used
to identify patterns of this signal associated with obstacles movement. The
developed system was evaluated considering its execution over an optical flow
pattern dataset extracted from a real navigation environment. In the end, it
was verified that the acquisition cost of the system was inferior to that
presented by most of the cited works, while its performance was similar to
theirs.
| 1 | 0 | 0 | 0 | 0 | 0 |
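The optical-flow estimation step can be sketched with a single-window Lucas-Kanade solve in plain numpy (a hedged illustration only; the robot described above presumably uses an optimized pyramidal implementation, and the SVM classification stage is omitted here):

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one (u, v) flow vector for a whole window.

    Solves the brightness-constancy least squares
    Ix*u + Iy*v = -It over all pixels of the window.
    """
    Iy1, Ix1 = np.gradient(I1.astype(float))  # np.gradient returns axis 0 (y) first
    Iy2, Ix2 = np.gradient(I2.astype(float))
    Ix = 0.5 * (Ix1 + Ix2)                    # average gradients over both frames
    Iy = 0.5 * (Iy1 + Iy2)
    It = I2.astype(float) - I1.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v
```

Obstacle detection would then classify features of the resulting flow field with an SVM, as the abstract describes.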
NVIDIA Tensor Core Programmability, Performance & Precision | The NVIDIA Volta GPU microarchitecture introduces a specialized unit, called
"Tensor Core" that performs one matrix-multiply-and-accumulate on 4x4 matrices
per clock cycle. The NVIDIA Tesla V100 accelerator, featuring the Volta
microarchitecture, provides 640 Tensor Cores with a theoretical peak
performance of 125 Tflops/s in mixed precision. In this paper, we investigate
current approaches to program NVIDIA Tensor Cores, their performances and the
precision loss due to computation in mixed precision.
Currently, NVIDIA provides three different ways of programming
matrix-multiply-and-accumulate on Tensor Cores: the CUDA Warp Matrix Multiply
Accumulate (WMMA) API, CUTLASS, a templated library based on WMMA, and cuBLAS
GEMM. After experimenting with different approaches, we found that NVIDIA
Tensor Cores can deliver up to 83 Tflops/s in mixed precision on a Tesla V100
GPU, seven and three times the performance in single and half precision
respectively. A WMMA implementation of batched GEMM reaches a performance of 4
Tflops/s. While precision loss due to matrix multiplication with half precision
input might be critical in many HPC applications, it can be considerably
reduced at the cost of increased computation. Our results indicate that HPC
applications using matrix multiplications can strongly benefit from using
NVIDIA Tensor Cores.
| 1 | 0 | 0 | 0 | 0 | 0 |
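The precision trade-off described above can be mimicked on a CPU by quantizing the inputs to fp16 and varying only the accumulator type. This is a rough software analogy, not the hardware behaviour (real Tensor Cores keep the 4x4 products in full precision before accumulating):

```python
import numpy as np

def matmul_fp16_inputs(A, B, acc_dtype):
    """C = A @ B with inputs rounded to fp16, accumulating in acc_dtype."""
    A16 = A.astype(np.float16)
    B16 = B.astype(np.float16)
    k = A16.shape[1]
    C = np.zeros((A16.shape[0], B16.shape[1]), dtype=acc_dtype)
    for t in range(k):  # rank-1 updates; the running sum is rounded each step
        C = (C + np.outer(A16[:, t], B16[t, :]).astype(acc_dtype)).astype(acc_dtype)
    return C

def mean_abs_err(C, ref):
    return float(np.mean(np.abs(C.astype(np.float64) - ref)))
```

Comparing an fp16 accumulator against an fp32 one isolates the accumulation error, which is the component that mixed-precision hardware removes while keeping the fast half-precision inputs.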
A spectroscopic survey of Orion KL between 41.5 and 50 GHz | Orion KL is one of the most frequently observed sources in the Galaxy, and
the site where many molecular species have been discovered for the first time.
With the availability of powerful wideband backends, it is nowadays possible to
complete spectral surveys in the entire mm-range to obtain a spectroscopically
unbiased chemical picture of the region. In this paper we present a sensitive
spectral survey of Orion KL, made with one of the 34m antennas of the Madrid
Deep Space Communications Complex in Robledo de Chavela, Spain. The spectral
range surveyed is from 41.5 to 50 GHz, with a frequency spacing of 180 kHz
(equivalent to about 1.2 km/s, depending on the exact frequency). The rms
achieved ranges from 8 to 12 mK. The spectrum is dominated by the J=1-0 SiO
maser lines and by radio recombination lines (RRLs), which were detected up to
Delta_n=11. Above a 3-sigma level, we identified 66 RRLs and 161 molecular
lines corresponding to 39 isotopologues from 20 molecules; a total of 18 lines
remain unidentified, two of them above a 5-sigma level. Results of radiative
modelling of the detected molecular lines (excluding masers) are presented. At
this frequency range, this is the most sensitive survey and also the one with
the widest band. Although some complex molecules like CH_3CH_2CN and CH_2CHCN
arise from the hot core, most of the detected molecules originate from the low
temperature components in Orion KL.
| 0 | 1 | 0 | 0 | 0 | 0 |
Restricted Boltzmann Machines for Robust and Fast Latent Truth Discovery | We address the problem of latent truth discovery, LTD for short, where the
goal is to discover the underlying true values of entity attributes in the
presence of noisy, conflicting or incomplete information. Despite a multitude
of algorithms to address the LTD problem that can be found in literature, only
little is known about their overall performance with respect to effectiveness
(in terms of truth discovery capabilities), efficiency and robustness. A
practical LTD approach should satisfy all these characteristics so that it can
be applied to heterogeneous datasets of varying quality and degrees of
cleanliness.
We propose a novel algorithm for LTD that satisfies the above requirements.
The proposed model is based on Restricted Boltzmann Machines, thus coined
LTD-RBM. In extensive experiments on various heterogeneous and publicly
available datasets, LTD-RBM is superior to state-of-the-art LTD techniques in
terms of an overall consideration of effectiveness, efficiency and robustness.
| 0 | 0 | 0 | 1 | 0 | 0 |
Effects of the Mach number on the evolution of vortex-surface fields in compressible Taylor--Green flows | We investigate the evolution of vortex-surface fields (VSFs) in compressible
Taylor--Green flows at Mach numbers ($Ma$) ranging from 0.5 to 2.0 using direct
numerical simulation. The formulation of VSFs in incompressible flows is
extended to compressible flows, and a mass-based renormalization of VSFs is
used to facilitate characterizing the evolution of a particular vortex surface.
The effects of the Mach number on the VSF evolution are different in three
stages. In the early stage, the jumps of the compressive velocity component
near shocklets generate sinks to contract surrounding vortex surfaces, which
shrink vortex volume and distort vortex surfaces. The subsequent reconnection
of vortex surfaces, quantified by the minimal distance between approaching
vortex surfaces and the exchange of vorticity fluxes, occurs earlier and has a
higher reconnection degree for larger $Ma$ owing to the dilatational
dissipation and shocklet-induced reconnection of vortex lines. In the late
stage, the positive dissipation rate and negative pressure work accelerate the
loss of kinetic energy and suppress vortex twisting with increasing $Ma$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Above-threshold ionization (ATI) in multicenter molecules: the role of the initial state | A possible route to extract electronic and nuclear dynamics from molecular
targets with attosecond temporal and nanometer spatial resolution is to employ
recolliding electrons as `probes'. The recollision process in molecules is,
however, very challenging to treat using {\it ab initio} approaches. Even for
the simplest diatomic systems, such as H$_2$, today's computational
capabilities are not enough to give a complete description of the electron and
nuclear dynamics initiated by a strong laser field. As a consequence,
approximate qualitative descriptions are called to play an important role. In
this contribution we extend the work presented in N. Suárez {\it et al.},
Phys.~Rev. A {\bf 95}, 033415 (2017), to three-center molecular targets.
Additionally, we incorporate a more accurate description of the molecular
ground state, employing information extracted from quantum chemistry software
packages. This step forward allows us to include, in a detailed way, both the
molecular symmetries and nodes present in the highest-occupied molecular orbital.
We are able to, on the one hand, keep our formulation as analytical as in the
case of diatomics, and, on the other hand, to still give a complete description
of the underlying physics behind the above-threshold ionization process. The
application of our approach to complex multicenter targets, with more than
three centers, appears to be straightforward.
| 0 | 1 | 0 | 0 | 0 | 0 |
Detailed, accurate, human shape estimation from clothed 3D scan sequences | We address the problem of estimating human pose and body shape from 3D scans
over time. Reliable estimation of 3D body shape is necessary for many
applications including virtual try-on, health monitoring, and avatar creation
for virtual reality. Scanning bodies in minimal clothing, however, presents a
practical barrier to these applications. We address this problem by estimating
body shape under clothing from a sequence of 3D scans. Previous methods that
have exploited body models produce smooth shapes lacking personalized details.
We contribute a new approach to recover a personalized shape of the person. The
estimated shape deviates from a parametric model to fit the 3D scans. We
demonstrate the method using high quality 4D data as well as sequences of
visual hulls extracted from multi-view images. We also make available BUFF, a
new 4D dataset that enables quantitative evaluation
(this http URL). Our method outperforms the state of the art in
both pose estimation and shape estimation, qualitatively and quantitatively.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Picard groups for unital inclusions of unital $C^*$-algebras | We shall introduce the notion of the Picard group for an inclusion of
$C^*$-algebras. We shall also study its basic properties and the relation
between the Picard group for an inclusion of $C^*$-algebras and the ordinary
Picard group. Furthermore, we shall give some examples of the Picard groups for
unital inclusions of unital $C^*$-algebras.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generic Singularities of 3D Piecewise Smooth Dynamical Systems | The aim of this paper is to provide a discussion on current directions of
research involving typical singularities of 3D nonsmooth vector fields. A brief
survey of known results is presented. The main purpose of this work is to
describe the dynamical features of a fold-fold singularity in its most basic
form and to give a complete and detailed proof of its local structural
stability (or instability). In addition, classes of all topological types of a
fold-fold singularity are intrinsically characterized. Such a proof essentially
follows, firstly, lines laid out by Colombo, García, Jeffrey, Teixeira and
others and, secondly, offers a rigorous mathematical treatment under clear and
crisp assumptions and solid arguments. One should highlight that the
geometric-topological methods employed lead us to a complete mathematical
understanding of the dynamics around a T-singularity. This approach lends
itself to applications in generic bifurcation theory. It is worth noting that
this subject is still poorly understood in higher dimensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Big Data Analysis Framework Using Apache Spark and Deep Learning | With the spreading prevalence of Big Data, many advances have recently been
made in this field. Frameworks such as Apache Hadoop and Apache Spark have
gained a lot of traction over the past decades and have become massively
popular, especially in industry. It is becoming increasingly evident that
effective big data analysis is key to solving artificial intelligence problems.
Thus, a multi-algorithm library called MLlib was implemented in the Spark
framework. While this library supports multiple machine learning algorithms, there
is still scope to use the Spark setup efficiently for highly time-intensive and
computationally expensive procedures like deep learning. In this paper, we
propose a novel framework that combines the distributive computational
abilities of Apache Spark and the advanced machine learning architecture of a
deep multi-layer perceptron (MLP), using the popular concept of Cascade
Learning. We conduct empirical analysis of our framework on two real world
datasets. The results are encouraging and corroborate our proposed framework,
in turn proving that it is an improvement over traditional big data analysis
methods that use either Spark or Deep learning as individual elements.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multi-Layer Generalized Linear Estimation | We consider the problem of reconstructing a signal from multi-layered
(possibly) non-linear measurements. Using non-rigorous but standard methods
from statistical physics we present the Multi-Layer Approximate Message Passing
(ML-AMP) algorithm for computing marginal probabilities of the corresponding
estimation problem and derive the associated state evolution equations to
analyze its performance. We also give the expression of the asymptotic free
energy and the minimal information-theoretically achievable reconstruction
error. Finally, we present some applications of this measurement model for
compressed sensing and perceptron learning with structured matrices/patterns,
and for a simple model of estimation of latent variables in an auto-encoder.
| 1 | 1 | 0 | 1 | 0 | 0 |
The Trace and the Mass of subcritical GJMS Operators | Let $L_g$ be the subcritical GJMS operator on an even-dimensional compact
manifold $(X, g)$ and consider the zeta-regularized trace
$\mathrm{Tr}_\zeta(L_g^{-1})$ of its inverse. We show that if $\ker L_g = 0$,
then the supremum of this quantity, taken over all metrics $g$ of fixed volume
in the conformal class, is always greater than or equal to the corresponding
quantity on the standard sphere. Moreover, we show that in the case that it is
strictly larger, the supremum is attained by a metric of constant mass. Using
positive mass theorems, we give some geometric conditions for this to happen.
| 0 | 0 | 1 | 0 | 0 | 0 |
Two-photon imaging assisted by a dynamic random medium | Random scattering is usually viewed as a serious nuisance in optical imaging,
and needs to be prevented in the conventional imaging scheme based on
single-photon interference. Here we propose a two-photon imaging scheme in
which the widely used lens is replaced by a dynamic random medium. Rather than
destroying the imaging process, the dynamic random medium in our scheme works
as a crucial imaging element that brings constructive interference, allowing
us to image an object from the light field scattered by this medium. On
the one hand, our imaging scheme with incoherent two-photon illumination
enables us to achieve super-resolution imaging with the resolution reaching
Heisenberg limit. On the other hand, with coherent two-photon illumination, the
image of a pure-phase object can be obtained in our imaging scheme. These
results show new possibilities for overcoming the bottleneck of widely used
single-photon imaging by developing imaging methods based on multi-photon
interference.
| 0 | 1 | 0 | 0 | 0 | 0 |
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm | Deployment of deep neural networks (DNNs) in safety- or security-critical
systems requires provable guarantees on their correct behaviour. A common
requirement is robustness to adversarial perturbations in a neighbourhood
around an input. In this paper we focus on the $L_0$ norm and aim to compute,
for a trained DNN and an input, the maximal radius of a safe norm ball around
the input within which there are no adversarial examples. Then we define global
robustness as an expectation of the maximal safe radius over a test data set.
We first show that the problem is NP-hard, and then propose an approximate
approach to iteratively compute lower and upper bounds on the network's
robustness. The approach is \emph{anytime}, i.e., it returns intermediate
bounds and robustness estimates that are gradually, but strictly, improved as
the computation proceeds; \emph{tensor-based}, i.e., the computation is
conducted over a set of inputs simultaneously, instead of one by one, to enable
efficient GPU computation; and has \emph{provable guarantees}, i.e., both the
bounds and the robustness estimates can converge to their optimal values.
Finally, we demonstrate the utility of the proposed approach in practice to
compute tight bounds by applying and adapting the anytime algorithm to a set of
challenging problems, including global robustness evaluation, competitive $L_0$
attacks, test case generation for DNNs, and local robustness evaluation on
large-scale ImageNet DNNs. We release the code of all case studies via GitHub.
| 0 | 0 | 0 | 1 | 0 | 0 |
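The maximal safe radius defined in this abstract can be made concrete with a brute-force toy, tractable only for tiny inputs, which is exactly why the paper develops anytime lower/upper bounds. The majority-vote classifier and binary feature values below are hypothetical illustrations, not the paper's networks:

```python
from itertools import combinations, product

def max_safe_l0_radius(classify, x, values):
    # Largest r such that no change of up to r coordinates of x
    # (to any allowed value) flips the classifier's label.
    label = classify(x)
    for r in range(1, len(x) + 1):
        for idx in combinations(range(len(x)), r):
            for vals in product(values, repeat=r):
                y = list(x)
                for i, v in zip(idx, vals):
                    y[i] = v
                if classify(y) != label:
                    # an adversarial example exists at L0 distance r,
                    # so the maximal safe radius is r - 1
                    return r - 1
    return len(x)

# Toy classifier (hypothetical): strict majority vote over binary features
clf = lambda z: int(sum(z) * 2 > len(z))
radius = max_safe_l0_radius(clf, [1, 1, 1, 1, 1], values=[0, 1])
```

Here flipping any two of the five features still leaves a majority of ones, but three flips change the label, so the safe radius is 2; the exponential enumeration above is what the paper's tensor-based bounds avoid.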
Evolution of an eroding cylinder in single and lattice arrangements | The coupled evolution of an eroding cylinder immersed in a fluid within the
subcritical Reynolds range is explored with scale resolving simulations.
Erosion of the cylinder is driven by fluid shear stress. Kármán vortex
shedding features in the wake and these oscillations occur on a significantly
smaller time scale compared to the slowly eroding cylinder boundary. Temporal
and spatial averaging across the cylinder span allows mean wall statistics such
as wall shear to be evaluated; with geometry evolving in 2-D and the flow field
simulated in 3-D. The cylinder develops into a rounded triangular body with
uniform wall shear stress which is in agreement with existing theory and
experiments. We introduce a node shuffle algorithm to reposition nodes around
the cylinder boundary with a uniform distribution such that the mesh quality is
preserved under high boundary deformation. A cylinder is then modelled within
an infinite array of other cylinders by simulating a repeating unit cell and
their profile evolution is studied. A similar terminal form is found for large
cylinder spacings with consistent flow conditions, while an intermediate
profile is found with a closely packed lattice before the common terminal form
is reached.
| 0 | 1 | 0 | 0 | 0 | 0 |
FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference | A classical problem in causal inference is that of matching, where treatment
units need to be matched to control units. Some of the main challenges in
developing matching methods arise from the tension among (i) inclusion of as
many covariates as possible in defining the matched groups, (ii) having matched
groups with enough treated and control units for a valid estimate of Average
Treatment Effect (ATE) in each group, and (iii) computing the matched pairs
efficiently for large datasets. In this paper we propose a fast method for
approximate and exact matching in causal analysis called FLAME (Fast
Large-scale Almost Matching Exactly). We define an optimization objective for
match quality, which gives preferences to matching on covariates that can be
useful for predicting the outcome while encouraging as many matches as
possible. FLAME aims to optimize our match quality measure, leveraging
techniques that are natural for query processing in the area of database
management. We provide two implementations of FLAME using SQL queries and
bit-vector techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
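The tension between exact matching and coverage described above can be sketched with a minimal iterative-coarsening loop in the spirit of FLAME. The covariate names, drop order, and data are hypothetical; real FLAME chooses which covariate to drop by optimizing its match-quality measure, via SQL or bit-vector implementations:

```python
from collections import defaultdict

def flame_sketch(units, covariates):
    # Match treated and control units exactly on the current covariate
    # set; units left unmatched trigger dropping a covariate and matching
    # again. `units`: dicts with keys 'treated', 'outcome', covariates.
    matched_groups, unmatched, cols = [], list(units), list(covariates)
    while unmatched and cols:
        groups = defaultdict(list)
        for u in unmatched:
            groups[tuple(u[c] for c in cols)].append(u)
        still = []
        for key, members in groups.items():
            if any(u['treated'] for u in members) and \
               any(not u['treated'] for u in members):
                # valid matched group: both arms are present
                matched_groups.append((tuple(cols), key, members))
            else:
                still.extend(members)
        unmatched = still
        cols.pop()  # drop a covariate (FLAME picks it by match quality)
    return matched_groups, unmatched

units = [
    {'treated': 1, 'outcome': 5, 'age': 1, 'smoker': 0},
    {'treated': 0, 'outcome': 3, 'age': 1, 'smoker': 0},
    {'treated': 1, 'outcome': 4, 'age': 0, 'smoker': 1},
    {'treated': 0, 'outcome': 2, 'age': 0, 'smoker': 0},
]
groups, leftover = flame_sketch(units, ['age', 'smoker'])
```

The first pass matches one pair exactly on both covariates; the remaining two units match after coarsening, illustrating the trade-off between criteria (i) and (ii).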
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications | We propose a stochastic extension of the primal-dual hybrid gradient
algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems
that are separable in the dual variable. The analysis is carried out for
general convex-concave saddle point problems and problems that are either
partially smooth / strongly convex or fully smooth / strongly convex. We
perform the analysis for arbitrary samplings of dual variables, and obtain
known deterministic results as a special case. Several variants of our
stochastic method significantly outperform the deterministic variant on a
variety of imaging tasks.
| 1 | 0 | 1 | 0 | 0 | 0 |
Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications | We study combinatorial multi-armed bandit with probabilistically triggered
arms (CMAB-T) and semi-bandit feedback. We resolve a serious issue in the prior
CMAB-T studies where the regret bounds contain a possibly exponentially large
factor of $1/p^*$, where $p^*$ is the minimum positive probability that an arm
is triggered by any action. We address this issue by introducing a triggering
probability modulated (TPM) bounded smoothness condition into the general
CMAB-T framework, and show that many applications such as influence
maximization bandit and combinatorial cascading bandit satisfy this TPM
condition. As a result, we completely remove the factor of $1/p^*$ from the
regret bounds, achieving significantly better regret bounds for influence
maximization and cascading bandits than before. Finally, we provide lower bound
results showing that the factor $1/p^*$ is unavoidable for general CMAB-T
problems, suggesting that the TPM condition is crucial in removing this factor.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multiscale Hierarchical Convolutional Networks | Deep neural network algorithms are difficult to analyze because they lack
structure allowing to understand the properties of underlying transforms and
invariants. Multiscale hierarchical convolutional networks are structured deep
convolutional networks where layers are indexed by progressively higher
dimensional attributes, which are learned from training data. Each new layer is
computed with multidimensional convolutions along spatial and attribute
variables. We introduce an efficient implementation of such networks where the
dimensionality is progressively reduced by averaging intermediate layers along
attribute indices. Hierarchical networks are tested on CIFAR image databases,
where they obtain precision comparable to state-of-the-art networks with far
fewer parameters. We study some properties of the attributes learned from these
databases.
| 1 | 0 | 0 | 1 | 0 | 0 |
Caveats for information bottleneck in deterministic scenarios | Information bottleneck (IB) is a method for extracting information from one
random variable $X$ that is relevant for predicting another random variable
$Y$. To do so, IB identifies an intermediate "bottleneck" variable $T$ that has
low mutual information $I(X;T)$ and high mutual information $I(Y;T)$. The "IB
curve" characterizes the set of bottleneck variables that achieve maximal
$I(Y;T)$ for a given $I(X;T)$, and is typically explored by maximizing the "IB
Lagrangian", $I(Y;T) - \beta I(X;T)$. In some cases, $Y$ is a deterministic
function of $X$, including many classification problems in supervised learning
where the output class $Y$ is a deterministic function of the input $X$. We
demonstrate three caveats when using IB in any situation where $Y$ is a
deterministic function of $X$: (1) the IB curve cannot be recovered by
maximizing the IB Lagrangian for different values of $\beta$; (2) there are
"uninteresting" trivial solutions at all points of the IB curve; and (3) for
multi-layer classifiers that achieve low prediction error, different layers
cannot exhibit a strict trade-off between compression and prediction, contrary
to a recent proposal. We also show that when $Y$ is a small perturbation away
from being a deterministic function of $X$, these three caveats arise in an
approximate way. To address problem (1), we propose a functional that, unlike
the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the
three caveats on the MNIST dataset.
| 0 | 0 | 0 | 1 | 0 | 0 |
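As a hedged illustration of the objects in this abstract (not the paper's experiments), the IB Lagrangian $I(Y;T) - \beta I(X;T)$ can be evaluated directly for small discrete distributions. The toy setup below, with $Y$ a deterministic function of $X$ and the identity encoder $T = X$, is an assumption chosen for illustration:

```python
import math

def mutual_info(joint):
    # I(A;B) in bits from a joint distribution given as {(a, b): p}
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Toy setup: X uniform on {0,1,2,3}, Y = X mod 2 a deterministic
# function of X, and the identity encoder T = X.
p_x = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
f = lambda x: x % 2
joint_xt = {(x, x): p for x, p in p_x.items()}
joint_yt = {(f(x), x): p for x, p in p_x.items()}

i_xt = mutual_info(joint_xt)          # compression term: H(X) = 2 bits
i_yt = mutual_info(joint_yt)          # prediction term: H(Y) = 1 bit
beta = 0.5
ib_lagrangian = i_yt - beta * i_xt    # I(Y;T) - beta * I(X;T)
```

Because $Y$ is deterministic in $X$, the identity encoder pays the full $H(X)$ in compression while gaining only $H(Y)$ in prediction, the kind of configuration the paper's caveats concern.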
Monte Carlo Simulation of Charge Transport in Graphene (Simulazione Monte Carlo per il trasporto di cariche nel grafene) | Simulations of charge transport in graphene are presented by implementing a
recent method published on the paper: V. Romano, A. Majorana, M. Coco, "DSMC
method consistent with the Pauli exclusion principle and comparison with
deterministic solutions for charge transport in graphene", Journal of
Computational Physics 302 (2015) 267-284. After an overview of the most
important aspects of the semiclassical transport model for the dynamics of
electrons in monolayer graphene, a comparison of the computational time of
MATLAB and Fortran implementations of the algorithm is made. The case of
graphene on substrates is then studied, for which original results are
produced by introducing models for the distribution of distances between
graphene's atoms and impurities. Finally, simulations with different kinds of
substrates are performed.
| 1 | 1 | 0 | 0 | 0 | 0 |
Embedded-Graph Theory | In this paper, we propose a new type of graph, denoted as "embedded-graph",
and its theory, which employs a distributed representation to describe the
relations on the graph edges. Embedded-graphs can express linguistic and
complicated relations, which cannot be expressed by the existing edge-graphs or
weighted-graphs. We introduce the mathematical definition of embedded-graph,
translation, edge distance, and graph similarity. We can transform an
embedded-graph into a weighted-graph and a weighted-graph into an edge-graph by
the translation method and by threshold calculation, respectively. The edge
distance of an embedded-graph is a distance based on the components of a target
vector, and it is calculated through cosine similarity with the target vector.
The graph similarity is obtained considering the relations with linguistic
complexity. In addition, we provide some examples and data structures for
embedded-graphs in this paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
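The translation and thresholding steps described above can be sketched as follows. The two-dimensional edge vectors, the target vector, and the threshold are hypothetical choices for illustration; real relation embeddings would be higher-dimensional and learned:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def translate(embedded_edges, target, threshold=0.5):
    # An embedded-graph stores a vector on each edge; translation scores
    # each edge vector by cosine similarity with a target vector, giving
    # a weighted-graph, and thresholding the weights gives an edge-graph.
    weighted = {e: cosine(vec, target) for e, vec in embedded_edges.items()}
    edge_graph = {e for e, w in weighted.items() if w >= threshold}
    return weighted, edge_graph

edges = {('a', 'b'): [1.0, 0.0], ('b', 'c'): [0.0, 1.0]}
weighted, plain = translate(edges, target=[1.0, 0.0])
```

The edge aligned with the target keeps weight 1.0 and survives thresholding; the orthogonal edge scores 0.0 and is dropped from the edge-graph.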
A duality principle for the multi-block entanglement entropy of free fermion systems | The analysis of the entanglement entropy of a subsystem of a one-dimensional
quantum system is a powerful tool for unravelling its critical nature. For
instance, the scaling behaviour of the entanglement entropy determines the
central charge of the associated Virasoro algebra. For a free fermion system,
the entanglement entropy depends essentially on two sets, namely the set $A$ of
sites of the subsystem considered and the set $K$ of excited momentum modes. In
this work we make use of a general duality principle establishing the
invariance of the entanglement entropy under exchange of the sets $A$ and $K$
to tackle complex problems by studying their dual counterparts. The duality
principle is also a key ingredient in the formulation of a novel conjecture for
the asymptotic behavior of the entanglement entropy of a free fermion system in
the general case in which both sets $A$ and $K$ consist of an arbitrary number
of blocks. We have verified that this conjecture reproduces the numerical
results with excellent precision for all the configurations analyzed. We have
also applied the conjecture to deduce several asymptotic formulas for the
mutual and $r$-partite information generalizing the known ones for the single
block case.
| 0 | 1 | 1 | 0 | 0 | 0 |
Game Efficiency through Linear Programming Duality | The efficiency of a game is typically quantified by the price of anarchy
(PoA), defined as the worst ratio of the objective function value of an
equilibrium --- solution of the game --- and that of an optimal outcome. Given
the tremendous impact of tools from mathematical programming in the design of
algorithms and the similarity of the price of anarchy and different measures
such as the approximation and competitive ratios, it is intriguing to develop a
duality-based method to characterize the efficiency of games.
In the paper, we present an approach based on linear programming duality to
study the efficiency of games. We show that the approach provides a general
recipe to analyze the efficiency of games and also to derive concepts leading
to improvements. The approach is particularly appropriate to bound the PoA.
Specifically, in our approach the dual programs naturally lead to competitive
PoA bounds that are (almost) optimal for several classes of games. The approach
indeed captures the smoothness framework and also some current non-smooth
techniques/concepts. We show the applicability to a wide variety of games and
environments, from congestion games to Bayesian welfare, from full-information
settings to incomplete-information ones.
| 1 | 0 | 0 | 0 | 0 | 0 |
An Equation-By-Equation Method for Solving the Multidimensional Moment Constrained Maximum Entropy Problem | An equation-by-equation (EBE) method is proposed to solve a system of
nonlinear equations arising from the moment constrained maximum entropy problem
of multidimensional variables. The design of the EBE method combines ideas from
homotopy continuation and Newton's iterative methods. Theoretically, we
establish the local convergence under appropriate conditions and show that the
proposed method, geometrically, finds the solution by searching along the
surface corresponding to one component of the nonlinear problem. We will
demonstrate the robustness of the method on various numerical examples,
including: (1) A six-moment one-dimensional entropy problem with an explicit
solution that contains components of order $10^0-10^3$ in magnitude; (2)
Four-moment multidimensional entropy problems with explicit solutions where the
resulting systems to be solved range from $70$ to $310$ equations; (3) Four-
to eight-moment versions of a two-dimensional entropy problem, whose solutions correspond
to the densities of the two leading EOFs of the wind stress-driven large-scale
oceanic model. In this case, we find that the EBE method is more accurate
compared to the classical Newton's method, the MATLAB generic solver, and the
previously developed BFGS-based method, which was also tested on this problem.
(4) Four-moment constrained entropy problems of up to five dimensions, whose
solutions correspond to multidimensional densities of the components of the
solutions of the Kuramoto-Sivashinsky equation. For the higher dimensional
cases of this example, the EBE method is superior because it automatically
selects a subset of the prescribed moment constraints from which the maximum
entropy solution can be estimated within the desired tolerance. This selection
feature is particularly important since the moment constrained maximum entropy
problems do not necessarily have solutions in general.
| 0 | 0 | 1 | 0 | 0 | 0 |
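The moment constrained maximum entropy problem in its simplest form (one dimension, one moment) admits a short Newton iteration, shown below as a hedged sketch; the paper's EBE method handles many moments in many dimensions and is not reproduced here. The alphabet and target moment are illustrative assumptions:

```python
import math

def maxent_one_moment(xs, m, iters=50):
    # The maximum entropy distribution on a finite alphabet `xs` subject
    # to the single constraint E[x] = m has Gibbs form p_i ~ exp(lam*x_i);
    # Newton's method solves E_lam[x] - m = 0 for lam (the derivative of
    # the mean with respect to lam is the variance).
    lam = 0.0
    for _ in range(iters):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        mean = sum(wi * x for wi, x in zip(w, xs)) / z
        var = sum(wi * x * x for wi, x in zip(w, xs)) / z - mean ** 2
        lam -= (mean - m) / var   # Newton step on E[x] - m = 0
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_one_moment([0.0, 1.0, 2.0], m=1.2)
```

With many moments the single scalar equation becomes the coupled nonlinear system the EBE method solves component by component.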
On reductions of the discrete Kadomtsev--Petviashvili-type equations | The reduction by restricting the spectral parameters $k$ and $k'$ on a
generic algebraic curve of degree $\mathcal{N}$ is performed for the discrete
AKP, BKP and CKP equations, respectively. A variety of two-dimensional discrete
integrable systems possessing a more general solution structure arise from the
reduction, and in each case a unified formula for generic positive integer
$\mathcal{N}\geq 2$ is given to express the corresponding reduced integrable
lattice equations. The obtained extended two-dimensional lattice models give
rise to many important integrable partial difference equations as special
degenerations. Some new integrable lattice models such as the discrete
Sawada--Kotera, Kaup--Kupershmidt and Hirota--Satsuma equations in extended
form are given as examples within the framework.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exploring the Space of Black-box Attacks on Deep Neural Networks | Existing black-box attacks on deep neural networks (DNNs) so far have largely
focused on transferability, where an adversarial instance generated for a
locally trained model can "transfer" to attack other learning models. In this
paper, we propose novel Gradient Estimation black-box attacks for adversaries
with query access to the target model's class probabilities, which do not rely
on transferability. We also propose strategies to decouple the number of
queries required to generate each adversarial sample from the dimensionality of
the input. An iterative variant of our attack achieves close to 100%
adversarial success rates for both targeted and untargeted attacks on DNNs. We
carry out extensive experiments for a thorough comparative evaluation of
black-box attacks and show that the proposed Gradient Estimation attacks
outperform all transferability based black-box attacks we tested on both MNIST
and CIFAR-10 datasets, achieving adversarial success rates similar to well
known, state-of-the-art white-box attacks. We also apply the Gradient
Estimation attacks successfully against a real-world Content Moderation
classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks
against state-of-the-art defenses. We show that the Gradient Estimation attacks
are very effective even against these defenses.
| 1 | 0 | 0 | 0 | 0 | 0 |
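The core primitive of a query-based gradient estimation attack can be sketched with finite differences. The quadratic "loss" below is a purely illustrative stand-in for a real model's class-probability loss, and the single sign step is an FGSM-style move, not the paper's full iterative attack:

```python
def estimate_gradient(loss, x, delta=1e-4):
    # Approximate each gradient coordinate by a two-sided finite
    # difference, using only query access to the loss (2*d queries
    # for a d-dimensional input).
    grad = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += delta
        lo[i] -= delta
        grad.append((loss(hi) - loss(lo)) / (2 * delta))
    return grad

def fgsm_step(loss, x, eps=0.1):
    # One untargeted step along the sign of the estimated gradient.
    g = estimate_gradient(loss, x)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]

# Toy stand-in for a model's loss: squared distance from 0.5 per feature
loss = lambda z: sum((zi - 0.5) ** 2 for zi in z)
x_adv = fgsm_step(loss, [0.2, 0.8])
```

Each coordinate moves away from the loss minimum, i.e. in the loss-increasing direction; the paper's query-reduction strategies address the fact that this naive estimator costs two queries per input dimension.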
Convergence diagnostics for stochastic gradient descent with constant step size | Many iterative procedures in stochastic optimization exhibit a transient
phase followed by a stationary phase. During the transient phase the procedure
converges towards a region of interest, and during the stationary phase the
procedure oscillates in that region, commonly around a single point. In this
paper, we develop a statistical diagnostic test to detect such phase transition
in the context of stochastic gradient descent with constant learning rate. We
present theory and experiments suggesting that the region where the proposed
diagnostic is activated coincides with the convergence region. For a class of
loss functions, we derive a closed-form solution describing such region.
Finally, we suggest an application to speed up convergence of stochastic
gradient descent by halving the learning rate each time stationarity is
detected. This leads to a new variant of stochastic gradient descent, which in
many settings is comparable to the state of the art.
| 1 | 0 | 1 | 1 | 0 | 0 |
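The halving scheme described above can be sketched as follows. The diagnostic used here, a running sum of inner products of successive stochastic gradients turning negative, is in the spirit of the paper's test but is an illustrative assumption, as are the toy quadratic objective, noise level, and burn-in length:

```python
import random

def sgd_with_halving(grad, x0, lr=0.5, steps=400, burnin=30, seed=0):
    # After a burn-in, accumulate inner products of successive stochastic
    # gradients; a negative running sum suggests the iterates oscillate
    # around a point (stationary phase), so the learning rate is halved
    # and the diagnostic restarts.
    rng = random.Random(seed)
    x, prev_g, s, count = x0, None, 0.0, 0
    for _ in range(steps):
        g = grad(x) + rng.gauss(0.0, 0.1)  # noisy gradient estimate
        x -= lr * g
        count += 1
        if count > burnin:
            if prev_g is not None:
                s += prev_g * g
            prev_g = g
            if s < 0:                       # stationarity detected
                lr, s, count, prev_g = lr / 2, 0.0, 0, None
    return x, lr

# Toy objective (an assumption): f(x) = (x - 3)^2 / 2, gradient x - 3
x_final, lr_final = sgd_with_halving(lambda x: x - 3.0, x0=0.0)
```

During the transient phase successive gradients point the same way and their products are positive; once the iterate oscillates around the minimum they anti-correlate, which is what triggers the halving.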
Strong Khovanov-Floer Theories and Functoriality | We provide a unified framework for proving Reidemeister-invariance and
functoriality for a wide range of link homology theories. These include Lee
homology, Heegaard Floer homology of branched double covers, singular instanton
homology, and Szabó's geometric link homology theory. We follow Baldwin,
Hedden, and Lobb (arXiv:1509.04691) in leveraging the relationships between
these theories and Khovanov homology. We obtain stronger functoriality results
by avoiding spectral sequences and instead showing that each theory factors
through Bar-Natan's cobordism-theoretic link homology theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
Formal Verification of Neural Network Controlled Autonomous Systems | In this paper, we consider the problem of formally verifying the safety of an
autonomous robot equipped with a Neural Network (NN) controller that processes
LiDAR images to produce control actions. Given a workspace that is
characterized by a set of polytopic obstacles, our objective is to compute the
set of safe initial conditions such that a robot trajectory starting from these
initial conditions is guaranteed to avoid the obstacles. Our approach is to
construct a finite state abstraction of the system and use standard
reachability analysis over the finite state abstraction to compute the set of
the safe initial states. The first technical problem in computing the finite
state abstraction is to mathematically model the imaging function that maps the
robot position to the LiDAR image. To that end, we introduce the notion of
imaging-adapted sets as partitions of the workspace in which the imaging
function is guaranteed to be affine. We develop a polynomial-time algorithm to
partition the workspace into imaging-adapted sets along with computing the
corresponding affine imaging functions. Given this workspace partitioning, a
discrete-time linear dynamics of the robot, and a pre-trained NN controller
with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge
is to analyze the behavior of the neural network. To that end, we utilize a
Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible
segments of different ReLUs. SMC solvers then use a Boolean satisfiability
solver and a convex programming solver and decompose the problem into smaller
subproblems. To accelerate this process, we develop a pre-processing algorithm
that can rapidly prune the space of feasible ReLU segments. Finally, we
demonstrate the efficiency of the proposed algorithms using numerical
simulations with increasing complexity of the neural network controller.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonvanishing of central $L$-values of Maass forms | With the method of moments and the mollification method, we study the central
$L$-values of GL(2) Maass forms of weight $0$ and level $1$ and establish a
positive-proportional nonvanishing result of such values in the aspect of large
spectral parameter in short intervals, which is qualitatively optimal in view
of Weyl's law. As an application of this result and a formula of Katok--Sarnak,
we give a nonvanishing result on the first Fourier coefficients of Maass forms
of weight $\frac{1}{2}$ and level $4$ in the Kohnen plus space.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Universally Optimal Multistage Accelerated Stochastic Gradient Method | We study the problem of minimizing a strongly convex and smooth function when
we have noisy estimates of its gradient. We propose a novel multistage
accelerated algorithm that is universally optimal in the sense that it achieves
the optimal rate both in the deterministic and stochastic case and operates
without knowledge of noise characteristics. The algorithm consists of stages
that use a stochastic version of Nesterov's accelerated algorithm with a
specific restart and parameters selected to achieve the fastest reduction in
the bias-variance terms in the convergence rate bounds.
| 1 | 0 | 0 | 1 | 0 | 0 |
Radio Observation of Venus at Meter Wavelengths using the GMRT | The Venusian surface has been studied by measuring radar reflections and
thermal radio emission over a wide spectral region of several centimeters to
meter wavelengths from the Earth-based as well as orbiter platforms. The
radiometric observations, in the decimeter (dcm) wavelength regime showed a
decreasing trend in the observed brightness temperature (Tb) with increasing
wavelength. The thermal emission models available at present have not been able
to explain the radiometric observations at longer wavelength (dcm) to a
satisfactory level. This paper reports the first interferometric imaging
observations of Venus below 620 MHz. They were carried out at 606, 332.9 and
239.9 MHz using the Giant Meterwave Radio Telescope (GMRT). The Tb values
derived at the respective frequencies are 526 K, 409 K and < 426 K, with errors
of ~7%, which are generally consistent with the reported Tb values at 608 MHz
and 430 MHz by previous investigators, but are much lower than those derived
from high-frequency observations at 1.38-22.46 GHz using the VLA.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measurements of the depth of maximum of air-shower profiles at the Pierre Auger Observatory and their composition implications | Air-showers measured by the Pierre Auger Observatory were analyzed in order
to extract the depth of maximum (Xmax). The results allow the analysis of the
Xmax distributions as a function of energy ($> 10^{17.8}$ eV). The Xmax
distributions, their mean and standard deviation are analyzed with the help of
shower simulations with the aim of interpreting the mass composition. The mean
and standard deviation were used to derive <ln A> and its variance as a
function of energy. The fractions of four components (p, He, N and Fe) were fit
to the Xmax distributions. Regardless of the hadronic model used, the data are
better described by a mix of light, intermediate and heavy primaries. Also,
independent of the hadronic models, a decrease of the proton flux with energy
is observed. No significant contribution of iron nuclei is derived in the
entire energy range studied.
| 0 | 1 | 0 | 0 | 0 | 0 |
Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML | Reward augmented maximum likelihood (RAML), a simple and effective learning
framework to directly optimize towards the reward function in structured
prediction tasks, has led to a number of impressive empirical successes. RAML
incorporates task-specific reward by performing maximum-likelihood updates on
candidate outputs sampled according to an exponentiated payoff distribution,
which gives higher probabilities to candidates that are close to the reference
output. While RAML is notable for its simplicity, efficiency, and its
impressive empirical successes, the theoretical properties of RAML, especially
the behavior of the exponentiated payoff distribution, have not been examined
thoroughly. In this work, we introduce softmax Q-distribution estimation, a
novel theoretical interpretation of RAML, which reveals the relation between
RAML and Bayesian decision theory. The softmax Q-distribution can be regarded
as a smooth approximation of the Bayes decision boundary, and the Bayes
decision rule is achieved by decoding with this Q-distribution. We further show
that RAML is equivalent to approximately estimating the softmax Q-distribution,
with the temperature $\tau$ controlling approximation error. We perform two
experiments, one on synthetic data of multi-class classification and one on
real data of image captioning, to demonstrate the relationship between RAML and
the proposed softmax Q-distribution estimation method, verifying our
theoretical analysis. Additional experiments on three structured prediction
tasks with rewards defined on sequential (named entity recognition), tree-based
(dependency parsing) and irregular (machine translation) structures show
notable improvements over maximum likelihood baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Vision-based Autonomous Landing in Catastrophe-Struck Environments | Unmanned Aerial Vehicles (UAVs) equipped with bioradars are a life-saving
technology that can enable identification of survivors under collapsed
buildings in the aftermath of natural disasters such as earthquakes or gas
explosions. However, these UAVs have to be able to autonomously land on debris
piles in order to accurately locate the survivors. This problem is extremely
challenging as the structure of these debris piles is often unknown and no
prior knowledge can be leveraged. In this work, we propose a computationally
efficient system that is able to reliably identify safe landing sites and
autonomously perform the landing maneuver. Specifically, our algorithm computes
costmaps based on several hazard factors including terrain flatness, steepness,
depth accuracy and energy consumption information. We first estimate dense
candidate landing sites from the resulting costmap and then employ clustering
to group neighboring sites into a safe landing region. Finally, a minimum-jerk
trajectory is computed for landing considering the surrounding obstacles and
the UAV dynamics. We demonstrate the efficacy of our system using experiments
from a city-scale hyperrealistic simulation environment and in real-world
scenarios with collapsed buildings.
| 1 | 0 | 0 | 0 | 0 | 0 |
Data-Driven Sparse Sensor Placement for Reconstruction | Optimal sensor placement is a central challenge in the design, prediction,
estimation, and control of high-dimensional systems. High-dimensional states
can often leverage a latent low-dimensional representation, and this inherent
compressibility enables sparse sensing. This article explores optimized sensor
placement for signal reconstruction based on a tailored library of features
extracted from training data. Sparse point sensors are discovered using the
singular value decomposition and QR pivoting, which are two ubiquitous matrix
computations that underpin modern linear dimensionality reduction. Sparse
sensing in a tailored basis is contrasted with compressed sensing, a universal
signal recovery method in which an unknown signal is reconstructed via a sparse
representation in a universal basis. Although compressed sensing can recover a
wider class of signals, we demonstrate the benefits of exploiting known
patterns in data with optimized sensing. In particular, drastic reductions in
the required number of sensors and improved reconstruction are observed in
examples ranging from facial images to fluid vorticity fields. Principled
sensor placement may be critically enabling when sensors are costly and
provides faster state estimation for low-latency, high-bandwidth control.
MATLAB code is provided for all examples.
| 1 | 0 | 1 | 0 | 0 | 0 |
Fast trimers in one-dimensional extended Fermi-Hubbard model | We consider a one-dimensional two component extended Fermi-Hubbard model with
nearest neighbor interactions and mass imbalance between the two species. We
study the stability of trimers, various observables for detecting them, and
expansion dynamics. We generalize the definition of the trimer gap to include
the formation of different types of clusters originating from nearest neighbor
interactions. Expansion dynamics reveal rapidly propagating trimers, with
speeds exceeding doublon propagation in the strongly interacting regime. We present
a simple model for understanding this unique feature of the movement of the
trimers, and we discuss the potential for experimental realization.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Geometric Approach for Real-time Monitoring of Dynamic Large Scale Graphs: AS-level graphs illustrated | The monitoring of large dynamic networks is a major challenge for a wide
range of applications. The complexity stems from properties of the underlying
graphs, in which slight local changes can lead to sizable variations of global
properties, e.g., under certain conditions, a single link cut that may be
overlooked during monitoring can result in splitting the graph into two
disconnected components. Moreover, it is often difficult to determine whether a
change will propagate globally or remain local. Traditional graph theory
measures such as the centrality or the assortativity of the graph are not
sufficient to characterize global properties of the graph. In this paper, we
tackle the problem of real-time monitoring of dynamic large scale graphs by
developing a geometric approach that leverages notions of geometric curvature
and recent development in graph embeddings using Ollivier-Ricci curvature [47].
We illustrate the use of our method by considering the practical case of
monitoring dynamic variations of the global Internet using topology change
information provided by combining several BGP feeds. In particular, we use our
method to detect major events and changes via the geometry of the embedding of
the graph.
| 1 | 0 | 0 | 1 | 0 | 0 |
On general $(α, β)$-metrics of weak Landsberg type | In this paper, we study general $(\alpha,\beta)$-metrics, where $\alpha$ is a
Riemannian metric and $\beta$ is a one-form. We prove that every weak
Landsberg general $(\alpha,\beta)$-metric is a Berwald metric, where $\beta$ is
a closed and conformal one-form. This shows that there exists no generalized
unicorn metric in this class of general $(\alpha,\beta)$-metrics. Further, we
show that $F$ is a Landsberg general $(\alpha,\beta)$-metric if and only if it
is a weak Landsberg general $(\alpha,\beta)$-metric, where $\beta$ is a closed
and conformal one-form.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Intersectional Definition of Fairness | We introduce a measure of fairness for algorithms and data with regard to
multiple protected attributes. Our proposed definition, differential fairness,
is informed by the framework of intersectionality, which analyzes how
interlocking systems of power and oppression affect individuals along
overlapping dimensions including race, gender, sexual orientation, class, and
disability. We show that our criterion behaves sensibly for any subset of the
set of protected attributes, and we illustrate links to differential privacy. A
case study on census data demonstrates the utility of our approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Technological Parasitism | Technological parasitism is a new theory to explain the evolution of
technology in society. In this context, this study proposes a model to analyze
the interaction between a host technology (system) and a parasitic technology
(subsystem) to explain evolutionary pathways of technologies as complex
systems. The coefficient of evolutionary growth of the model here indicates the
typology of evolution of parasitic technology in relation to host technology:
i.e., underdevelopment, growth and development. This approach is illustrated
with realistic examples using empirical data of product and process
technologies. Overall, then, the theory of technological parasitism can be
useful for bringing a new perspective to explain and generalize the evolution
of technology and predict which innovations are likely to evolve rapidly in
society.
| 0 | 0 | 0 | 0 | 0 | 1 |
On the sample mean after a group sequential trial | A popular setting in medical statistics is a group sequential trial with
independent and identically distributed normal outcomes, in which interim
analyses of the sum of the outcomes are performed. Based on a prescribed
stopping rule, one decides after each interim analysis whether the trial is
stopped or continued. Consequently, the actual length of the study is a random
variable. It is reported in the literature that the interim analyses may cause
bias if one uses the ordinary sample mean to estimate the location parameter.
For a generic stopping rule, which contains many classical stopping rules as a
special case, explicit formulas for the expected length of the trial, the bias,
and the mean squared error (MSE) are provided. It is deduced that, for a fixed
number of interim analyses, the bias and the MSE converge to zero if the first
interim analysis is performed not too early. In addition, optimal rates for
this convergence are provided. Furthermore, under a regularity condition,
asymptotic normality in total variation distance for the sample mean is
established. A conclusion for naive confidence intervals based on the sample
mean is derived. It is also shown how the developed theory naturally fits in
the broader framework of likelihood theory in a group sequential trial setting.
A simulation study underpins the theoretical findings.
| 0 | 0 | 1 | 1 | 0 | 0 |
A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings | A reliable wireless connection between the operator and the teleoperated
Unmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue
(USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the
networks available in areas where USAR missions take place are often severely
limited in range and coverage. Therefore, during mission execution, the
operator needs to keep track of not only the physical parts of the mission,
such as navigating through an area or searching for victims, but also the
variations in network connectivity across the environment. In this paper, we
propose and evaluate a new teleoperation User Interface (UI) that includes a
way of estimating the Direction of Arrival (DoA) of the Radio Signal Strength
(RSS) and integrating the DoA information in the interface. The evaluation
shows that using the interface results in more objects found and fewer aborted
missions due to connectivity problems, as compared to a standard interface. The
proposed interface is an extension to an existing interface centered around the
video stream captured by the UGV. But instead of just showing the network
signal strength in terms of percent and a set of bars, the additional
information of DoA is added in terms of a color bar surrounding the video feed.
With this information, the operator knows what movement directions are safe,
even when moving in regions close to the connectivity threshold.
| 1 | 0 | 0 | 0 | 0 | 0 |
Practical Integer-to-Binary Mapping for Quantum Annealers | Recent advancements in quantum annealing hardware and numerous studies in
this area suggest that quantum annealers have the potential to be effective in
solving unconstrained binary quadratic programming problems. Naturally, one may
desire to expand the application domain of these machines to problems with
general discrete variables. In this paper, we explore the possibility of
employing quantum annealers to solve unconstrained quadratic programming
problems over a bounded integer domain. We present an approach for encoding
integer variables into binary ones, thereby representing unconstrained integer
quadratic programming problems as unconstrained binary quadratic programming
problems. To respect some of the limitations of the currently developed quantum
annealers, we propose an integer encoding, named bounded-coefficient encoding,
in which we limit the size of the coefficients that appear in the encoding.
Furthermore, we propose an algorithm for finding the upper bound on the
coefficients of the encoding using the precision of the machine and the
coefficients of the original integer problem. Finally, we experimentally show
that this approach is far more resilient to the noise of the quantum annealers
compared to traditional approaches for the encoding of integers in base two.
| 1 | 0 | 1 | 0 | 0 | 0 |
Nonequilibrium entropic bounds for Darwinian replicators | Life evolved on our planet by means of a combination of Darwinian selection
and innovations leading to higher levels of complexity. The emergence and
selection of replicating entities is a central problem in prebiotic evolution.
Theoretical models have shown how populations of different types of replicating
entities exclude or coexist with other classes of replicators. Models are
typically kinetic, based on standard replicator equations. On the other hand,
the presence of thermodynamical constrains for these systems remain an open
question. This is largely due to the lack of a general theory of out of
statistical methods for systems far from equilibrium. Nonetheless, a first
approach to this problem has been put forward in a series of novel
developements in non-equilibrium physics, under the rubric of the extended
second law of thermodynamics. The work presented here is twofold: firstly, we
review this theoretical framework and provide a brief description of the three
fundamental replicator types in prebiotic evolution: parabolic, malthusian and
hyperbolic. Finally, we employ these previously mentioned techinques to explore
how replicators are constrained by thermodynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cluster decomposition of full configuration interaction wave functions: a tool for chemical interpretation of systems with strong correlation | Approximate full configuration interaction (FCI) calculations have recently
become tractable for systems of unforeseen size thanks to stochastic and
adaptive approximations to the exponentially scaling FCI problem. The result of
an FCI calculation is a weighted set of electronic configurations, which can
also be expressed in terms of excitations from a reference configuration. The
excitation amplitudes contain information on the complexity of the electronic
wave function, but this information is contaminated by contributions from
disconnected excitations, i.e. those excitations that are just products of
independent lower-level excitations. The unwanted contributions can be removed
via a cluster decomposition procedure, making it possible to examine the
importance of connected excitations in complicated multireference molecules
which are outside the reach of conventional algorithms. We present an
implementation of the cluster decomposition analysis and apply it to both true
FCI wave functions, as well as wave functions generated from the adaptive
sampling CI (ASCI) algorithm. The cluster decomposition is useful for
interpreting calculations in chemical studies, as a diagnostic for the
convergence of various excitation manifolds, as well as a guidepost for
polynomially scaling electronic structure models. Applications are presented
for (i) the double dissociation of water, (ii) the carbon dimer, (iii) the
{\pi} space of polyacenes, as well as (iv) the chromium dimer. While the
cluster amplitudes exhibit rapid decay with increasing rank for the first three
systems, even connected octuple excitations still appear important in Cr$_2$,
suggesting that spin-restricted single-reference coupled-cluster approaches may
not be tractable for some problems in transition metal chemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Possible Quasi-Periodic modulation in the z = 1.1 $γ$-ray blazar PKS 0426-380 | We search for $\gamma$-ray and optical periodic modulations in a distant flat
spectrum radio quasar (FSRQ) PKS 0426-380 (redshift $z=1.1$). Using two
techniques (i.e., the maximum likelihood optimization and the exposure-weighted
aperture photometry), we obtain $\gamma$-ray light curves from \emph{Fermi}-LAT
Pass 8 data covering from 2008 August to 2016 December. We then analyze the
light curves with the Lomb-Scargle Periodogram (LSP) and the Weighted Wavelet
Z-transform (WWZ). A $\gamma$-ray quasi-periodicity with a period of 3.35 $\pm$
0.68 years is found at the significance-level of $\simeq3.6\ \sigma$. The
optical-UV flux covering from 2005 August to 2013 April provided by ASI SCIENCE
DATA CENTER is also analyzed, but no significant quasi-periodicity is found. It
should be pointed out that the result of the optical-UV data could be tentative
because of the incompleteness of the data. Further long-term multiwavelength
monitoring of this FSRQ is needed to confirm its quasi-periodicity.
| 0 | 1 | 0 | 0 | 0 | 0 |