title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Spatial solitons in thermo-optical media from the nonlinear Schrodinger-Poisson equation and dark matter analogues | We analyze theoretically the Schrodinger-Poisson equation in two transverse
dimensions in the presence of a Kerr term. The model describes the nonlinear
propagation of optical beams in thermo-optical media and can be regarded as an
analogue system for a self-gravitating self-interacting wave. We compute
numerically the family of radially symmetric ground state bright stationary
solutions for focusing and defocusing local nonlinearity, keeping in both cases
a focusing nonlocal nonlinearity. We also analyze excited states and
oscillations induced by fixing the temperature at the borders of the material.
We provide simulations of soliton interactions, drawing analogies with the
dynamics of galactic cores in the scalar field dark matter scenario.
| 0 | 1 | 0 | 0 | 0 | 0 |
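A minimal sketch of the model studied in the abstract above, written in the usual form of the 2D Schrödinger-Poisson equation with a Kerr term (the signs, scalings, and the role of the propagation distance $z$ as "time" are conventions assumed here, not taken from the paper):

```latex
i\,\partial_z \psi \;=\; -\tfrac{1}{2}\nabla_{\perp}^{2}\psi \;+\; \Phi\,\psi \;+\; g\,|\psi|^{2}\psi,
\qquad
\nabla_{\perp}^{2}\Phi \;=\; |\psi|^{2},
```

where $\nabla_{\perp}^{2}$ is the transverse Laplacian, $\Phi$ is the nonlocal (thermal, or gravitational in the dark-matter reading) potential sourced by the beam intensity, and the sign of $g$ selects the focusing or defocusing local Kerr nonlinearity.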
Instrument-Armed Bandits | We extend the classic multi-armed bandit (MAB) model to the setting of
noncompliance, where the arm pull is a mere instrument and the treatment
applied may differ from it, which gives rise to the instrument-armed bandit
(IAB) problem. The IAB setting is relevant whenever the experimental units are
human since free will, ethics, and the law may prohibit unrestricted or forced
application of treatment. In particular, the setting is relevant in bandit
models of dynamic clinical trials and other controlled trials on human
interventions. Nonetheless, the setting has not been fully investigated in the
bandit literature. We show that there are various and divergent notions of
regret in this setting, all of which coincide only in the classic MAB setting.
We characterize the behavior of these regrets and analyze standard MAB
algorithms. We argue for a particular kind of regret that captures the causal
effect of treatments but show that standard MAB algorithms cannot achieve
sublinear control on this regret. Instead, we develop new algorithms for the
IAB problem, prove new regret bounds for them, and compare them to standard MAB
algorithms in numerical examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
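A toy simulation of the noncompliance mechanism described in the abstract above: pulling an arm only encourages the corresponding treatment, so a standard MAB algorithm run on arms estimates intent-to-treat values rather than treatment effects. All numbers (rewards, compliance rates) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.3, 0.6])            # true mean reward of each *treatment*
compliance = np.array([0.9, 0.4])    # hypothetical compliance rates per arm

def pull(arm):
    # The arm is a mere instrument: the applied treatment may differ from it.
    treatment = arm if rng.random() < compliance[arm] else 1 - arm
    return treatment, rng.normal(mu[treatment])

# Epsilon-greedy over arms: a stand-in for the standard MAB algorithms.
n, eps, counts, means = 5000, 0.1, np.ones(2), np.zeros(2)
for _ in range(n):
    arm = rng.integers(2) if rng.random() < eps else int(np.argmax(means))
    _, r = pull(arm)
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]
print(means)   # estimates E[reward | pull], not the causal E[reward | treatment]
```

The gap between these two quantities is what makes the notions of regret diverge.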
Deep learning Inversion of Seismic Data | In this paper, we propose a new method to tackle the mapping challenge from
time-series data to a spatial image in the field of seismic exploration, i.e.,
reconstructing the velocity model directly from seismic data by deep neural
networks (DNNs). The conventional way to address this ill-posed seismic
inversion problem is through iterative algorithms, which suffer from poor
nonlinear mapping and strong non-uniqueness. Other attempts may either
introduce human intervention errors or underuse the seismic data. The challenge for DNNs
mainly lies in the weak spatial correspondence, the uncertain
reflection-reception relationship between seismic data and velocity model as
well as the time-varying property of seismic data. To approach these
challenges, we propose an end-to-end Seismic Inversion Network (SeisInvNet for
short) with novel components to make the best use of all seismic data.
Specifically, we start with every seismic trace and enhance it with its
neighborhood information, its observation setup and global context of its
corresponding seismic profile. Then from enhanced seismic traces, the spatially
aligned feature maps can be learned and further concatenated to reconstruct
the velocity model. In general, we let every seismic trace contribute to the
reconstruction of the whole velocity model by finding spatial correspondence.
The proposed SeisInvNet consistently produces improvements over the baselines
and achieves promising performance on our proposed SeisInv dataset according to
various evaluation metrics, and the inversion results are more consistent with
the target from the aspects of velocity value, subsurface structure and
geological interface. In addition to the superior performance, the mechanism is
also carefully discussed, and some potential problems are identified for
further study.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs | We propose NOPOL, an approach to automatic repair of buggy conditional
statements (i.e., if-then-else statements). This approach takes a buggy program
as well as a test suite as input and generates a patch with a conditional
expression as output. The test suite is required to contain passing test cases
to model the expected behavior of the program and at least one failing test
case that reveals the bug to be repaired. The process of NOPOL consists of
three major phases. First, NOPOL employs angelic fix localization to identify
expected values of a condition during the test execution. Second, runtime trace
collection is used to collect variables and their actual values, including
primitive data types and object-oriented features (e.g., nullness checks), to
serve as building blocks for patch generation. Third, NOPOL encodes these
collected data into an instance of a Satisfiability Modulo Theory (SMT)
problem, then a feasible solution to the SMT instance is translated back into a
code patch. We evaluate NOPOL on 22 real-world bugs (16 bugs with buggy IF
conditions and 6 bugs with missing preconditions) on two large open-source
projects, namely Apache Commons Math and Apache Commons Lang. Empirical
analysis on these bugs shows that our approach can effectively fix bugs with
buggy IF conditions and missing preconditions. We illustrate the capabilities
and limitations of NOPOL using case studies of real bug fixes.
| 1 | 0 | 0 | 0 | 0 | 0 |
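The SMT step can be made concrete with a small, hypothetical example: given runtime snapshots of variable values together with the angelic truth value the repaired condition should have taken, a solver is asked for a predicate consistent with all of them. The linear template and the variable names below are illustrative only; NOPOL's actual encoding is richer:

```python
from z3 import Ints, Solver, Not, sat

# Hypothetical snapshots: (x, y, angelic truth value of the condition).
snapshots = [(0, 1, True), (2, 3, True), (5, 1, False), (4, 4, False)]

a, b, c = Ints("a b c")            # unknown coefficients of the condition
s = Solver()
for x, y, expected in snapshots:
    cond = a * x + b * y <= c      # candidate shape: a*x + b*y <= c
    s.add(cond if expected else Not(cond))

if s.check() == sat:
    m = s.model()
    print(f"patch: if ({m[a]}*x + {m[b]}*y <= {m[c]}) ...")
```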
Parametric geometry of numbers in function fields | Parametric geometry of numbers is a new theory, recently created by Schmidt
and Summerer, which unifies and simplifies many aspects of classical
Diophantine approximations, providing a handle on problems which previously
seemed out of reach. Our goal is to transpose this theory to fields of rational
functions in one variable and to analyze in that context the problem of
simultaneous approximation to exponential functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Refined open intersection numbers and the Kontsevich-Penner matrix model | A study of the intersection theory on the moduli space of Riemann surfaces
with boundary was recently initiated in a work of R. Pandharipande, J. P.
Solomon and the third author, where they introduced open intersection numbers
in genus 0. Their construction was later generalized to all genera by J. P.
Solomon and the third author. In this paper we consider a refinement of the
open intersection numbers by distinguishing contributions from surfaces with
different numbers of boundary components, and we calculate all these numbers.
We then construct a matrix model for the generating series of the refined open
intersection numbers and conjecture that it is equivalent to the
Kontsevich-Penner matrix model. Evidence for the conjecture is presented.
Another refinement of the open intersection numbers, which describes the
distribution of the boundary marked points on the boundary components, is also
discussed.
| 0 | 0 | 1 | 0 | 0 | 0 |
SEIRS epidemics in growing populations | An SEIRS epidemic with disease fatalities is introduced in a growing
population (modelled as a super-critical linear birth and death process). The
study of the initial phase of the epidemic is stochastic, while the analysis of
the major outbreaks is deterministic. Depending on the values of the
parameters, the following scenarios are possible. i) The disease dies out
quickly, infecting only a few; ii) the epidemic takes off, the \textit{number} of
infected individuals grows exponentially, but the \textit{fraction} of infected
individuals remains negligible; iii) the epidemic takes off, the
\textit{number} of infected grows initially quicker than the population, the
disease fatalities diminish the growth rate of the population, but it remains
super-critical, and the \textit{fraction} of infected goes to an endemic
equilibrium; iv) the epidemic takes off, the \textit{number} of infected
individuals grows initially quicker than the population, and the disease
fatalities turn the exponential growth of the population into an exponential
decay.
| 0 | 1 | 0 | 0 | 0 | 0 |
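A deterministic sketch of the major-outbreak phase described above: an SEIRS compartment model with births at rate $b$, natural deaths at rate $\mu$ (with $b > \mu$, so the population is super-critical) and disease-induced deaths at rate $\alpha$. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

b, mu, beta, sigma, gamma, omega, alpha = 0.03, 0.01, 0.5, 0.2, 0.1, 0.05, 0.02

def seirs(t, y):
    S, E, I, R = y
    N = S + E + I + R
    return [b * N - beta * S * I / N - mu * S + omega * R,   # susceptible
            beta * S * I / N - (sigma + mu) * E,             # exposed
            sigma * E - (gamma + mu + alpha) * I,            # infectious
            gamma * I - (omega + mu) * R]                    # recovered

sol = solve_ivp(seirs, (0, 400), [9990, 0, 10, 0])
S, E, I, R = sol.y
print((I / (S + E + I + R))[-1])   # final fraction infected
```

Depending on the parameters, the fraction of infected dies out, settles at an endemic level, or the disease fatalities reverse the population growth, matching the scenarios listed above.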
A Multi-task Selected Learning Approach for Solving New Type 3D Bin Packing Problem | This paper studies a new type of 3D bin packing problem (BPP), in which a
number of cuboid-shaped items must be put into a bin one by one orthogonally.
The objective is to find a way to place these items that can minimize the
surface area of the bin. This problem is based on the fact that there is no
fixed-sized bin in many real business scenarios and the cost of a bin is
proportional to its surface area. Based on previous research on 3D BPP, the
surface area is determined by the sequence, spatial locations and orientations
of items. It is a new NP-hard combinatorial optimization problem on
unfixed-sized bin packing, for which we propose a multi-task framework based on
Selected Learning, generating the sequence and orientations of items packed
into the bin simultaneously. During training, Selected Learning chooses
one of the loss functions derived from Deep Reinforcement Learning and Supervised
Learning according to the training procedure. Numerical results show that
the proposed method outperforms the Lego baselines by a substantial
gain of 7.52%. Moreover, we produce a large-scale 3D Bin Packing order data set
for studying bin packing problems and will release it to the research
community.
| 0 | 0 | 0 | 1 | 0 | 0 |
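The objective in the abstract above is easy to state in code: once the sequence, locations and orientations are fixed, the bin is the bounding box of the placed items and the cost is its surface area. A minimal sketch (the placement format is an assumption):

```python
def surface_area(items):
    # items: (x, y, z, w, h, d) placements of axis-aligned cuboids.
    L = max(x + w for x, y, z, w, h, d in items)
    W = max(y + h for x, y, z, w, h, d in items)
    H = max(z + d for x, y, z, w, h, d in items)
    return 2 * (L * W + L * H + W * H)

# Two unit cubes side by side: the bin is 2 x 1 x 1, surface area 10.
print(surface_area([(0, 0, 0, 1, 1, 1), (1, 0, 0, 1, 1, 1)]))
```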
Use of Docker for deployment and testing of astronomy software | We describe preliminary investigations of using Docker for the deployment and
testing of astronomy software. Docker is a relatively new containerisation
technology that is developing rapidly and being adopted across a range of
domains. It is based upon virtualization at operating system level, which
presents many advantages in comparison to the more traditional hardware
virtualization that underpins most cloud computing infrastructure today. A
particular strength of Docker is its simple format for describing and managing
software containers, which has benefits for software developers, system
administrators and end users.
We report on our experiences from two projects -- a simple activity to
demonstrate how Docker works, and a more elaborate set of services that
demonstrates more of its capabilities and what they can achieve within an
astronomical context -- and include an account of how we solved problems
through interaction with Docker's very active open source development
community, which is currently the key to the most effective use of this
rapidly-changing technology.
| 1 | 1 | 0 | 0 | 0 | 0 |
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks | We propose a method, called Label Embedding Network, which can learn label
representation (label embedding) during the training process of deep networks.
With the proposed method, the label embedding is adaptively and automatically
learned through back propagation. The original one-hot represented loss
function is converted into a new loss function with soft distributions, such
that the originally unrelated labels have continuous interactions with each
other during the training process. As a result, the trained model can achieve
substantially higher accuracy and faster convergence. Experimental
results based on competitive tasks demonstrate the effectiveness of the
proposed method, and the learned label embedding is reasonable and
interpretable. The proposed method achieves comparable or even better results
than the state-of-the-art systems. The source code is available at
\url{this https URL}.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Performance of Multi-Instrument Solar Flare Observations During Solar Cycle 24 | The current fleet of space-based solar observatories offers us a wealth of
opportunities to study solar flares over a range of wavelengths. Significant
advances in our understanding of flare physics often come from coordinated
observations between multiple instruments. Consequently, considerable efforts
have been, and continue to be, made to coordinate observations among instruments
(e.g. through the Max Millennium Program of Solar Flare Research). However,
there has been no study to date that quantifies how many flares have been
observed by combinations of various instruments. Here we describe a technique
that retrospectively searches archival databases for flares jointly observed by
RHESSI, SDO/EVE (MEGS-A and -B), Hinode/(EIS, SOT, and XRT), and IRIS. Out of
the 6953 flares of GOES magnitude C1 or greater that we consider over the 6.5
years after the launch of SDO, 40 have been observed by six or more instruments
simultaneously. Using each instrument's individual rate of success in observing
flares, we show that the numbers of flares co-observed by three or more
instruments are higher than the number expected under the assumption that the
instruments operated independently of one another. In particular, the number of
flares observed by larger numbers of instruments is much higher than expected.
Our study illustrates that these missions often acted in cooperation, or at
least had aligned goals. We also provide details on an interactive widget now
available in SSWIDL that allows a user to search for flaring events that have
been observed by a chosen set of instruments. This provides access to a broader
range of events in order to answer specific science questions. The difficulty
in scheduling coordinated observations for solar-flare research is discussed
with respect to instruments projected to begin operations during Solar Cycle
25, such as DKIST, Solar Orbiter, and Parker Solar Probe.
| 0 | 1 | 0 | 0 | 0 | 0 |
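The independence baseline used above can be sketched directly: if instrument $i$ observes a given flare with probability $p_i$ independently of the others, the number of instruments observing it follows a Poisson-binomial distribution. The rates below are invented placeholders, not the per-instrument success rates measured in the paper:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2, 0.15, 0.1, 0.1, 0.05])  # assumed per-instrument rates

pmf = np.array([1.0])
for pi in p:                           # build the Poisson-binomial PMF
    pmf = np.convolve(pmf, [1 - pi, pi])

n_flares = 6953
expected = n_flares * pmf              # expected counts per multiplicity
print(expected[6:].sum())              # expected flares seen by >= 6 instruments
```

Comparing such expected counts with the observed ones is what shows the missions did not operate independently.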
A Feature Complete SPIKE Banded Algorithm and Solver | New features and enhancements for the SPIKE banded solver are presented.
Among all the SPIKE algorithm versions, we focus our attention on the recursive
SPIKE technique which provides the best trade-off between generality and
parallel efficiency, but was known for its lack of flexibility. Its application
was essentially limited to power of two number of cores/processors. This
limitation is successfully addressed in this paper. In addition, we present a
new transpose solve option, a standard feature of most numerical solver
libraries which has never been addressed by the SPIKE algorithm so far. A
pivoting recursive SPIKE strategy is finally presented as an alternative to
the non-pivoting scheme for systems with large condition numbers. Together, these
new enhancements create a feature-complete SPIKE algorithm and a new
black-box SPIKE-OpenMP package that significantly outperforms other
state-of-the-art banded solvers in performance and scalability.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dark Matter in the Local Group of Galaxies | We describe the neutrino flavor ($e$ = electron, $\mu$ = muon, $\tau$ = tau)
masses as $m_{i=e,\mu,\tau} = m + \Delta m_i$ with $|\Delta m_{ij}|/m < 1$ and probably
$|\Delta m_{ij}|/m \ll 1$. The quantity $m$ is the degenerate neutrino mass. Because
neutrino flavor is not a quantum number, this degenerate mass appears in the
neutrino equation of state. We apply a Monte Carlo computational physics
technique to the Local Group (LG) of galaxies to determine an approximate
location for a Dark Matter embedding condensed neutrino object (CNO). The
calculation is based on the rotational properties of the only spiral galaxies
within the LG: M31, M33 and the Milky Way. CNOs could be the Dark Matter
everyone is looking for, and we estimate the CNO embedding the LG to have a
mass of $5.17\times10^{15}\,M_\odot$ and a radius of $1.316$ Mpc, with an estimated
value of $m \approx 0.8$ eV/c$^2$. The up-coming KATRIN experiment will either
provide the definitive result or eliminate condensed neutrinos as a Dark Matter
candidate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effective Extensible Programming: Unleashing Julia on GPUs | GPUs and other accelerators are popular devices for accelerating
compute-intensive, parallelizable applications. However, programming these
devices is a difficult task. Writing efficient device code is challenging, and
is typically done in a low-level programming language. High-level languages are
rarely supported, or do not integrate with the rest of the high-level language
ecosystem. To overcome this, we propose compiler infrastructure to efficiently
add support for new hardware or environments to an existing programming
language.
We evaluate our approach by adding support for NVIDIA GPUs to the Julia
programming language. By integrating with the existing compiler, we
significantly lower the cost to implement and maintain the new compiler, and
facilitate reuse of existing application code. Moreover, use of the high-level
Julia programming language enables new and dynamic approaches for GPU
programming. This greatly improves programmer productivity, while maintaining
application performance similar to that of the official NVIDIA CUDA toolkit.
| 1 | 0 | 0 | 0 | 0 | 0 |
Finite Time Adaptive Stabilization of LQ Systems | Stabilization of linear systems with unknown dynamics is a canonical problem
in adaptive control. Since lack of knowledge of the system parameters can
destabilize the system, an adaptive stabilization procedure is needed prior
to regulation. Therefore, the adaptive stabilization needs to be completed in
finite time. In order to achieve this goal, asymptotic approaches are not very
helpful. There are only a few existing non-asymptotic results and a full
treatment of the problem is not currently available.
In this work, leveraging the novel method of random linear feedbacks, we
establish high probability guarantees for finite time stabilization. Our
results hold for remarkably general settings because we carefully choose a
minimal set of assumptions. These include stabilizability of the underlying
system and restricting the degree of heaviness of the noise distribution. To
derive our results, we also introduce a number of new concepts and technical
tools to address regularity and instability of the closed-loop matrix.
| 1 | 0 | 0 | 1 | 0 | 0 |
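A minimal numerical sketch of the random-linear-feedback idea, under assumed dynamics: excite an unstable system with a randomly drawn feedback plus noise, identify $(A,B)$ by least squares, and check that a certainty-equivalent controller stabilizes the estimate. This illustrates the mechanism only, not the paper's algorithm or its guarantees:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A = np.array([[1.2, 0.5], [0.0, 0.9]])   # assumed open-loop unstable system
B = np.eye(2)

K = rng.normal(scale=0.3, size=(2, 2))   # random linear feedback
X = np.zeros((51, 2)); U = np.zeros((50, 2))
for t in range(50):
    U[t] = K @ X[t] + rng.normal(size=2)               # exploration noise
    X[t + 1] = A @ X[t] + B @ U[t] + 0.1 * rng.normal(size=2)

Theta, *_ = np.linalg.lstsq(np.hstack([X[:-1], U]), X[1:], rcond=None)
Ahat, Bhat = Theta.T[:, :2], Theta.T[:, 2:]            # least-squares [A B]
P = solve_discrete_are(Ahat, Bhat, np.eye(2), np.eye(2))
Kstab = -np.linalg.solve(np.eye(2) + Bhat.T @ P @ Bhat, Bhat.T @ P @ Ahat)
print(np.abs(np.linalg.eigvals(Ahat + Bhat @ Kstab)).max())  # < 1 if stabilized
```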
On the composition of Berezin-Toeplitz operators on symplectic manifolds | We compute the second coefficient of the composition of two Berezin-Toeplitz
operators associated with the $\text{spin}^c$ Dirac operator on a symplectic
manifold, making use of the full off-diagonal expansion of the Bergman kernel.
| 0 | 0 | 1 | 0 | 0 | 0 |
Maximum and minimum operators of convex integrands | For given convex integrands $\gamma_{{}_{i}}: S^{n}\to \mathbb{R}_{+}$ (where
$i=1, 2$), the functions $\gamma_{{}_{max}}$ and $\gamma_{{}_{min}}$ can be
defined in a natural way. In this paper, we show that the Wulff shape of
$\gamma_{{}_{max}}$ (resp. the Wulff shape of $\gamma_{{}_{min}}$) is exactly
the convex hull of $(\mathcal{W}_{\gamma_{{}_{1}}}\cup
\mathcal{W}_{\gamma_{{}_{2}}})$ (resp. $\mathcal{W}_{\gamma_{{}_{1}}}\cap
\mathcal{W}_{\gamma_{{}_{2}}}$).
| 0 | 0 | 1 | 0 | 0 | 0 |
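For orientation, the easy half of the statement can be seen from the standard description of the Wulff shape as an intersection of half-spaces (a convention assumed here):

```latex
\mathcal{W}_{\gamma} \;=\; \bigcap_{\theta \in S^{n}}
\left\{\, x \in \mathbb{R}^{n+1} \;:\; x \cdot \theta \le \gamma(\theta) \,\right\}.
```

Since $\gamma_{min} = \min(\gamma_1, \gamma_2)$ imposes both families of constraints simultaneously, $\mathcal{W}_{\gamma_{min}} = \mathcal{W}_{\gamma_1} \cap \mathcal{W}_{\gamma_2}$ follows directly; the maximum case is the substantive one, since $\mathcal{W}_{\gamma_{max}} \supseteq \mathcal{W}_{\gamma_1} \cup \mathcal{W}_{\gamma_2}$ is immediate but identifying it with the convex hull requires work.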
Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off | Kernel methods are powerful learning methodologies that provide a simple way
to construct nonlinear algorithms from linear ones. Despite their popularity,
they suffer from poor scalability in big data scenarios. Various approximation
methods, including random feature approximation have been proposed to alleviate
the problem. However, the statistical consistency of most of these approximate
kernel methods is not well understood except for kernel ridge regression
wherein it has been shown that the random feature approximation is not only
computationally efficient but also statistically consistent with a minimax
optimal rate of convergence. In this paper, we investigate the efficacy of
random feature approximation in the context of kernel principal component
analysis (KPCA) by studying the trade-off between computational and statistical
behaviors of approximate KPCA. We show that the approximate KPCA is both
computationally and statistically efficient compared to KPCA in terms of the
error associated with reconstructing a kernel function based on its projection
onto the corresponding eigenspaces. Depending on the eigenvalue decay behavior
of the covariance operator, we show that only $n^{2/3}$ features (polynomial
decay) or $\sqrt{n}$ features (exponential decay) are needed to match the
statistical performance of KPCA. We also investigate their statistical
behaviors in terms of the convergence of corresponding eigenspaces wherein we
show that only $\sqrt{n}$ features are required to match the performance of
KPCA and if fewer than $\sqrt{n}$ features are used, then approximate KPCA has
a worse statistical behavior than that of KPCA.
| 0 | 0 | 1 | 1 | 0 | 0 |
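A compact sketch of the scheme analyzed above, using off-the-shelf components: map the data through $m$ random Fourier features and run linear PCA there, which approximates KPCA with an RBF kernel. The data, kernel width and feature count are placeholders; the $n^{2/3}$ choice mirrors the polynomial-decay regime quoted in the abstract:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # toy data

m = int(len(X) ** (2 / 3))                   # ~ n^{2/3} random features
Phi = RBFSampler(gamma=0.5, n_components=m, random_state=0).fit_transform(X)
pca = PCA(n_components=3).fit(Phi)           # linear PCA on random features
scores = pca.transform(Phi)                  # approximate kernel PCA scores
```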
A Heuristic Search Algorithm Using the Stability of Learning Algorithms in Certain Scenarios as the Fitness Function: An Artificial General Intelligence Engineering Approach | This paper presents a non-manual design engineering method based on a
heuristic search algorithm that searches for candidate agents in a solution
space formed by artificial intelligence agents modeled on the basis of bionics.
Compared with the artificial design method represented by meta-learning and the
bionics method represented by the neural architecture chip, this method is more
feasible for realizing artificial general intelligence, and it interacts much
better with cognitive neuroscience; at the same time, the engineering method
rests on the theoretical hypothesis that the final learning algorithm is stable
in certain scenarios and has generalization ability across various scenarios.
The paper discusses the theory preliminarily and proposes a possible
correlation between the theory and fixed-point theorems in mathematics. Limited
by the author's knowledge level, this correlation is proposed only as a
conjecture.
| 1 | 0 | 0 | 0 | 0 | 0 |
SEDIGISM: Structure, excitation, and dynamics of the inner Galactic interstellar medium | The origin and life-cycle of molecular clouds are still poorly constrained,
despite their importance for understanding the evolution of the interstellar
medium. We have carried out a systematic, homogeneous, spectroscopic survey of
the inner Galactic plane, in order to complement the many continuum Galactic
surveys available with crucial distance and gas-kinematic information. Our aim
is to combine this data set with recent infrared to sub-millimetre surveys at
similar angular resolutions. The SEDIGISM survey covers 78 deg^2 of the inner
Galaxy (-60 deg < l < +18 deg, |b| < 0.5 deg) in the J=2-1 rotational
transition of 13CO. This isotopologue of CO is less abundant than 12CO by
factors up to 100. Therefore, its emission has low to moderate optical depths,
and higher critical density, making it an ideal tracer of the cold, dense
interstellar medium. The data have been observed with the SHFI single-pixel
instrument at APEX. The observational setup covers the 13CO(2-1) and C18O(2-1)
lines, plus several transitions from other molecules. The observations have
been completed. Data reduction is in progress, and the final data products will
be made available in the near future. Here we give a detailed description of
the survey and the dedicated data reduction pipeline. Preliminary results based
on a science demonstration field covering -20 deg < l < -18.5 deg are
presented. Analysis of the 13CO(2-1) data in this field reveals compact clumps,
diffuse clouds, and filamentary structures at a range of heliocentric
distances. By combining our data with data in the (1-0) transition of CO
isotopologues from the ThrUMMS survey, we are able to compute a 3D realization
of the excitation temperature and optical depth in the interstellar medium.
Ultimately, this survey will provide a detailed, global view of the inner
Galactic interstellar medium at an unprecedented angular resolution of ~30".
| 0 | 1 | 0 | 0 | 0 | 0 |
Imitating Driver Behavior with Generative Adversarial Networks | The ability to accurately predict and simulate human driving behavior is
critical for the development of intelligent transportation systems. Traditional
modeling methods have employed simple parametric models and behavioral cloning.
This paper adopts a method for overcoming the problem of cascading errors
inherent in prior approaches, resulting in realistic behavior that is robust to
trajectory perturbations. We extend Generative Adversarial Imitation Learning
to the training of recurrent policies, and we demonstrate that our model
outperforms rule-based controllers and maximum likelihood models in realistic
highway simulations. Our model reproduces emergent behavior of human
drivers, such as lane change rate, while maintaining realistic control over
long time horizons.
| 1 | 0 | 0 | 0 | 0 | 0 |
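A minimal sketch of the adversarial part of the method named above (Generative Adversarial Imitation Learning): a discriminator is trained to separate expert state-action pairs from policy rollouts, and its output supplies the surrogate reward for the policy update. Dimensions and the reward form are assumptions; the paper pairs this with a recurrent policy trained by an RL algorithm, omitted here:

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def disc_step(expert_sa, policy_sa):
    # expert_sa, policy_sa: (batch, state_dim + action_dim) tensors.
    logits_e, logits_p = disc(expert_sa), disc(policy_sa)
    loss = (bce(logits_e, torch.ones_like(logits_e)) +
            bce(logits_p, torch.zeros_like(logits_p)))
    opt.zero_grad(); loss.backward(); opt.step()
    # Surrogate reward for the policy update step (one common choice).
    return -torch.log(1 - torch.sigmoid(disc(policy_sa)) + 1e-8)
```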
Topological semimetals with double-helix nodal link | Topological nodal line semimetals are characterized by the crossing of the
conduction and valence bands along one or more closed loops in the Brillouin
zone. Usually, these loops are either isolated or touch each other at some
highly symmetric points. Here, we introduce a new kind of nodal line semimetal
that contains a pair of linked nodal loops. We construct a concrete two-band
model that supports a pair of nodal lines with a double-helix structure, which
can be further twisted into a Hopf link because of the periodicity of the
Brillouin zone. The nodal lines are stabilized by the
combined spatial inversion $\mathcal{P}$ and time reversal $\mathcal{T}$
symmetry; the individual $\mathcal{P}$ and $\mathcal{T}$ symmetries must be
broken. The band structure exhibits nontrivial topology: each nodal loop carries a
$\pi$ Berry flux. Surface flat bands emerge at the open boundary and are
exactly encircled by the projection of the nodal lines on the surface Brillouin
zone. The experimental implementation of our model using cold atoms in optical
lattices is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
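A generic template consistent with the symmetry argument above (the particular $f_x$, $f_z$ that produce the double helix are the paper's and are not reproduced here): combined $\mathcal{PT}$ symmetry allows the two-band Bloch Hamiltonian to be chosen real, forbidding the $\sigma_y$ term,

```latex
H(\mathbf{k}) \;=\; f_0(\mathbf{k})\,\sigma_0 + f_x(\mathbf{k})\,\sigma_x + f_z(\mathbf{k})\,\sigma_z,
\qquad
\text{nodal lines:}\;\; f_x(\mathbf{k}) = f_z(\mathbf{k}) = 0,
```

so band touchings occur where two real functions of $\mathbf{k}$ vanish simultaneously, which generically happens along one-dimensional curves, and any loop linking such a curve picks up a Berry phase of $\pi$.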
Artificial Intelligence Assisted Power Grid Hardening in Response to Extreme Weather Events | In this paper, an artificial intelligence based grid hardening model is
proposed with the objective of improving power grid resilience in response to
extreme weather events. At first, a machine learning model is proposed to
predict the component states (either operational or outage) in response to the
extreme event. Then, these predictions are fed into a hardening model, which
determines strategic locations for placement of distributed generation (DG)
units. In contrast to existing literature in hardening and resilience
enhancement, this paper co-optimizes grid economic and resilience objectives by
considering the intricate dependencies of the two. The numerical simulations on
the standard IEEE 118-bus test system illustrate the merits and applicability
of the proposed hardening model. The results indicate that the proposed
hardening model through decentralized and distributed local energy resources
can produce a more robust solution that can protect the system significantly
against multiple component outages due to an extreme event.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computing isomorphisms and embeddings of finite fields | Let $\mathbb{F}_q$ be a finite field. Given two irreducible polynomials $f,g$
over $\mathbb{F}_q$, with $\mathrm{deg} f$ dividing $\mathrm{deg} g$, the
finite field embedding problem asks to compute an explicit description of a
field embedding of $\mathbb{F}_q[X]/f(X)$ into $\mathbb{F}_q[Y]/g(Y)$. When
$\mathrm{deg} f = \mathrm{deg} g$, this is also known as the isomorphism
problem.
This problem, a special instance of polynomial factorization, plays a central
role in computer algebra software. We review previous algorithms, due to
Lenstra, Allombert, Rains, and Narayanan, and propose improvements and
generalizations. Our detailed complexity analysis shows that our newly proposed
variants are at least as efficient as previously known algorithms, and in many
cases significantly better.
We also implement most of the presented algorithms, compare them with the
state of the art computer algebra software, and make the code available as open
source. Our experiments show that our new variants consistently outperform
available software.
| 1 | 0 | 1 | 0 | 0 | 0 |
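The problem statement can be made concrete with a tiny brute-force sketch, exponential in the degree and purely illustrative next to the algorithms reviewed above: an embedding of $\mathbb{F}_q[X]/f$ into $\mathbb{F}_q[Y]/g$ is determined by the image of $X$, i.e. by a root of $f$ in the larger field.

```python
import itertools

p = 2
f = [1, 1, 1]        # f(X) = 1 + X + X^2, irreducible over F_2
g = [1, 1, 0, 0, 1]  # g(Y) = 1 + Y + Y^4, irreducible, deg f | deg g

def mulmod(a, b, m, p):
    # Product in F_p[Y]/(m); coefficient lists are low-degree first, m monic.
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    d = len(m) - 1
    while len(res) > d:
        c = res.pop()
        for i in range(d):
            res[len(res) - d + i] = (res[len(res) - d + i] - c * m[i]) % p
    return res + [0] * (d - len(res))

def evaluate(f, x, m, p):
    # Horner evaluation of f at the element x of F_p[Y]/(m).
    acc = [0] * (len(m) - 1)
    for coeff in reversed(f):
        acc = mulmod(acc, x, m, p)
        acc[0] = (acc[0] + coeff) % p
    return acc

d = len(g) - 1
roots = [x for x in itertools.product(range(p), repeat=d)
         if not any(evaluate(f, list(x), g, p))]
print(roots)   # deg(f) roots: the possible images of X under an embedding
```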
Analysis of the current-driven domain wall motion in a ratchet ferromagnetic strip | The current-driven domain wall motion in a ratchet memory due to spin-orbit
torques is studied from both full micromagnetic simulations and the one
dimensional model. Within the framework of this model, the integration of the
anisotropy energy contribution leads to a new term in the well-known $q$-$\Phi$
equations, this contribution being responsible for driving the domain wall to
an equilibrium position. The comparison between the results drawn by the one
dimensional model and full micromagnetic simulations proves the utility of such
a model in order to predict the current-driven domain wall motion in the
ratchet memory. Additionally, since current pulses are applied, the paper shows
how the proper working of such a device requires an adequate balance of
excitation and relaxation times, the latter being longer than the former.
Finally, the current-driven regime of a ratchet memory is compared to the
field-driven regime described elsewhere, then highlighting the advantages of
this current-driven regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Hilbert coefficients, depth of associated graded rings and reduction numbers | Let $(R,\mathfrak{m})$ be a $d$-dimensional Cohen-Macaulay local ring, $I$ an
$\mathfrak{m}$-primary ideal of $R$ and $J=(x_1,...,x_d)$ a minimal reduction
of $I$. We show that if $J_{d-1}=(x_1,\dots,x_{d-1})$ and
$\sum_{n=1}^\infty \lambda\big((I^{n+1}\cap J_{d-1})/(JI^{n} \cap
J_{d-1})\big)=i$ where $i=0,1$, then depth $G(I)\geq d-i-1$. Moreover, we prove
that if $e_2(I) = \sum_{n=2}^\infty (n-1) \lambda (I^n/JI^{n-1})-2$, or if $I$
is integrally closed and $e_2(I) = \sum_{n=2}^\infty
(n-1)\lambda(I^{n}/JI^{n-1})-i$ where $i=3,4$, then $e_1(I) =
\sum_{n=1}^\infty \lambda(I^n / JI^{n-1})-1$. In addition, we show that $r(I)$
is independent of the choice of the minimal reduction $J$. Furthermore, we study
the independence of $r(I)$ under some other conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
End-to-End ASR-free Keyword Search from Speech | End-to-end (E2E) systems have achieved competitive results compared to
conventional hybrid hidden Markov model (HMM)-deep neural network based
automatic speech recognition (ASR) systems. Such E2E systems are attractive due
to the lack of dependence on alignments between input acoustic and output
grapheme or HMM state sequence during training. This paper explores the design
of an ASR-free end-to-end system for text query-based keyword search (KWS) from
speech trained with minimal supervision. Our E2E KWS system consists of three
sub-systems. The first sub-system is a recurrent neural network (RNN)-based
acoustic auto-encoder trained to reconstruct the audio through a
finite-dimensional representation. The second sub-system is a character-level
RNN language model using embeddings learned from a convolutional neural
network. Since the acoustic and text query embeddings occupy different
representation spaces, they are input to a third feed-forward neural network
that predicts whether the query occurs in the acoustic utterance or not. This
E2E ASR-free KWS system performs respectably despite lacking a conventional ASR
system and trains much faster.
| 1 | 0 | 0 | 0 | 0 | 0 |
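A skeletal version of the three sub-systems, with all dimensions assumed (the paper uses an RNN auto-encoder for audio and a CNN-derived character embedding; only the encoder halves are sketched):

```python
import torch
import torch.nn as nn

class E2EKWS(nn.Module):
    def __init__(self, n_mel=40, n_chars=30, d=128):
        super().__init__()
        self.audio_enc = nn.GRU(n_mel, d, batch_first=True)  # acoustic encoder
        self.char_emb = nn.Embedding(n_chars, 32)            # character embeddings
        self.query_enc = nn.GRU(32, d, batch_first=True)     # query language model
        self.decide = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                    nn.Linear(d, 1))         # occurrence predictor

    def forward(self, audio, query):
        _, h_a = self.audio_enc(audio)                       # utterance embedding
        _, h_q = self.query_enc(self.char_emb(query))        # query embedding
        return self.decide(torch.cat([h_a[-1], h_q[-1]], dim=-1))

model = E2EKWS()
logit = model(torch.randn(2, 100, 40), torch.randint(0, 30, (2, 12)))
```

The two embeddings live in different representation spaces, which is why a final feed-forward network, rather than a plain similarity score, makes the decision.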
Tracking performance in high multiplicities environment at ALICE | In LHC Run 3, ALICE will increase the data taking rate significantly to
50\,kHz continuous read out of minimum bias Pb-Pb events. This challenges the
online and offline computing infrastructure, requiring the processing of 50 times as
many events per second as in Run 2, and increasing the data compression ratio
from 5 to 20. Such high data compression cannot be achieved by lossless ZIP-like
algorithms; instead, it must use results from online reconstruction, which in turn
requires online calibration. These important online processing steps are the
most computing-intense ones, and will use GPUs as hardware accelerators. The
new online features are already under test during Run 2 in the High Level
Trigger (HLT) online processing farm. The TPC (Time Projection Chamber)
tracking algorithm for Run 3 is derived from the current HLT online tracking
and is based on the Cellular Automaton and Kalman Filter. HLT has deployed
online calibration for the TPC drift time, which needs to be extended to space
charge distortions calibration. This requires online reconstruction for
additional detectors like TRD (Transition Radiation Detector) and TOF (Time Of
Flight). We present prototypes of these developments, in particular a data
compression algorithm that achieves a compression factor of~9 on Run 2 TPC
data, and the efficiency of online TRD tracking. We give an outlook to the
challenges of TPC tracking with continuous read out.
| 0 | 1 | 0 | 0 | 0 | 0 |
Schatten class Hankel and $\overline{\partial}$-Neumann operators on pseudoconvex domains in $\mathbb{C}^n$ | Let $\Omega$ be a $C^2$-smooth bounded pseudoconvex domain in $\mathbb{C}^n$
for $n\geq 2$ and let $\varphi$ be a holomorphic function on $\Omega$ that is
$C^2$-smooth on the closure of $\Omega$. We prove that if
$H_{\overline{\varphi}}$ is in Schatten $p$-class for $p\leq 2n$ then $\varphi$
is a constant function. As a corollary, we show that the
$\overline{\partial}$-Neumann operator on $\Omega$ is not Hilbert-Schmidt.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cross-Sectional Variation of Intraday Liquidity, Cross-Impact, and their Effect on Portfolio Execution | The composition of natural liquidity has been changing over time. An analysis
of intraday volumes for the S&P500 constituent stocks illustrates that (i)
volume surprises, i.e., deviations from their respective forecasts, are
correlated across stocks, and (ii) this correlation increases during the last
few hours of the trading session. These observations could be attributed, in
part, to the prevalence of portfolio trading activity that is implicit in the
growth of ETF, passive and systematic investment strategies; and, to the
increased trading intensity of such strategies towards the end of the trading
session, e.g., due to execution of mutual fund inflows/outflows that are
benchmarked to the closing price on each day. In this paper, we investigate the
consequences of such portfolio liquidity on price impact and portfolio
execution. We derive a linear cross-asset market impact from a stylized model
that explicitly captures the fact that a certain fraction of natural liquidity
providers only trade portfolios of stocks whenever they choose to execute. We
find that due to cross-impact and its intraday variation, it is optimal for a
risk-neutral, cost minimizing liquidator to execute a portfolio of orders in a
coupled manner, as opposed to a separable VWAP-like execution that is often
assumed. The optimal schedule couples the execution of the various orders so as
to be able to take advantage of increased portfolio liquidity towards the end
of the day. A worst case analysis shows that the potential cost reduction from
this optimized execution schedule over the separable approach can be as high as
6% for plausible model parameters. Finally, we discuss how to estimate
cross-sectional price impact if one had a dataset of realized portfolio
transaction records that exploits the low-rank structure of its coefficient
matrix suggested by our analysis.
| 0 | 0 | 0 | 0 | 0 | 1 |
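The coupling effect described above already appears in a stylized quadratic model: with a time-varying cross-impact matrix $\Lambda_t$ and total order $q$, minimizing $\sum_t x_t^\top \Lambda_t x_t$ subject to $\sum_t x_t = q$ has the closed form $x_t = \Lambda_t^{-1}(\sum_s \Lambda_s^{-1})^{-1} q$, which loads trading onto the high-liquidity (low-impact) end of the day. All numbers below are invented:

```python
import numpy as np

T, q = 6, np.array([1.0, -0.5])                  # periods and order vector
base = np.array([[1.0, 0.4], [0.4, 1.0]])        # assumed cross-impact shape
scale = np.linspace(1.5, 0.5, T)                 # impact falls into the close
Lams = [s * base for s in scale]

Minv = sum(np.linalg.inv(L) for L in Lams)
x_opt = np.array([np.linalg.inv(L) @ np.linalg.solve(Minv, q) for L in Lams])
x_vwap = np.tile(q / T, (T, 1))                  # separable, VWAP-like schedule

cost = lambda xs: sum(xi @ L @ xi for xi, L in zip(xs, Lams))
print(cost(x_opt), cost(x_vwap))                 # coupled schedule costs less
```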
An investigation of pulsar searching techniques with the Fast Folding Algorithm | Here we present an in-depth study of the behaviour of the Fast Folding
Algorithm, an alternative pulsar searching technique to the Fast Fourier
Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility
to red noise, leave it insensitive to pulsars with long rotational periods (P >
1 s). This sensitivity gap has the potential to bias our understanding of the
period distribution of the pulsar population. The Fast Folding Algorithm, a
time-domain based pulsar searching technique, has the potential to overcome
some of these biases. Modern distributed-computing frameworks now allow for the
application of this algorithm to all-sky blind pulsar surveys for the first
time. However, many aspects of the behaviour of this search technique remain
poorly understood, including its responsiveness to variations in pulse shape
and the presence of red noise. Using a custom CPU-based implementation of the
Fast Folding Algorithm, ffancy, we have conducted an in-depth study into the
behaviour of the Fast Folding Algorithm in both an ideal, white noise regime as
well as a trial on observational data from the HTRU-S Low Latitude pulsar
survey, including a comparison to the behaviour of the Fast Fourier Transform.
We are able to both confirm and expand upon earlier studies that demonstrate
the ability of the Fast Folding Algorithm to outperform the Fast Fourier
Transform under ideal white noise conditions, and demonstrate a significant
improvement in sensitivity to long-period pulsars in real observational data
through the use of the Fast Folding Algorithm.
| 0 | 1 | 0 | 0 | 0 | 0 |
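For intuition, folding in the time domain can be sketched in a few lines: fold the series at each trial period and score the folded profile. This naive per-trial refold is not the recursive FFA itself (whose point is to share work across trials), but it shows what the statistic responds to:

```python
import numpy as np

def folded_snr(ts, period, dt, nbins=32):
    # Fold at a trial period and score the profile (a crude S/N proxy).
    phase = ((np.arange(len(ts)) * dt) / period) % 1.0
    idx = (phase * nbins).astype(int)
    prof = np.bincount(idx, weights=ts, minlength=nbins)
    hits = np.bincount(idx, minlength=nbins)
    prof = prof / np.maximum(hits, 1)
    return (prof.max() - np.median(prof)) / (prof.std() + 1e-12)

dt, n, P = 0.1, 100_000, 5.0                     # toy long-period pulsar
t = np.arange(n) * dt
ts = np.random.default_rng(0).normal(size=n)
ts += 0.3 * (np.sin(2 * np.pi * t / P) > 0.95)   # narrow periodic pulses
trials = np.linspace(4.9, 5.1, 201)
print(trials[np.argmax([folded_snr(ts, p, dt) for p in trials])])
```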
Symplectic stability on manifolds with cylindrical ends | A famous result of Jürgen Moser states that a symplectic form on a compact
manifold cannot be deformed within its cohomology class to an inequivalent
symplectic form. It is well known that this does not hold in general for
noncompact symplectic manifolds. The notion of Eliashberg-Gromov convex ends
provides a natural restricted setting for the study of analogs of Moser's
symplectic stability result in the noncompact case, and this has been
significantly developed in work of Cieliebak-Eliashberg. Retaining the end
structure on the underlying smooth manifold, but dropping the convexity and
completeness assumptions on the symplectic forms at infinity we show that
symplectic stability holds under a natural growth condition on the path of
symplectic forms. The result can be straightforwardly applied as we show
through explicit examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
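For context, the compact-case mechanism referred to above is Moser's trick: for a path $\omega_t$ of symplectic forms in a fixed cohomology class, write $\frac{d}{dt}\omega_t = d\sigma_t$ and define a time-dependent vector field by

```latex
\iota_{X_t}\omega_t = -\sigma_t,
\qquad
\frac{d}{dt}\varphi_t = X_t \circ \varphi_t, \quad \varphi_0 = \mathrm{id},
```

so that $\frac{d}{dt}(\varphi_t^*\omega_t) = \varphi_t^*(\mathcal{L}_{X_t}\omega_t + d\sigma_t) = \varphi_t^*\, d(\iota_{X_t}\omega_t + \sigma_t) = 0$, giving $\varphi_1^*\omega_1 = \omega_0$. On a noncompact manifold the flow $\varphi_t$ need not exist for all $t$, which is exactly where growth conditions on the path of forms enter.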
Making the Dzyaloshinskii-Moriya interaction visible | Brillouin light spectroscopy is a powerful and robust technique for measuring
the interfacial Dzyaloshinskii-Moriya interaction in thin films with broken
inversion symmetry. Here we show that the magnon visibility, i.e. the intensity
of the inelastically scattered light, strongly depends on the thickness of the
dielectric seed material - SiO$_2$. By using both, analytical thin-film optics
and numerical calculations, we reproduce the experimental data. We therefore
provide a guideline for the maximization of the signal by adapting the
substrate properties to the geometry of the measurement. Such a boost-up of the
signal eases the magnon visualization in ultrathin magnetic films, speeds up
the measurement and increases the reliability of the data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Approximate Collapsed Gibbs Clustering with Expectation Propagation | We develop a framework for approximating collapsed Gibbs sampling in
generative latent variable cluster models. Collapsed Gibbs is a popular MCMC
method, which integrates out variables in the posterior to improve mixing.
Unfortunately for many complex models, integrating out these variables is
either analytically or computationally intractable. We efficiently approximate
the necessary collapsed Gibbs integrals by borrowing ideas from expectation
propagation. We present two case studies where exact collapsed Gibbs sampling
is intractable: mixtures of Student-t's and time series clustering. Our
experiments on real and synthetic data show that our approximate sampler
enables a runtime-accuracy tradeoff in sampling these types of models,
providing results with competitive accuracy much more rapidly than the naive
Gibbs samplers one would otherwise rely on in these scenarios.
| 0 | 0 | 0 | 1 | 0 | 0 |
Thermal graphene metamaterials and epsilon-near-zero high temperature plasmonics | The key feature of a thermophotovoltaic (TPV) emitter is the enhancement of
thermal emission corresponding to energies just above the bandgap of the
absorbing photovoltaic cell and simultaneous suppression of thermal emission
below the bandgap. We show here that a single layer plasmonic coating can
perform this task with high efficiency. Our key design principle involves
tuning the epsilon-near-zero frequency (plasma frequency) of the metal acting
as a thermal emitter to the electronic bandgap of the semiconducting cell. This
approach utilizes the change in reflectivity of a metal near its plasma
frequency (epsilon-near-zero frequency) to lead to spectrally selective thermal
emission and can be adapted to large area coatings using high temperature
plasmonic materials. We provide a detailed analysis of the spectral and angular
performance of high temperature plasmonic coatings as TPV emitters. We show the
potential of such high temperature plasmonic thermal emitter coatings (p-TECs)
for narrowband near-field thermal emission. We also show the enhancement of
near-surface energy density in graphene-multilayer thermal metamaterials due to
a topological transition at an effective epsilon-near-zero frequency. This
opens up spectrally selective thermal emission from graphene multilayers in the
infrared frequency regime. Our design paves the way for the development of
single layer p-TECs and graphene multilayers for spectrally selective radiative
heat transfer applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
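The design principle in the abstract above, reflectivity collapsing near the epsilon-near-zero point, can be seen in a few lines with a Drude model and Kirchhoff's law (emissivity equals absorptivity); the parameters are normalized placeholders:

```python
import numpy as np

eps_inf, wp, gamma = 1.0, 1.0, 0.05              # Drude parameters, wp = 1
w = np.linspace(0.2, 2.0, 500)                   # frequency in units of wp
eps = eps_inf - wp**2 / (w**2 + 1j * gamma * w)  # Drude permittivity
nc = np.sqrt(eps)                                # complex refractive index
r = (1 - nc) / (1 + nc)                          # normal-incidence Fresnel
emissivity = 1 - np.abs(r)**2                    # Kirchhoff's law
print(w[np.argmax(emissivity)])                  # peaks near the ENZ point w ~ wp
```

Tuning $\omega_p$ to the photovoltaic cell's bandgap then places this emission peak just above the gap while the metallic reflectivity suppresses emission below it.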
Ferroionic states in ferroelectric thin films | The electric coupling between surface ions and bulk ferroelectricity gives
rise to a continuum of mixed states in ferroelectric thin films, exquisitely
sensitive to temperature and external factors, such as applied voltage and
oxygen pressure. Here we develop the comprehensive analytical description of
these coupled ferroelectric and ionic ("ferroionic") states by combining the
Ginzburg-Landau-Devonshire description of the ferroelectric properties of the
film with Langmuir adsorption model for the electrochemical reaction at the
film surface. We explore the thermodynamic and kinetic characteristics of the
ferroionic states as a function of temperature, film thickness, and external
electric potential. These studies provide new insight into the mesoscopic
properties of ferroelectric thin films whose surface is exposed to a chemical
environment that acts as a supplier of screening charges.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantum Chebyshev's Inequality and Applications | In this paper we provide new quantum algorithms with polynomial speed-up for
a range of problems for which no such results were known, or we improve
previous algorithms. First, we consider the approximation of the frequency
moments $F_k$ of order $k \geq 3$ in the multi-pass streaming model with
updates (turnstile model). We design a $P$-pass quantum streaming algorithm
with memory $M$ satisfying a tradeoff of $P^2 M = \tilde{O}(n^{1-2/k})$,
whereas the best classical algorithm requires $P M = \Theta(n^{1-2/k})$. Then,
we study the problem of estimating the number $m$ of edges and the number $t$
of triangles given query access to an $n$-vertex graph. We describe optimal
quantum algorithms that perform $\tilde{O}(\sqrt{n}/m^{1/4})$ and
$\tilde{O}(\sqrt{n}/t^{1/6} + m^{3/4}/\sqrt{t})$ queries respectively. This is
a quadratic speed-up compared to the classical complexity of these problems.
For this purpose we develop a new quantum paradigm that we call Quantum
Chebyshev's inequality. Namely we demonstrate that, in a certain model of
quantum sampling, one can approximate with relative error the mean of any
random variable with a number of quantum samples that is linear in the ratio of
the square root of the variance to the mean. Classically the dependency is
quadratic. Our algorithm subsumes a previous result of Montanaro [Mon15]. This
new paradigm is based on a refinement of the Amplitude Estimation algorithm of
Brassard et al. [BHMT02] and of previous quantum algorithms for the mean
estimation problem. We show that this speed-up is optimal, and we identify
another common model of quantum sampling where it cannot be obtained. For our
applications, we also adapt the variable-time amplitude amplification technique
of Ambainis [Amb10] into a variable-time amplitude estimation algorithm.
| 0 | 0 | 0 | 1 | 0 | 0 |
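The headline scaling can be written out explicitly. To estimate $\mu = \mathbb{E}[X]$ to relative error $\varepsilon$, classical sampling with Chebyshev's inequality needs on the order of

```latex
n_{\mathrm{classical}} = O\!\left(\frac{\sigma^{2}}{\varepsilon^{2}\mu^{2}}\right)
\qquad\text{versus}\qquad
n_{\mathrm{quantum}} = \tilde{O}\!\left(\frac{\sigma}{\varepsilon\,\mu}\right)
```

samples: the dependence on $\sigma/\mu$ drops from quadratic to linear, as stated in the abstract. (The placement of $\varepsilon$ here follows the usual amplitude-estimation scaling and is an assumption, not the paper's exact statement.)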
Learning Convex Regularizers for Optimal Bayesian Denoising | We propose a data-driven algorithm for the maximum a posteriori (MAP)
estimation of stochastic processes from noisy observations. The primary
statistical properties of the sought signal are specified by the penalty
function (i.e., negative logarithm of the prior probability density function).
Our alternating direction method of multipliers (ADMM)-based approach
translates the estimation task into successive applications of the proximal
mapping of the penalty function. Capitalizing on this direct link, we define
the proximal operator as a parametric spline curve and optimize the spline
coefficients by minimizing the average reconstruction error for a given
training set. The key aspects of our learning method are that the associated
penalty function is constrained to be convex and the convergence of the ADMM
iterations is proven. As a result of these theoretical guarantees, adaptation
of the proposed framework to different levels of measurement noise is extremely
simple and does not require any retraining. We apply our method to estimation
of both sparse and non-sparse models of Lévy processes for which the
minimum mean square error (MMSE) estimators are available. We carry out a
single training session and perform comparisons at various signal-to-noise
ratio (SNR) values. Simulations illustrate that the performance of our
algorithm is practically identical to the one of the MMSE estimator
irrespective of the noise power.
| 1 | 0 | 0 | 1 | 0 | 0 |
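A bare-bones version of the ADMM loop described above for the denoising case $y = x + \text{noise}$, i.e. $\min_x \tfrac12\|y - x\|^2 + \phi(x)$. Soft-thresholding stands in for the learned spline proximal operator (an assumption for illustration, not the paper's trained curve):

```python
import numpy as np

def prox(v, t=0.3):
    # Placeholder prox; the paper learns this map as a parametric spline.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_denoise(y, rho=1.0, iters=50):
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)   # quadratic data-fit step
        z = prox(x + u)                         # proximal step
        u = u + x - z                           # dual update
    return z

y = np.array([0.1, 2.0, -0.05, 1.5])
print(admm_denoise(y))                          # small entries shrink to zero
```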
On the $L^p$ boundedness of wave operators for two-dimensional Schrödinger operators with threshold obstructions | Let $H=-\Delta+V$ be a Schrödinger operator on $L^2(\mathbb R^2)$ with
real-valued potential $V$, and let $H_0=-\Delta$. If $V$ has sufficient
pointwise decay, the wave operators $W_{\pm}=s-\lim_{t\to \pm\infty}
e^{itH}e^{-itH_0}$ are known to be bounded on $L^p(\mathbb R^2)$ for all $1< p<
\infty$ if zero is not an eigenvalue or resonance. We show that if there is an
s-wave resonance or an eigenvalue only at zero, then the wave operators are
bounded on $L^p(\mathbb R^2)$ for $1 < p<\infty$. This result stands in
contrast to results in higher dimensions, where the presence of zero energy
obstructions is known to shrink the range of valid exponents $p$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Data-driven Advice for Applying Machine Learning to Bioinformatics Problems | As the bioinformatics field grows, it must keep pace not only with new data
but with new algorithms. Here we contribute a thorough analysis of 13
state-of-the-art, commonly used machine learning algorithms on a set of 165
publicly available classification problems in order to provide data-driven
algorithm recommendations to current researchers. We present a number of
statistical and visual comparisons of algorithm performance and quantify the
effect of model selection and algorithm tuning for each algorithm and dataset.
The analysis culminates in the recommendation of five algorithms with
hyperparameters that maximize classifier performance across the tested
problems, as well as general guidelines for applying machine learning to
supervised classification problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Bivariate Causal Discovery and its Applications to Gene Expression and Imaging Data Analysis | The mainstream of research in genetics, epigenetics and imaging data analysis
focuses on statistical association or exploring statistical dependence between
variables. Despite significant progress in genetic research,
understanding the etiology and mechanism of complex phenotypes remains elusive.
Using association analysis as a major analytical platform for the complex data
analysis is a key issue that hampers the theoretic development of genomic
science and its application in practice. Causal inference is an essential
component for the discovery of mechanistic relationships among complex
phenotypes. Many researchers suggest making the transition from association to
causation. Despite its fundamental role in science, engineering and
biomedicine, the traditional methods for causal inference require at least
three variables. However, quantitative genetic analysis such as QTL, eQTL,
mQTL, and genomic-imaging data analysis requires exploring the causal
relationships between two variables. This paper will focus on bivariate causal
discovery. We will introduce independence of cause and mechanism (ICM) as a
basic principle for causal inference, algorithmic information theory and
additive noise model (ANM) as major tools for bivariate causal discovery.
Large-scale simulations will be performed to evaluate the feasibility of the
ANM for bivariate causal discovery. To further evaluate their performance for
causal inference, the ANM will be applied to the construction of gene
regulatory networks. Also, the ANM will be applied to trait-imaging data
analysis to illustrate three scenarios: presence of both causation and
association, presence of association with absence of causation, and presence
of causation with lack of association between two variables.
| 0 | 0 | 0 | 0 | 1 | 0 |
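A compact sketch of the ANM direction test discussed above: fit a nonparametric regression in each direction and prefer the one whose residuals look independent of the putative cause, scored here with a plug-in HSIC estimate (kernel widths and regularization are arbitrary choices):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def hsic(x, y, s=1.0):
    # Biased HSIC estimate with Gaussian kernels (dependence measure).
    n = len(x)
    K = lambda v: np.exp(-(v[:, None] - v[None, :]) ** 2 / (2 * s * s))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K(x) @ H @ K(y) @ H) / (n - 1) ** 2

def anm_direction(x, y):
    rxy = y - KernelRidge(kernel="rbf", alpha=0.1).fit(x[:, None], y).predict(x[:, None])
    ryx = x - KernelRidge(kernel="rbf", alpha=0.1).fit(y[:, None], x).predict(y[:, None])
    return "x->y" if hsic(x, rxy) < hsic(y, ryx) else "y->x"

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.tanh(x) + 0.2 * rng.normal(size=300)   # ground truth: x causes y
print(anm_direction(x, y))
```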
Revisiting Distillation and Incremental Classifier Learning | One of the key differences between the learning mechanism of humans and
Artificial Neural Networks (ANNs) is the ability of humans to learn one task at
a time. ANNs, on the other hand, can only learn multiple tasks simultaneously.
Any attempts at learning new tasks incrementally cause them to completely
forget about previous tasks. This lack of ability to learn incrementally,
called Catastrophic Forgetting, is considered a major hurdle in building a true
AI system. In this paper, our goal is to isolate the truly effective existing
ideas for incremental learning from those that only work under certain
conditions. To this end, we first thoroughly analyze the current state of the
art (iCaRL) method for incremental learning and demonstrate that the good
performance of the system is not because of the reasons presented in the
existing literature. We conclude that the success of iCaRL is primarily due to
knowledge distillation and recognize a key limitation of knowledge
distillation, i.e., it often leads to bias in classifiers. Finally, we propose a
dynamic threshold moving algorithm that is able to successfully remove this
bias. We demonstrate the effectiveness of our algorithm on CIFAR100 and MNIST
datasets showing near-optimal results. Our implementation is available at
this https URL.
| 0 | 0 | 0 | 1 | 0 | 0 |
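For reference, the two ingredients named above in a minimal form. The distillation loss is the standard temperature-scaled KL term; the rescaling in `threshold_moving` is a simple static variant of bias correction, shown only to convey the idea (the paper's dynamic algorithm differs):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Standard knowledge distillation: match softened teacher distributions.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def threshold_moving(probs, class_prior):
    # Hypothetical static correction: divide out the (old-task-heavy) prior
    # that biases the classifier, then renormalize.
    w = probs / torch.as_tensor(class_prior)
    return w / w.sum(dim=1, keepdim=True)
```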
Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace | Human-in-the-loop manipulation is useful when autonomous grasping is not
able to deal sufficiently well with corner cases or cannot operate fast enough.
Using the teleoperator's hand as an input device can provide an intuitive
control method but requires mapping between pose spaces which may not be
similar. We propose a low-dimensional and continuous teleoperation subspace
which can be used as an intermediary for mapping between different hand pose
spaces. We present an algorithm to project between pose space and teleoperation
subspace. We use a non-anthropomorphic robot to experimentally prove that it is
possible for teleoperation subspaces to effectively and intuitively enable
teleoperation. In experiments, novice users completed pick and place tasks
significantly faster using teleoperation subspace mapping than they did using
state of the art teleoperation methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
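A stripped-down linear version of the mapping idea above: fit projections from each hand's pose space into a shared low-dimensional subspace on paired demonstrations, then route new poses human → subspace → robot. The paper's subspace has dedicated structure; the dimensions and synthetic data here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(size=(200, 20))                  # 20-DoF human hand poses
latent = human @ rng.normal(size=(20, 3))           # shared 3-D teleop subspace
robot = latent @ rng.normal(size=(3, 7))            # paired 7-DoF robot poses

A, *_ = np.linalg.lstsq(human, latent, rcond=None)  # human pose -> subspace
B, *_ = np.linalg.lstsq(latent, robot, rcond=None)  # subspace -> robot pose
print(rng.normal(size=20) @ A @ B)                  # commanded robot pose
```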
A Hierarchical Bayes Approach to Adjust for Selection Bias in Before-After Analyses of Vision Zero Policies | American cities devote significant resources to the implementation of traffic
safety countermeasures that prevent pedestrian fatalities. However, the
before-after comparisons typically used to evaluate the success of these
countermeasures often suffer from selection bias. This paper motivates the
tendency for selection bias to overestimate the benefits of traffic safety
policy, using New York City's Vision Zero strategy as an example. The NASS
General Estimates System, Fatality Analysis Reporting System and other
databases are combined into a Bayesian hierarchical model to calculate a more
realistic before-after comparison. The results confirm the before-after
analysis of New York City's Vision Zero policy did in fact overestimate the
effect of the policy, and a more realistic estimate is roughly two-thirds the
size.
| 0 | 0 | 0 | 1 | 0 | 0 |
Cavitation near the oscillating piezoelectric plate in water | It is known that gas bubbles on the surface bounding a fluid flow can change
the coefficient of friction and affect the parameters of the boundary layer. In
this paper, we propose a method that allows us to create, in the near-wall
region, a thin layer of liquid filled with bubbles. It will be shown that if
there is an oscillating piezoelectric plate on the surface bounding a liquid,
then, under certain conditions, cavitation develops in the boundary layer. The
relationship between the parameters of cavitation and the characteristics of
the piezoelectric plate oscillations is obtained. Possible applications are
discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Revised Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks | Inverse problems correspond to a certain type of optimization problems
formulated over appropriate input distributions. Recently, there has been a
growing interest in understanding the computational hardness of these
optimization problems, not only in the worst case, but in an average-complexity
sense under this same input distribution.
In this revised note, we are interested in studying another aspect of
hardness, related to the ability to learn how to solve a problem by simply
observing a collection of previously solved instances. These 'planted
solutions' are used to supervise the training of an appropriate predictive
model that parametrizes a broad class of algorithms, with the hope that the
resulting model will provide good accuracy-complexity tradeoffs in the average
sense.
We illustrate this setup on the Quadratic Assignment Problem, a fundamental
problem in Network Science. We observe that data-driven models based on Graph
Neural Networks offer intriguingly good performance, even in regimes where
standard relaxation based techniques appear to suffer.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Completeness Results of Hoare Logic Relative to the Standard Model | The general completeness problem of Hoare logic relative to the standard
model $N$ of Peano arithmetic has been studied by Cook, and it allows for the
use of arbitrary arithmetical formulas as assertions. In practice, the
assertions would be simple arithmetical formulas, e.g. of a low level in the
arithmetical hierarchy. In addition, we find that, by restricting inputs to
$N$, the complexity of the minimal assertion theory for the completeness of
Hoare logic to hold can be reduced. This paper further studies the completeness
of Hoare Logic relative to $N$ by restricting assertions to subclasses of
arithmetical formulas (and by restricting inputs to $N$). Our completeness
results refine Cook's result by reducing the complexity of the assertion
theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
Shift-Coupling of Random Rooted Graphs and Networks | In this paper, we present a result similar to the shift-coupling result of
Thorisson (1996) in the context of random graphs and networks. The result is
that a given random rooted network can be obtained by changing the root of
another given one if and only if the distributions of the two agree on the
invariant sigma-field. Several applications of the result are presented for the
case of unimodular networks. In particular, it is shown that the distribution
of a unimodular network is uniquely determined by its restriction to the
invariant sigma-field. Also, the theorem is applied to the existence of an
invariant transport kernel that balances between two given (discrete) measures
on the vertices. An application is the existence of a so called extra head
scheme for the Bernoulli process on an infinite unimodular graph. Moreover, a
construction is presented for balancing transport kernels that is a
generalization of the Gale-Shapley stable matching algorithm in bipartite
graphs. Another application is on a general method that covers the situations
where some vertices and edges are added to a unimodular network and then, to
make it unimodular, the probability measure is biased and then a new root is
selected. It is proved that this method provides all possible
unimodularizations in these situations. Finally, analogous existing results for
stationary point processes and unimodular networks are discussed in detail.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast Automated Analysis of Strong Gravitational Lenses with Convolutional Neural Networks | Quantifying image distortions caused by strong gravitational lensing and
estimating the corresponding matter distribution in lensing galaxies has been
primarily performed by maximum likelihood modeling of observations. This is
typically a time- and resource-consuming procedure, requiring sophisticated
lensing codes, several data preparation steps, and finding the maximum
likelihood model parameters in a computationally expensive process with
downhill optimizers. Accurate analysis of a single lens can take up to a few
weeks and requires the attention of dedicated experts. Tens of thousands of new
lenses are expected to be discovered with the upcoming generation of ground and
space surveys, the analysis of which can be a challenging task. Here we report
the use of deep convolutional neural networks to accurately estimate lensing
parameters in an extremely fast and automated way, circumventing the
difficulties faced by maximum likelihood methods. We also show that lens
removal can be made fast and automated using Independent Component Analysis of
multi-filter imaging data. Our networks can recover the parameters of the
Singular Isothermal Ellipsoid density profile, commonly used to model strong
lensing systems, with an accuracy comparable to the uncertainties of
sophisticated models, but about ten million times faster: 100 systems in
approximately 1s on a single graphics processing unit. These networks can
provide a way for non-experts to obtain lensing parameter estimates for large
samples of data. Our results suggest that neural networks can be a powerful and
fast alternative to maximum likelihood procedures commonly used in
astrophysics, radically transforming the traditional methods of data reduction
and analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks | Deep neural networks (DNNs) have excellent representative power and are
state-of-the-art classifiers on many tasks. However, they often do not capture
their own uncertainties well, making them less robust in the real world: they
overconfidently extrapolate and do not notice domain shift. Gaussian processes
(GPs) with RBF kernels on the other hand have better calibrated uncertainties
and do not overconfidently extrapolate far from data in their training set.
However, GPs have poor representational power and do not perform as well as
DNNs on complex domains. In this paper we show that GP hybrid deep networks,
GPDNNs, (GPs on top of DNNs and trained end-to-end) inherit the nice properties
of both GPs and DNNs and are much more robust to adversarial examples. When
extrapolating to adversarial examples and testing in domain shift settings,
GPDNNs frequently output high entropy class probabilities corresponding to
essentially "don't know". GPDNNs are therefore promising as deep architectures
that know when they don't know.
| 0 | 0 | 0 | 1 | 0 | 0 |
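The hybrid described above can be imitated, for illustration, by placing an RBF-kernel GP classifier on features learned by a neural network. A minimal scikit-learn sketch, assuming a simplified two-stage fit rather than the end-to-end training the abstract describes; `hidden_features` is our helper, not a library function:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels

dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0).fit(X, y)

def hidden_features(net, X):
    """Forward pass through the hidden (ReLU) layers only."""
    a = X
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)
    return a

gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0))
gp.fit(hidden_features(dnn, X), y)

p = gp.predict_proba(hidden_features(dnn, X))
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)   # high entropy = "don't know"
print(entropy.mean())
```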
Elliptic regularization of the isometric immersion problem | We introduce an elliptic regularization of the PDE system representing the
isometric immersion of a surface in $\mathbb R^{3}$. The regularization is
geometric, and has a natural variational interpretation.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Debris Backwards Flow Simulation System for Malaysia Airlines Flight 370 | This paper presents a system based on a Two-Way Particle-Tracking Model to
analyze possible crash positions of flight MH370. The particle simulator
includes a simple flow simulation of the debris based on a Lagrangian approach
and a module to extract appropriate ocean current data from netCDF files. The
influence of wind, waves, immersion depth and hydrodynamic behavior are not
considered in the simulation.
| 1 | 1 | 0 | 0 | 0 | 0 |
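At its core, a Lagrangian drift simulation of this kind advects particles through a current field step by step. A minimal Python sketch of one step, assuming current components `u` and `v` have already been interpolated from the netCDF data (both callables are hypothetical placeholders):

```python
import numpy as np

def advect(lons, lats, u, v, dt_s):
    """One explicit Euler step of Lagrangian drift.

    lons, lats : particle positions in degrees
    u, v       : hypothetical callables returning eastward/northward
                 current speed (m/s), pre-interpolated from netCDF data
    dt_s       : time step in seconds (negative for backwards tracking)
    """
    m_per_deg = 111_320.0                     # metres per degree of latitude
    dlat = v(lons, lats) * dt_s / m_per_deg
    dlon = u(lons, lats) * dt_s / (m_per_deg * np.cos(np.radians(lats)))
    return lons + dlon, lats + dlat
```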
End-to-End Task-Completion Neural Dialogue Systems | One of the major drawbacks of modularized task-completion dialogue systems is
that each module is trained individually, which presents several challenges.
For example, downstream modules are affected by earlier modules, and the
performance of the entire system is not robust to the accumulated errors. This
paper presents a novel end-to-end learning framework for task-completion
dialogue systems to tackle such issues. Our neural dialogue system can directly
interact with a structured database to assist users in accessing information
and accomplishing certain tasks. The reinforcement learning based dialogue
manager offers robust capabilities to handle noise caused by other components
of the dialogue system. Our experiments in a movie-ticket booking domain show
that our end-to-end system not only outperforms modularized dialogue system
baselines for both objective and subjective evaluation, but is also robust to
noise, as demonstrated by several systematic experiments with different error
granularity and rates specific to the language understanding module.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving power of genetic association studies by extreme phenotype sampling: a review and some new results | Extreme phenotype sampling is a selective genotyping design for genetic
association studies where only individuals with extreme values of a continuous
trait are genotyped for a set of genetic variants. Under financial or other
limitations, this design is assumed to improve the power to detect associations
between genetic variants and the trait, compared to randomly selecting the same
number of individuals for genotyping. Here we present extensions of likelihood
models that can be used for inference when the data are sampled according to
the extreme phenotype sampling design. Computational methods for parameter
estimation and hypothesis testing are provided. We consider methods for common
variant genetic effects and gene-environment interaction effects in linear
regression models with a normally distributed trait. We use simulated and real
data to show that extreme phenotype sampling can be powerful compared to random
sampling, but that this does not hold for all extreme sampling methods and
situations.
| 0 | 0 | 0 | 1 | 0 | 0 |
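The claimed power gain can be illustrated with a small simulation. A sketch under simplifying assumptions: one common variant, a normally distributed trait, and a naive linear regression on the genotyped subsample (which, as the abstract cautions, is not valid for all extreme sampling analyses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(extreme, n_total=2000, n_geno=400, beta=0.15, maf=0.3, reps=500):
    hits = 0
    for _ in range(reps):
        g = rng.binomial(2, maf, n_total)            # genotype 0/1/2
        y = beta * g + rng.normal(size=n_total)      # continuous trait
        if extreme:                                   # genotype the two tails
            idx = np.argsort(y)
            keep = np.r_[idx[:n_geno // 2], idx[-n_geno // 2:]]
        else:                                         # random selection
            keep = rng.choice(n_total, n_geno, replace=False)
        hits += stats.linregress(g[keep], y[keep]).pvalue < 5e-4
    return hits / reps

print(power(extreme=True), power(extreme=False))
```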
Energy network: towards an interconnected energy infrastructure for the future | The fundamental theory of energy networks in different energy forms is
established following an in-depth analysis of the nature of energy for
comprehensive energy utilization. The definition of an energy network is given.
Combining the generalized balance equation of energy in space and the Pfaffian
equation, the generalized transfer equations of energy in lines (pipes) are
proposed. The energy variation laws in the transfer processes are investigated.
To establish the equations of energy networks, the Kirchhoff's Law in electric
networks is extended to energy networks, which is called the Generalized
Kirchhoff"s Law. According to the linear phenomenological law, the generalized
equivalent energy transfer equations with lumped parameters are derived in
terms of the characteristic equations of energy transfer in lines(pipes).The
equations are finally unified into a complete energy network equation system
and its solvability is further discussed. Experiments are carried out on a
combined cooling, heating and power(CCHP) system in engineering, the energy
network theory proposed in this paper is used to model and analyze this system.
By comparing the theoretical results obtained by our modeling approach and the
data measured in experiments, the energy equations are validated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Invitation to Alexandrov geometry: CAT[0] spaces | The idea is to demonstrate the beauty and power of Alexandrov geometry by
reaching interesting applications with a minimum of preparation.
The topics include
1. Estimates on the number of collisions in billiards.
2. Construction of exotic aspherical manifolds.
3. The geometry of two-convex sets in Euclidean space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Smoothing of transport plans with fixed marginals and rigorous semiclassical limit of the Hohenberg-Kohn functional | We prove rigorously that the exact N-electron Hohenberg-Kohn density
functional converges in the strongly interacting limit to the strictly
correlated electrons (SCE) functional, and that the absolute value squared of
the associated constrained-search wavefunction tends weakly in the sense of
probability measures to a minimizer of the multi-marginal optimal transport
problem with Coulomb cost associated to the SCE functional. This extends our
previous work for N=2 [CFK11]. The correct limit problem has been derived in
the physics literature by Seidl [Se99] and Seidl, Gori-Giorgi and Savin
[SGS07]; in these papers the lack of a rigorous proof was pointed out.
We also give a mathematical counterexample to this type of result, by
replacing the constraint of given one-body density -- an infinite-dimensional
quadratic expression in the wavefunction -- by an infinite-dimensional
quadratic expression in the wavefunction and its gradient. Connections with the
Lawrentiev phenomenon in the calculus of variations are indicated.
| 0 | 0 | 1 | 0 | 0 | 0 |
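For reference, the multi-marginal optimal transport problem with Coulomb cost mentioned above is usually written as follows (standard formulation; the notation here is ours):

$$ V_{ee}^{\mathrm{SCE}}[\rho] \;=\; \min_{\gamma \mapsto \rho} \int_{\mathbb{R}^{3N}} \sum_{1\le i<j\le N} \frac{1}{|r_i - r_j|}\, \mathrm{d}\gamma(r_1,\dots,r_N), $$

where the minimum runs over probability measures $\gamma$ on $\mathbb{R}^{3N}$ whose one-body marginals all equal $\rho/N$.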
The extension of some D(4)-pairs | In this paper we illustrate the use of the results from [1] by proving that a
$D(4)$-triple $\{a, b, c\}$ with $a < b < a + 57\sqrt{a}$ has a unique
extension to a quadruple with a larger element. This furthermore implies that a
$D(4)$-pair $\{a, b\}$ cannot be extended to a quintuple if $a < b < a +
57\sqrt{a}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Standards for enabling heterogeneous IaaS cloud federations | The technology market is continuing a rapid growth phase in which different
resource providers and Cloud Management Frameworks position themselves to
provide ad-hoc solutions (in terms of management interfaces, information
discovery or billing), trying to differentiate themselves from competitors
but, as a result, remaining incompatible with one another in more complex
scenarios such as federated clouds. Grasping the interoperability problems
present in current infrastructures is therefore essential; we tackle it by
studying how existing and emerging standards could enhance user experience in
the cloud ecosystem. In this paper we review the current open challenges in
Infrastructure as a Service cloud interoperability and federation, and point
to the potential standards that should alleviate these problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Grid-based Approaches for Distributed Data Mining Applications | The data mining field is an important source of large-scale applications and
datasets, which are becoming more and more common. In this paper, we present
grid-based approaches for two basic data mining applications, and a performance
evaluation on an experimental grid environment that provides interesting
monitoring capabilities and configuration tools. We propose a new distributed
clustering approach and a distributed frequent itemset generation method well adapted
for grid environments. Performance evaluation is done using the Condor system
and its workflow manager DAGMan. We also compare this performance analysis to a
simple analytical model to evaluate the overheads related to the workflow
engine and the underlying grid system. This will specifically show that
realistic performance expectations are currently difficult to achieve on the
grid.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transforming acoustic characteristics to deceive playback spoofing countermeasures of speaker verification systems | Automatic speaker verification (ASV) systems use a playback detector to
filter out playback attacks and ensure verification reliability. Since current
playback detection models are almost always trained using genuine and
played-back speech, it may be possible to degrade their performance by
transforming the acoustic characteristics of the played-back speech to be
close to those of the genuine speech. One way to do this is to enhance speech "stolen"
from the target speaker before playback. We tested the effectiveness of a
playback attack using this method by using the speech enhancement generative
adversarial network to transform acoustic characteristics. Experimental results
showed that use of this "enhanced stolen speech" method significantly increases
the equal error rates for the baseline used in the ASVspoof 2017 challenge and
for a light convolutional neural network-based method. The results also showed
that its use degrades the performance of a Gaussian mixture model-universal
background model-based ASV system. This type of attack is thus an urgent
problem needing to be solved.
| 1 | 0 | 0 | 0 | 0 | 0 |
LoopInvGen: A Loop Invariant Generator based on Precondition Inference | We describe the LoopInvGen tool for generating loop invariants that can
provably guarantee correctness of a program with respect to a given
specification. LoopInvGen is an efficient implementation of the inference
technique originally proposed in our earlier work on PIE
(this https URL).
In contrast to existing techniques, LoopInvGen is not restricted to a fixed
set of features -- atomic predicates that are composed together to build
complex loop invariants. Instead, we start with no initial features, and use
program synthesis techniques to grow the set on demand. This not only enables a
less onerous and more expressive approach, but also appears to be significantly
faster than the existing tools over the SyGuS-COMP 2017 benchmarks from the INV
track.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dimensional crossover of effective orbital dynamics in polar distorted 3He-A: Transitions to anti-spacetime | Topologically protected superfluid phases of $^3$He allow one to simulate
many important aspects of relativistic quantum field theories and quantum
gravity in condensed matter. Here we discuss a topological Lifshitz transition
of the effective quantum vacuum in which the determinant of the tetrad field
changes sign through a crossing to a vacuum state with a degenerate fermionic
metric. Such a transition is realized in polar distorted superfluid $^3$He-A in
terms of the effective tetrad fields emerging in the vicinity of the superfluid
gap nodes: the tetrads of the Weyl points in the chiral A-phase of $^3$He and
the degenerate tetrad in the vicinity of a Dirac nodal line in the polar phase
of $^3$He. The continuous phase transition from the $A$-phase to the polar
phase, i.e. in the transition from the Weyl nodes to the Dirac nodal line and
back, allows one to follow the behavior of the fermionic and bosonic effective
actions when the sign of the tetrad determinant changes, and the effective
chiral space-time transforms to anti-chiral "anti-spacetime". This condensed
matter realization demonstrates that while the original fermionic action is
analytic across the transition, the effective action for the orbital degrees of
freedom (pseudo-EM) fields and gravity have non-analytic behavior. In
particular, the action for the pseudo-EM field in the vacuum with Weyl fermions
(A-phase) contains the modulus of the tetrad determinant. In the vacuum with
the degenerate metric (polar phase) the nodal line is effectively a family of
$2+1$d Dirac fermion patches, which leads to a non-analytic $(B^2-E^2)^{3/4}$
QED action in the vicinity of the Dirac line.
| 0 | 1 | 0 | 0 | 0 | 0 |
Low frequency spectral energy distributions of radio pulsars detected with the Murchison Widefield Array | We present low-frequency spectral energy distributions of 60 known radio
pulsars observed with the Murchison Widefield Array (MWA) telescope. We
searched the GaLactic and Extragalactic All-sky MWA (GLEAM) survey images for
200-MHz continuum radio emission at the position of all pulsars in the ATNF
pulsar catalogue. For the 60 confirmed detections we have measured flux
densities in 20 x 8 MHz bands between 72 and 231 MHz. We compare our results to
existing measurements and show that the MWA flux densities are in good
agreement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scalable Spectrum Allocation and User Association in Networks with Many Small Cells | A scalable framework is developed to allocate radio resources across a large
number of densely deployed small cells with given traffic statistics on a slow
timescale. Joint user association and spectrum allocation is first formulated
as a convex optimization problem by dividing the spectrum among all possible
transmission patterns of active access points (APs). To improve scalability
with the number of APs, the problem is reformulated using local patterns of
interfering APs. To maintain global consistency among local patterns,
inter-cluster interaction is characterized as hyper-edges in a hyper-graph with
nodes corresponding to neighborhoods of APs. A scalable solution is obtained by
iteratively solving a convex optimization problem for bandwidth allocation with
reduced complexity and constructing a global spectrum allocation using
hyper-graph coloring. Numerical results demonstrate the proposed solution for a
network with 100 APs and several hundred user equipments. For a given quality
of service (QoS), the proposed scheme can increase the network capacity several
fold compared to assigning each user to the strongest AP with full-spectrum
reuse.
| 1 | 0 | 1 | 0 | 0 | 0 |
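The last step of the pipeline, turning local bandwidth shares into a global spectrum allocation, can be pictured with a greedy coloring of the conflict structure. A toy sketch, assuming the hyper-edges have already been reduced to pairwise conflicts between AP neighborhoods (all names are illustrative):

```python
def greedy_coloring(nodes, conflicts):
    """Assign each AP neighborhood a spectrum block ('color') so that
    conflicting neighborhoods never share one. conflicts: set of frozensets."""
    color = {}
    # color high-conflict neighborhoods first
    for v in sorted(nodes, key=lambda n: -sum(n in e for e in conflicts)):
        taken = {color[u] for e in conflicts if v in e
                 for u in e if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

print(greedy_coloring({"A", "B", "C"},
                      {frozenset({"A", "B"}), frozenset({"B", "C"})}))
```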
On Geometry and Symmetry of Kepler Systems. I | We study the Kepler metrics on Kepler manifolds from the point of view of
Sasakian geometry and Hessian geometry. This establishes a link between the
problem of classical gravity and the modern geometric methods in the study of
AdS/CFT correspondence in string theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
The collisional frequency shift of a trapped-ion optical clock | Collisions with background gas can perturb the transition frequency of
trapped ions in an optical atomic clock. We develop a non-perturbative
framework based on a quantum channel description of the scattering process, and
use it to derive a master equation which leads to a simple analytic expression
for the collisional frequency shift. As a demonstration of our method, we
calculate the frequency shift of the Sr$^+$ optical atomic clock transition due
to elastic collisions with helium.
| 0 | 1 | 0 | 0 | 0 | 0 |
Continuity properties for Born-Jordan operators with symbols in Hörmander classes and modulation spaces | We show that the Weyl symbol of a Born-Jordan operator is in the same class
as the Born-Jordan symbol, when Hörmander symbols and certain types of
modulation spaces are used as symbol classes. We use these properties to carry
over continuity and Schatten-von Neumann properties to the Born-Jordan
calculus.
| 0 | 0 | 1 | 0 | 0 | 0 |
IoT Data Analytics Using Deep Learning | Deep learning is a popular machine learning approach which has achieved a lot
of progress in all traditional machine learning areas. Internet of Things (IoT)
and Smart City deployments are generating large amounts of time-series sensor
data in need of analysis. Applying deep learning to these domains has been an
important topic of research. The Long-Short Term Memory (LSTM) network has been
proven to be well suited for dealing with and predicting important events with
long intervals and delays in the time series. LSTM networks have the ability to
maintain long-term memory. In an LSTM network, a stacked LSTM hidden layer also
makes it possible to learn a high level temporal feature without the need of
any fine tuning and preprocessing which would be required by other techniques.
In this paper, we construct a long short-term memory (LSTM) recurrent neural
network structure and use the normal time-series training set to build the
prediction model. We then use the prediction errors from this model to fit a
Gaussian naive Bayes model that detects whether the original sample is
abnormal. This method is called LSTM-Gauss-NBayes for short. We use three
real-world data sets, each of which involves long-term time dependence,
short-term time dependence, or even very weak time dependence. The experimental
results show that LSTM-Gauss-NBayes is an effective and robust model.
| 1 | 0 | 0 | 0 | 0 | 0 |
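The detection stage of LSTM-Gauss-NBayes can be sketched independently of the LSTM itself: fit a Gaussian to the prediction errors on normal data, then flag samples whose error is improbable under it. A simplified single-feature sketch (the LSTM predictor is stubbed out; with one Gaussian error feature the naive Bayes decision reduces to a threshold test):

```python
import numpy as np

def fit_error_model(errors):
    """Fit a Gaussian to the prediction errors made on normal data."""
    return errors.mean(), errors.std()

def is_anomalous(y_true, y_pred, mu, sigma, z=3.0):
    """Flag a sample whose prediction error is improbable under the
    fitted Gaussian (a z-score test on the error)."""
    return abs((y_true - y_pred) - mu) > z * sigma

# errors of an LSTM predictor on normal data (stubbed here with noise)
train_errors = np.random.normal(0.0, 0.1, size=1000)
mu, sigma = fit_error_model(train_errors)
print(is_anomalous(y_true=1.2, y_pred=0.3, mu=mu, sigma=sigma))  # True
```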
Viscous Dissipation in One-Dimensional Quantum Liquids | We develop a theory of viscous dissipation in one-dimensional
single-component quantum liquids at low temperatures. Such liquids are
characterized by a single viscosity coefficient, the bulk viscosity. We show
that for a generic interaction between the constituent particles this viscosity
diverges in the zero-temperature limit. In the special case of integrable
models, the viscosity is infinite at any temperature, which can be interpreted
as a breakdown of the hydrodynamic description. Our consideration is applicable
to all single-component Galilean-invariant one-dimensional quantum liquids,
regardless of the statistics of the constituent particles and the interaction
strength.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the existence of homoclinic type solutions of inhomogenous Lagrangian systems | We study the existence of homoclinic type solutions for second order
Lagrangian systems of the type $\ddot{q}(t)-q(t)+a(t)\nabla G(q(t))=f(t)$,
where $t\in\mathbb{R}$, $q\in\mathbb{R}^n$, $a\colon\mathbb{R}\to\mathbb{R}$ is
a continuous positive bounded function, $G\colon\mathbb{R}^n\to\mathbb{R}$ is a
$C^1$-smooth potential satisfying the Ambrosetti-Rabinowitz superquadratic
growth condition and $f\colon\mathbb{R}\to\mathbb{R}^n$ is a continuous bounded
square integrable forcing term. A homoclinic type solution is obtained as limit
of $2k$-periodic solutions of an approximative sequence of second order
differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Correcting Two Deletions and Insertions in Racetrack Memory | Racetrack memory is a non-volatile memory engineered to provide both high
density and low latency, but it is subject to synchronization or shift errors.
This paper describes a fast coding solution, in which delimiter bits assist in
identifying the type of shift error, and easily implementable graph-based codes
are used to correct the error, once identified. A code that is able to detect
and correct double shift errors is described in detail.
| 1 | 0 | 0 | 0 | 0 | 0 |
Limits of Yang-Mills α-connections | In the spirit of recent work of Lamm, Malchiodi and Micallef in the setting
of harmonic maps, we identify Yang-Mills connections obtained by approximations
with respect to the Yang-Mills {\alpha}-energy. More specifically, we show that
for the SU(2) Hopf fibration over the four sphere, for sufficiently small
{\alpha} values the SO(4) invariant ADHM instanton is the unique
{\alpha}-critical point which has Yang-Mills {\alpha}-energy lower than a
specific threshold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Online Robust Principal Component Analysis with Change Point Detection | Robust PCA methods are typically batch algorithms which require loading all
observations into memory before processing. This makes them inefficient for
processing big data. In this paper, we develop an efficient online robust
principal component method, namely online moving window robust principal
component analysis (OMWRPCA). Unlike existing algorithms, OMWRPCA can
successfully track not only slowly changing subspace but also abruptly changed
subspace. By embedding hypothesis testing into the algorithm, OMWRPCA can
detect change points of the underlying subspaces. Extensive simulation studies
demonstrate the superior performance of OMWRPCA compared with other
state-of-the-art approaches. We also apply the algorithm to real-time background
subtraction of surveillance video.
| 1 | 0 | 0 | 1 | 0 | 0 |
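A skeleton of the moving-window idea, with a plain truncated SVD standing in for the robust low-rank/sparse decomposition and a crude residual test standing in for the paper's hypothesis test (both stand-ins are our simplifications):

```python
import numpy as np

def omwrpca_sketch(stream, win=50, rank=3):
    """Skeleton of online moving-window robust PCA with change detection."""
    buf, resid = [], []
    for t, x in enumerate(stream):
        buf = (buf + [x])[-win:]                    # sliding window of samples
        M = np.column_stack(buf)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # low-rank estimate
        r = np.linalg.norm(M[:, -1] - L[:, -1])     # residual of newest sample
        resid.append(r)
        hist = np.array(resid[:-1])
        if len(hist) > win and r > hist.mean() + 4 * hist.std():
            print(f"possible subspace change at t={t}")

omwrpca_sketch(iter(np.random.randn(200, 10)), win=40)
```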
Reactive User Behavior and Mobility Models | In this paper, we present a set of simulation models to more realistically
mimic the behaviour of users reading messages. We propose a User Behaviour
Model, where a simulated user reacts to a message by a flexible set of possible
reactions (e.g. ignore, read, like, save, etc.) and a mobility-based reaction
(visit a place, run away from danger, etc.). We describe our models and their
implementation in OMNeT++. We strongly believe that these models will
significantly contribute to the state of the art of realistically simulating
opportunistic networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
How production networks amplify economic growth | Technological improvement is the most important cause of long-term economic
growth, but the factors that drive it are still not fully understood. In
standard growth models technology is treated in the aggregate, and a main goal
has been to understand how growth depends on factors such as knowledge
production. But an economy can also be viewed as a network, in which producers
purchase goods, convert them to new goods, and sell them to households or other
producers. Here we develop a simple theory that shows how the network
properties of an economy can amplify the effects of technological improvements
as they propagate along chains of production. A key property of an industry is
its output multiplier, which can be understood as the average number of
production steps required to make a good. The model predicts that the output
multiplier of an industry predicts future changes in prices, and that the
average output multiplier of a country predicts future economic growth. We test
these predictions using data from the World Input Output Database and find
results in good agreement with the model. The results show how purely
structural properties of an economy, that have nothing to do with innovation or
human creativity, can exert an important influence on long-term growth.
| 0 | 0 | 0 | 0 | 0 | 1 |
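In standard input-output analysis, the output multiplier referred to above is a column sum of the Leontief inverse. A worked numpy example on a toy two-industry economy (illustrative numbers only, not data from the paper):

```python
import numpy as np

# A[i, j]: dollars of input from industry i needed per dollar of output of j
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: I + A + A^2 + ...
multipliers = L.sum(axis=0)        # output multiplier of each industry
print(multipliers)                 # larger value = longer production chains
```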
The Minimal Resolution Conjecture on a general quartic surface in $\mathbb P^3$ | Mustaţă has given a conjecture for the graded Betti numbers in the
minimal free resolution of the ideal of a general set of points on an
irreducible projective algebraic variety. For surfaces in $\mathbb P^3$ this
conjecture has been proven for points on quadric surfaces and on general cubic
surfaces. In the latter case, Gorenstein liaison was the main tool. Here we
prove the conjecture for general quartic surfaces. Gorenstein liaison continues
to be a central tool, but to prove the existence of our links we make use of
certain dimension computations. We also discuss the higher degree case, but now
the dimension count does not force the existence of our links.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exact density functional obtained via the Levy constrained search | A stochastic minimization method for a real-space wavefunction, $\Psi({\bf
r}_{1},{\bf r}_{2}\ldots{\bf r}_{n})$, constrained to a chosen density,
$\rho({\bf r})$, is developed. It enables the explicit calculation of the Levy
constrained search
$F[\rho]=\min_{\Psi\rightarrow\rho}\langle\Psi|\hat{T}+\hat{V}_{ee}|\Psi\rangle$
(Proc. Natl. Acad. Sci. 76 6062 (1979)), that gives the exact functional of
density functional theory. This general method is illustrated in the evaluation
of $F[\rho]$ for two-electron densities in one dimension with a soft-Coulomb
interaction. Additionally, procedures are given to determine the first and
second functional derivatives, $\frac{\delta F}{\delta\rho({\bf r})}$ and
$\frac{\delta^{2}F}{\delta\rho({\bf r})\delta\rho({\bf r}')}$. For a chosen
external potential, $v({\bf r})$, the functional and its derivatives are used
in minimizations only over densities to give the exact energy, $E_{v}$ without
needing to solve the Schrödinger equation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Compile-Time Symbolic Differentiation Using C++ Expression Templates | Template metaprogramming is a popular technique for implementing compile-time
mechanisms for numerical computing. We demonstrate how expression templates can
be used for compile time symbolic differentiation of algebraic expressions in
C++ computer programs. Given a positive integer $N$ and an algebraic function
of multiple variables, the compiler generates executable code for the $N$th
partial derivatives of the function. Compile-time simplification of the
derivative expressions is achieved using recursive templates. A detailed
analysis indicates that current C++ compiler technology is already sufficient
for practical use of our results, and highlights a number of issues where
further improvements may be desirable.
| 1 | 0 | 0 | 0 | 0 | 0 |
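The same idea can be mimicked at run time in any language with operator overloading: build an expression tree and differentiate it recursively. A small Python analogue of the technique (expression trees instead of C++ templates, and without the recursive-template simplification the abstract describes):

```python
class Expr:
    def __add__(self, other): return Add(self, wrap(other))
    def __mul__(self, other): return Mul(self, wrap(other))
    __radd__, __rmul__ = __add__, __mul__

class Const(Expr):
    def __init__(self, v): self.v = v
    def d(self, x): return Const(0)                 # d(c)/dx = 0
    def __repr__(self): return str(self.v)

class Var(Expr):
    def __init__(self, name): self.name = name
    def d(self, x): return Const(1 if x is self else 0)
    def __repr__(self): return self.name

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def d(self, x): return self.a.d(x) + self.b.d(x)   # sum rule
    def __repr__(self): return f"({self.a} + {self.b})"

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def d(self, x):                                     # product rule
        return self.a.d(x) * self.b + self.a * self.b.d(x)
    def __repr__(self): return f"({self.a} * {self.b})"

def wrap(o): return o if isinstance(o, Expr) else Const(o)

x = Var("x")
f = x * x + 3 * x
print(f.d(x))   # prints (((1 * x) + (x * 1)) + ((1 * 3) + (x * 0)))
```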
On Multilingual Training of Neural Dependency Parsers | We show that a recently proposed neural dependency parser can be improved by
joint training on multiple languages from the same family. The parser is
implemented as a deep neural network whose only input is orthographic
representations of words. In order to successfully parse, the network has to
discover how linguistically relevant concepts can be inferred from word
spellings. We analyze the representations of characters and words that are
learned by the network to establish which properties of languages were
accounted for. In particular we show that the parser has approximately learned
to associate Latin characters with their Cyrillic counterparts and that it can
group Polish and Russian words that have a similar grammatical function.
Finally, we evaluate the parser on selected languages from the Universal
Dependencies dataset and show that it is competitive with other recently
proposed state-of-the-art methods, while having a simple structure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Autocommuting probability of a finite group | Let $G$ be a finite group and $\Aut(G)$ the automorphism group of $G$. The
autocommuting probability of $G$, denoted by $\Pr(G, \Aut(G))$, is the
probability that a randomly chosen automorphism of $G$ fixes a randomly chosen
element of $G$. In this paper, we study $\Pr(G, \Aut(G))$ through a
generalization. We obtain a computing formula, several bounds and
characterizations of $G$ through $\Pr(G, \Aut(G))$. We conclude the paper by
showing that the generalized autocommuting probability of $G$ remains unchanged
under autoisoclinism.
| 0 | 0 | 1 | 0 | 0 | 0 |
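Spelled out from the definition quoted above, the quantity in question is (a standard counting identity; the notation is ours):

$$ \Pr(G,\mathrm{Aut}(G)) \;=\; \frac{|\{(g,\alpha)\in G\times \mathrm{Aut}(G) : \alpha(g)=g\}|}{|G|\,|\mathrm{Aut}(G)|} \;=\; \frac{1}{|G|\,|\mathrm{Aut}(G)|}\sum_{\alpha\in \mathrm{Aut}(G)} |\mathrm{Fix}(\alpha)|, $$

where $\mathrm{Fix}(\alpha)$ denotes the set of elements of $G$ fixed by $\alpha$.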
Inside-Out Planet Formation. IV. Pebble Evolution and Planet Formation Timescales | Systems with tightly-packed inner planets (STIPs) are very common. Chatterjee
& Tan proposed Inside-Out Planet Formation (IOPF), an in situ formation theory,
to explain these planets. IOPF involves sequential planet formation from
pebble-rich rings that are fed from the outer disk and trapped at the pressure
maximum associated with the dead zone inner boundary (DZIB). Planet masses are
set by their ability to open a gap and cause the DZIB to retreat outwards. We
present models for the disk density and temperature structures that are
relevant to the conditions of IOPF. For a wide range of DZIB conditions, we
evaluate the gap opening masses of planets in these disks that are expected to
lead to truncation of pebble accretion onto the forming planet. We then
consider the evolution of dust and pebbles in the disk, estimating that pebbles
typically grow to sizes of a few cm during their radial drift from several tens
of AU to the inner, $\lesssim1\:$AU-scale disk. A large fraction of the
accretion flux of solids is expected to be in such pebbles. This allows us to
estimate the timescales for individual planet formation and entire planetary
system formation in the IOPF scenario. We find that to produce realistic STIPs
within reasonable timescales similar to disk lifetimes requires disk accretion
rates of $\sim10^{-9}\:M_\odot\:{\rm yr}^{-1}$ and relatively low viscosity
conditions in the DZIB region, i.e., Shakura-Sunyaev parameter of
$\alpha\sim10^{-4}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
SilhoNet: An RGB Method for 3D Object Pose Estimation and Grasp Planning | Autonomous robot manipulation often involves both estimating the pose of the
object to be manipulated and selecting a viable grasp point. Methods using
RGB-D data have shown great success in solving these problems. However, there
are situations where cost constraints or the working environment may limit the
use of RGB-D sensors. When limited to monocular camera data only, both the
problem of object pose estimation and of grasp point selection are very
challenging. In the past, research has focused on solving these problems
separately. In this work, we introduce a novel method called SilhoNet that
bridges the gap between these two tasks. We use a Convolutional Neural Network
(CNN) pipeline that takes in ROI proposals to simultaneously predict an
intermediate silhouette representation for objects with an associated occlusion
mask. The 3D pose is then regressed from the predicted silhouettes. Grasp
points from a precomputed database are filtered by back-projecting them onto
the occlusion mask to find which points are visible in the scene. We show that
our method achieves better overall performance than the state-of-the-art
PoseCNN network for 3D pose estimation on the YCB-video dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Collapsed Tetragonal Phase Transition in LaRu$_2$P$_2$ | The structural properties of LaRu$_2$P$_2$ under external pressure have been
studied up to 14 GPa, employing high-energy x-ray diffraction in a
diamond-anvil pressure cell. At ambient conditions, LaRu$_2$P$_2$ (I4/mmm) has
a tetragonal structure with a bulk modulus of $B=105(2)$ GPa and exhibits
superconductivity at $T_c= 4.1$ K. With the application of pressure,
LaRu$_2$P$_2$ undergoes a phase transition to a collapsed tetragonal (cT) state
with a bulk modulus of $B=175(5)$ GPa. At the transition, the c-lattice
parameter exhibits a sharp decrease with a concurrent increase of the a-lattice
parameter. The cT phase transition in LaRu$_2$P$_2$ is consistent with a second
order transition, and was found to be temperature dependent, increasing from
$P=3.9(3)$ GPa at 160 K to $P=4.6(3)$ GPa at 300 K. In total, our data are
consistent with the cT transition being near, but slightly above 2 GPa at 5 K.
Finally, we compare the effect of physical and chemical pressure in the
RRu$_2$P$_2$ (R = Y, La-Er, Yb) isostructural series of compounds and find them
to be analogous.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bayesian Nonparametric Unmixing of Hyperspectral Images | Hyperspectral imaging is an important tool in remote sensing, allowing for
accurate analysis of vast areas. Due to a low spatial resolution, a pixel of a
hyperspectral image rarely represents a single material, but rather a mixture
of different spectra. HSU aims at estimating the pure spectra present in the
scene of interest, referred to as endmembers, and their fractions in each
pixel, referred to as abundances. Today, many HSU algorithms have been
proposed, based either on a geometrical or statistical model. While most
methods assume that the number of endmembers present in the scene is known,
there is only little work about estimating this number from the observed data.
In this work, we propose a Bayesian nonparametric framework that jointly
estimates the number of endmembers, the endmembers themselves, and their
abundances, by making use of the Indian Buffet Process as a prior for the
endmembers. Simulation results and experiments on real data demonstrate the
effectiveness of the proposed algorithm, yielding results comparable with
state-of-the-art methods while being able to reliably infer the number of
endmembers. In scenarios with strong noise, where other algorithms provide only
poor results, the proposed approach tends to overestimate the number of
endmembers slightly. The additional endmembers, however, often simply represent
noisy replicas of present endmembers and could easily be merged in a
post-processing step.
| 1 | 0 | 0 | 0 | 0 | 0 |
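The Indian Buffet Process prior used here can be simulated directly via its "restaurant" construction: customer $i$ takes an existing dish $k$ with probability $m_k/i$ and Poisson$(\alpha/i)$ new dishes. A short generic sampler (standard IBP sampling, not the paper's inference code):

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    """Draw a binary feature matrix Z from an Indian Buffet Process.
    Row i: features (here, endmembers) used by customer/pixel i."""
    rng = np.random.default_rng(seed)
    counts = []                                        # takers per dish
    rows = []
    for i in range(1, n_customers + 1):
        row = [rng.random() < c / i for c in counts]   # revisit old dishes
        k_new = rng.poisson(alpha / i)                 # try new dishes
        counts = [c + t for c, t in zip(counts, row)] + [1] * k_new
        rows.append(row + [True] * k_new)
    Z = np.zeros((n_customers, len(counts)), dtype=bool)
    for i, r in enumerate(rows):
        Z[i, :len(r)] = r
    return Z

print(sample_ibp(5, alpha=2.0).astype(int))
```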
Dynamical control of electron-phonon interactions with high-frequency light | This work addresses the one-dimensional problem of Bloch electrons when they
are rapidly driven by homogeneous time-periodic light and linearly coupled to
vibrational modes. Starting from a generic time-periodic electron-phonon
Hamiltonian, we derive a time-independent effective Hamiltonian that describes
the stroboscopic dynamics up to the third order in the high-frequency limit.
This yields nonequilibrium corrections to the electron-phonon coupling that are
controllable dynamically via the driving strength. This shows in particular
that local Holstein interactions in equilibrium are corrected by nonlocal
Peierls interactions out of equilibrium, as well as by phonon-assisted hopping
processes that make the dynamical Wannier-Stark localization of Bloch electrons
impossible. Subsequently, we revisit the Holstein polaron problem out of
equilibrium in terms of effective Green functions, and specify explicitly how
the binding energy and effective mass of the polaron can be controlled
dynamically. These tunable properties are reported within the weak- and
strong-coupling regimes since both can be visited within the same material when
varying the driving strength. This work provides some insight into controllable
microscopic mechanisms that may be involved during the multicycle laser
irradiations of organic molecular crystals in ultrafast pump-probe experiments,
although it should also be suitable for realizations in shaken optical lattices
of ultracold atoms.
| 0 | 1 | 0 | 0 | 0 | 0 |
Controlling the thermoelectric effect by mechanical manipulation of the electron's quantum phase in atomic junctions | The thermoelectric voltage developed across an atomic metal junction (i.e., a
nanostructure in which one or a few atoms connect two metal electrodes) in
response to a temperature difference between the electrodes, results from the
quantum interference of electrons that pass through the junction multiple times
after being scattered by the surrounding defects. Here we report successfully
tuning this quantum interference and thus controlling the magnitude and sign of
the thermoelectric voltage by applying a mechanical force that deforms the
junction. The observed switching of the thermoelectric voltage is reversible
and can be cycled many times. Our ab initio and semi-empirical calculations
elucidate the detailed mechanism by which the quantum interference is tuned. We
show that the applied strain alters the quantum phases of electrons passing
through the narrowest part of the junction and hence modifies the electronic
quantum interference in the device. Tuning the quantum interference causes the
energies of electronic transport resonances to shift, which affects the
thermoelectric voltage. These experimental and theoretical studies reveal that
Au atomic junctions can be made to exhibit both positive and negative
thermoelectric voltages on demand, and demonstrate the importance and
tunability of the quantum interference effect in the atomic-scale metal
nanostructures.
| 0 | 1 | 0 | 0 | 0 | 0 |
The time geography of segregation during working hours | Understanding segregation is essential to develop planning tools for building
more inclusive cities. Theoretically, segregation at the work place has been
described as lower compared to residential segregation given the importance of
skill complementarity among other productive factors shaping the economies of
cities. This paper tackles segregation during working hours from a dynamical
perspective by focusing on the movement of urbanites across the city. In
contrast to measuring residential patterns of segregation, we used mobile phone
data to infer home-work trajectory networks and apply a community detection
algorithm to the example city of Santiago, Chile. We then describe,
qualitatively and quantitatively, the outlined communities in terms of their
socioeconomic composition. We then evaluate segregation for each of these
communities as the probability that a person from a specific community will
interact with a co-worker from the same community. Finally, we compare these
results with simulations where a new work location is set for each real user,
following the empirical probability distributions of home-work distances and
angles of direction for each community. Methodologically, this study shows that
segregation during working hours for Santiago is unexpectedly high for most of
the city with the exception of its central and business district. In fact, the
only community that is not statistically segregated corresponds to the downtown
area of Santiago, described as a zone of encounter and integration across the
city.
| 1 | 0 | 0 | 1 | 0 | 0 |
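The segregation measure described, the probability that a random co-worker comes from one's own community, can be computed directly from a home-work assignment. A toy sketch with a hypothetical data layout of (home community, workplace) records:

```python
from collections import Counter, defaultdict

def same_community_prob(people):
    """people: list of (home_community, workplace) tuples.
    Returns, per community, the probability that a random co-worker of
    one of its members belongs to that same community."""
    by_work = defaultdict(Counter)
    for com, wp in people:
        by_work[wp][com] += 1
    num, den = Counter(), Counter()
    for wp, coms in by_work.items():
        total = sum(coms.values())
        for com, k in coms.items():
            num[com] += k * (k - 1)        # same-community co-worker pairs
            den[com] += k * (total - 1)    # all co-worker pairs
    return {c: num[c] / den[c] for c in den if den[c]}

print(same_community_prob([("A", "w1"), ("A", "w1"), ("B", "w1"), ("B", "w2")]))
# {'A': 0.5, 'B': 0.0}
```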
Improving Protein Gamma-Turn Prediction Using Inception Capsule Networks | Protein gamma-turn prediction is useful in protein function studies and
experimental design. Several methods for gamma-turn prediction have been
developed, but the results were unsatisfactory with Matthew correlation
coefficients (MCC) around 0.2-0.4. One reason for the low prediction accuracy
is the limited capacity of the methods; in particular, the traditional
machine-learning methods like SVM may not extract high-level features well to
distinguish between turn or non-turn. Hence, it is worthwhile exploring new
machine-learning methods for the prediction. A cutting-edge deep neural
network, named Capsule Network (CapsuleNet), provides a new opportunity for
gamma-turn prediction. Even when the number of input samples is relatively
small, the capsules from CapsuleNet are very effective at extracting high-level
features for classification tasks. Here, we propose a deep inception capsule
network for gamma-turn prediction. Its performance on the gamma-turn benchmark
GT320 achieved an MCC of 0.45, which significantly outperformed the previous
best method with an MCC of 0.38. This is the first gamma-turn prediction method
utilizing deep neural networks. Also, to our knowledge, it is the first
published bioinformatics application utilizing a capsule network, which
provides a useful example for the community.
| 0 | 0 | 0 | 0 | 1 | 0 |
Image transformations on locally compact spaces | An image is here defined to be a set which is either open or closed and an
image transformation is structure preserving in the following sense: It
corresponds to an algebra homomorphism for each singly generated algebra. The
results extend parts of results of J.F. Aarnes on quasi-measures, -states,
-homomorphisms, and image transformations from the setting of compact Hausdorff
spaces to that of locally compact Hausdorff spaces.
| 0 | 0 | 1 | 1 | 0 | 0 |
From Half-metal to Semiconductor: Electron-correlation Effects in Zigzag SiC Nanoribbons From First Principles | We performed electronic structure calculations based on the first-principles
many-body theory approach in order to study quasiparticle band gaps, and
optical absorption spectra of hydrogen-passivated zigzag SiC nanoribbons.
Self-energy corrections are included using the GW approximation, and excitonic
effects are included using the Bethe-Salpeter equation. We have systematically
studied nanoribbons that have widths between 0.6 $\text{nm}$ and 2.2
$\text{nm}$. Quasiparticle corrections widened the Kohn-Sham band gaps because
of enhanced interaction effects caused by reduced dimensionality. Zigzag SiC
nanoribbons with widths larger than 1 nm exhibit half-metallicity at the
mean-field level. The self-energy corrections increased band gaps
substantially, thereby transforming the half-metallic zigzag SiC nanoribbons,
to narrow gap spin-polarized semiconductors. Optical absorption spectra of
these nanoribbons get dramatically modified upon inclusion of electron-hole
interactions, and the narrowest ribbon exhibits strongly bound excitons, with
binding energy of 2.1 eV. Thus, the narrowest zigzag SiC nanoribbon has the
potential to be used in optoelectronic devices operating in the IR region of
the spectrum, while the broader ones, exhibiting spin polarization, can be
utilized in spintronic applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-exponential decoherence of radio-frequency resonance rotation of spin in storage rings | Precision experiments, such as the search for electric dipole moments of
charged particles using radiofrequency spin rotators in storage rings, demand
maintaining the exact spin resonance condition for several thousand
seconds. Synchrotron oscillations in the stored beam modulate the spin tune of
off-central particles, moving it off the perfect resonance condition set for
central particles on the reference orbit. Here we report an analytic
description of how synchrotron oscillations lead to non-exponential decoherence
of the radiofrequency resonance driven up-down spin rotations. This
non-exponential decoherence is shown to be accompanied by a nontrivial walk of
the spin phase. We also comment on sensitivity of the decoherence rate to the
harmonics of the radiofreqency spin rotator and a possibility to check
predictions of decoherence-free magic energies.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the fundamental group of semi-Riemannian manifolds with positive curvature operator | This paper presents an investigation of the relation between some positivity
of the curvature and the finiteness of fundamental groups in semi-Riemannian
geometry. We consider semi-Riemannian submersions $\pi : (E, g) \rightarrow (B,
-g_{B})$ under the conditions that $(B, g_{B})$ is Riemannian, the fiber is a
closed Riemannian manifold, and the horizontal distribution is integrable. Then
we prove that, if
the lightlike geodesically complete or timelike geodesically complete
semi-Riemannian manifold $E$ has some positivity of curvature, then the
fundamental group of the fiber is finite. Moreover we construct an example of
semi-Riemannian submersions with some positivity of curvature, non-integrable
horizontal distribution, and the finiteness of the fundamental group of the
fiber.
| 0 | 0 | 1 | 0 | 0 | 0 |
Flow-Sensitive Composition of Thread-Modular Abstract Interpretation | We propose a constraint-based flow-sensitive static analysis for concurrent
programs by iteratively composing thread-modular abstract interpreters via the
use of a system of lightweight constraints. Our method is compositional in that
it first applies sequential abstract interpreters to individual threads and
then composes their results. It is flow-sensitive in that the causality
ordering of interferences (flow of data from global writes to reads) is modeled
by a system of constraints. These interference constraints are lightweight
since they only refer to the execution order of program statements as opposed
to their numerical properties: they can be decided efficiently using an
off-the-shelf Datalog engine. Our new method has the advantage of being more
accurate than existing, flow-insensitive, static analyzers while remaining
scalable and providing the expected soundness and termination guarantees even
for programs with unbounded data. We implemented our method and evaluated it on
a large number of benchmarks, demonstrating its effectiveness at increasing the
accuracy of thread-modular abstract interpretation.
| 1 | 0 | 0 | 0 | 0 | 0 |
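The lightweight constraints can be pictured as Datalog facts over statement order, closed under a transitivity rule such as hb(X,Z) :- hb(X,Y), hb(Y,Z). A naive Python fixpoint illustrating the idea (statement labels are hypothetical; the paper uses an off-the-shelf Datalog engine instead):

```python
from itertools import product

def transitive_closure(edges):
    """Naive Datalog-style fixpoint for: hb(X,Z) :- hb(X,Y), hb(Y,Z)."""
    hb = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(hb), list(hb)):
            if b == c and (a, d) not in hb:
                hb.add((a, d))
                changed = True
    return hb

# program-order and interference facts (hypothetical statement labels)
facts = {("w1_t1", "r1_t2"), ("r1_t2", "w2_t2")}
print(("w1_t1", "w2_t2") in transitive_closure(facts))   # True
```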
Momentum Control of Humanoid Robots with Series Elastic Actuators | Humanoid robots may require a degree of compliance at the joint level for
improving efficiency, shock tolerance, and safe interaction with humans. The
presence of joint elasticity, however, complexifies the design of balancing and
walking controllers. This paper proposes a control framework for extending
momentum based controllers developed for stiff actuators to the case of series
elastic actuators. The key point is to consider the motor velocities as an
intermediate control input, and then apply high-gain control to stabilise the
desired motor velocities achieving momentum control. Simulations carried out on
a model of the robot iCub verify the soundness of the proposed approach.
| 1 | 0 | 1 | 0 | 0 | 0 |
SRN: Side-output Residual Network for Object Symmetry Detection in the Wild | In this paper, we establish a baseline for object symmetry detection in
complex backgrounds by presenting a new benchmark and an end-to-end deep
learning approach, opening up a promising direction for symmetry detection in
the wild. The new benchmark, named Sym-PASCAL, spans challenges including
object diversity, multi-objects, part-invisibility, and various complex
backgrounds that are far beyond those in existing datasets. The proposed
symmetry detection approach, named Side-output Residual Network (SRN),
leverages output Residual Units (RUs) to fit the errors between the object
symmetry groundtruth and the outputs of RUs. By stacking RUs in a
deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales
to ease the problems of fitting complex outputs with limited layers,
suppressing the complex backgrounds, and effectively matching object symmetry
of different scales. Experimental results validate both the benchmark and its
challenging aspects related to real-world images, and the state-of-the-art
performance of our symmetry detection approach. The benchmark and the code for
SRN are publicly available at this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
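One way to read the stacking of Residual Units is that each side output predicts a correction to the symmetry map coming from the deeper stage. A minimal PyTorch sketch of that reading (our interpretation, not the authors' released code; shapes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputRU(nn.Module):
    """One side-output Residual Unit: refine the deeper stage's
    symmetry map with a residual predicted from shallower features."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, feature_map, deeper_map):
        up = F.interpolate(deeper_map, size=feature_map.shape[-2:],
                           mode="bilinear", align_corners=False)
        return up + self.conv(feature_map)   # deep-to-shallow refinement

deep = torch.zeros(1, 1, 8, 8)               # coarse symmetry map
feat = torch.randn(1, 64, 16, 16)            # shallower backbone features
print(SideOutputRU(64)(feat, deep).shape)    # torch.Size([1, 1, 16, 16])
```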
Time-Optimal Trajectories of Generic Control-Affine Systems Have at Worst Iterated Fuller Singularities | We consider in this paper the regularity problem for time-optimal
trajectories of a single-input control-affine system on an n-dimensional
manifold. We prove that, under generic conditions on the drift and the
controlled vector field, any control u associated with an optimal trajectory is
smooth out of a countable set of times. More precisely, there exists an integer
K, only depending on the dimension n, such that the non-smoothness set of u is
made of isolated points, accumulations of isolated points, and so on up to K-th
order iterated accumulations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Wikipedia for Smart Machines and Double Deep Machine Learning | Very important breakthroughs in data centric deep learning algorithms led to
impressive performance in transactional point applications of Artificial
Intelligence (AI) such as Face Recognition, or EKG classification. With all due
appreciation, however, knowledge-blind, data-only machine learning algorithms
have severe limitations for non-transactional AI applications, such as medical
diagnosis beyond the EKG results. Such applications require deeper and broader
knowledge in their problem solving capabilities, e.g. integrating anatomy and
physiology knowledge with EKG results and other patient findings. Following a
review and illustrations of such limitations for several real life AI
applications, we point at ways to overcome them. The proposed Wikipedia for
Smart Machines initiative aims at building repositories of software structures
that represent humanity's science & technology knowledge in various parts of
life; knowledge that we all learn in schools, universities and during our
professional life. Target readers for these repositories are smart machines;
not humans. AI software developers will have these Reusable Knowledge structures
readily available, hence, the proposed name ReKopedia. Big Data is by now a
mature technology, it is time to focus on Big Knowledge. Some will be derived
from data, and some will be obtained from mankind's gigantic repository of knowledge.
Wikipedia for smart machines along with the new Double Deep Learning approach
offer a paradigm for integrating datacentric deep learning algorithms with
algorithms that leverage deep knowledge, e.g. evidential reasoning and
causality reasoning. For illustration, a project is described to produce
ReKopedia knowledge modules for medical diagnosis of about 1,000 disorders.
Data is important, but knowledge (deep, basic, and commonsense) is equally
important.
| 1 | 0 | 0 | 0 | 0 | 0 |
A binary main belt comet | The asteroids are primitive solar system bodies which evolve both
collisionally and through disruptions due to rapid rotation [1]. These
processes can lead to the formation of binary asteroids [2-4] and to the
release of dust [5], both directly and, in some cases, through uncovering
frozen volatiles. In a sub-set of the asteroids called main-belt comets (MBCs),
the sublimation of excavated volatiles causes transient comet-like activity
[6-8]. Torques exerted by sublimation measurably influence the spin rates of
active comets [9] and might lead to the splitting of bilobate comet nuclei
[10]. The kilometer-sized main-belt asteroid 288P (300163) showed activity for
several months around its perihelion in 2011 [11], suspected to be sustained by
the sublimation of water ice [12] and supported by rapid rotation [13], while
at least one component rotates slowly with a period of 16 hours [14]. 288P is
part of a young family of at least 11 asteroids that formed from a ~10km
diameter precursor during a shattering collision 7.5 million years ago [15].
Here we report that 288P is a binary main-belt comet. It is different from the
known asteroid binaries for its combination of wide separation, near-equal
component size, high eccentricity, and comet-like activity. The observations
also provide strong support for sublimation as the driver of activity in 288P
and show that sublimation torques may play a significant role in binary orbit
evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamic Clustering Algorithms via Small-Variance Analysis of Markov Chain Mixture Models | Bayesian nonparametrics are a class of probabilistic models in which the
model size is inferred from data. A recently developed methodology in this
field is small-variance asymptotic analysis, a mathematical technique for
deriving learning algorithms that capture much of the flexibility of Bayesian
nonparametric inference algorithms, but are simpler to implement and less
computationally expensive. Past work on small-variance analysis of Bayesian
nonparametric inference algorithms has exclusively considered batch models
trained on a single, static dataset, which are incapable of capturing time
evolution in the latent structure of the data. This work presents a
small-variance analysis of the maximum a posteriori filtering problem for a
temporally varying mixture model with a Markov dependence structure, which
captures temporally evolving clusters within a dataset. Two clustering
algorithms result from the analysis: D-Means, an iterative clustering algorithm
for linearly separable, spherical clusters; and SD-Means, a spectral clustering
algorithm derived from a kernelized, relaxed version of the clustering problem.
Empirical results from experiments demonstrate the advantages of using D-Means
and SD-Means over contemporary clustering algorithms, in terms of both
computational cost and clustering accuracy.
| 0 | 0 | 0 | 1 | 0 | 0 |
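The iterative algorithm family can be illustrated with the static small-variance rule: assign each point to its nearest centre unless the squared distance exceeds a penalty $\lambda$, in which case the point opens a new cluster. A sketch of this DP-means-style core (D-Means adds temporal transition penalties on top, omitted in this static sketch):

```python
import numpy as np

def dp_means(X, lam, n_iter=25):
    """Small-variance clustering: nearest-centre assignment, with a new
    cluster opened whenever the best squared distance exceeds lam."""
    centers = X[:1].copy()
    z = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            d2 = ((centers - x) ** 2).sum(axis=1)
            k = int(d2.argmin())
            if d2[k] > lam:                      # open a new cluster
                centers = np.vstack([centers, x])
                k = len(centers) - 1
            z[i] = k
        centers = np.vstack([X[z == k].mean(axis=0) if np.any(z == k)
                             else centers[k] for k in range(len(centers))])
    return centers, z

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
centers, z = dp_means(X, lam=9.0)
print(len(centers), "clusters found")
```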