title (string, 7–239 chars) | abstract (string, 7–2.76k chars) | cs (int64, 0–1) | phy (int64, 0–1) | math (int64, 0–1) | stat (int64, 0–1) | quantitative biology (int64, 0–1) | quantitative finance (int64, 0–1)
---|---|---|---|---|---|---|---
Evidence for short-range magnetic order in the nematic phase of FeSe from anisotropic in-plane magnetostriction and susceptibility measurements | The nature of the nematic state in FeSe remains one of the major unsolved
mysteries in Fe-based superconductors. Both spin and orbital physics have been
invoked to explain the origin of this phase. Here we present experimental
evidence for frustrated, short-range magnetic order, as suggested by several
recent theoretical works, in the nematic state of FeSe. We use a combination of
magnetostriction, susceptibility and resistivity measurements to probe the
in-plane anisotropies of the nematic state and its associated fluctuations.
Despite the absence of long-range magnetic order in FeSe, we observe a sizable
in-plane magnetic susceptibility anisotropy, which is responsible for the
field-induced in-plane distortion inferred from magnetostriction measurements.
Further, we demonstrate that all three anisotropies in FeSe are very similar to
those of BaFe2As2, which strongly suggests that the nematic phase in FeSe is
also of magnetic origin.
| 0 | 1 | 0 | 0 | 0 | 0 |
Contextual Parameter Generation for Universal Neural Machine Translation | We propose a simple modification to existing neural machine translation (NMT)
models that enables using a single universal model to translate between
multiple languages while allowing for language-specific parameterization, and
that can also be used for domain adaptation. Our approach requires no changes
to the model architecture of a standard NMT system, but instead introduces a
new component, the contextual parameter generator (CPG), that generates the
parameters of the system (e.g., weights in a neural network). This parameter
generator accepts source and target language embeddings as input, and generates
the parameters for the encoder and the decoder, respectively. The rest of the
model remains unchanged and is shared across all languages. We show how this
simple modification enables the system to use monolingual data for training and
also perform zero-shot translation. We further show that it surpasses
state-of-the-art performance on both the IWSLT-15 and IWSLT-17 datasets and
that the learned language embeddings are able to uncover interesting
relationships between languages.
| 0 | 0 | 0 | 1 | 0 | 0 |
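The parameter-generation idea in the abstract above can be illustrated with a minimal sketch: a single generator, shared across languages, maps a per-language embedding to the weights of the translation model. Everything below (the dimensions, the purely linear form of the generator, and the names `W_gen` and `generate_params`) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the paper).
EMB_DIM, N_PARAMS = 8, 32

# The generator itself is the shared, trainable object; here it is a
# single linear map from a language embedding to model parameters.
W_gen = rng.normal(size=(N_PARAMS, EMB_DIM))

def generate_params(lang_embedding):
    """Produce translation-model parameters from a language embedding."""
    return W_gen @ lang_embedding

# One small embedding per language; only these differ across languages.
embeddings = {lang: rng.normal(size=EMB_DIM) for lang in ("en", "de", "fr")}

theta_en = generate_params(embeddings["en"])
theta_de = generate_params(embeddings["de"])
assert theta_en.shape == (N_PARAMS,)
assert not np.allclose(theta_en, theta_de)  # distinct parameters per language
```

Under this scheme, adding a language amounts to learning one new embedding while the generator stays shared, which is what makes the cross-lingual sharing and zero-shot combinations in the abstract possible.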
Opinion Dynamics with Stubborn Agents | We consider the problem of optimizing the placement of stubborn agents in a
social network in order to maximally impact population opinions.
We assume individuals in a directed social network each have a latent opinion
that evolves over time in response to social media posts by their neighbors.
The individuals randomly communicate noisy versions of their latent opinion to
their neighbors. Each individual updates his opinion using a time-varying
update rule under which he becomes more stubborn over time and less affected by
new posts. The dynamic update rule is a novel component of our model and
reflects realistic behaviors observed in many psychological studies.
We show that in the presence of stubborn agents with immutable opinions and
under fairly general conditions on the stubbornness rate of the individuals,
the opinions converge to an equilibrium determined by a linear system. We give
an interesting electrical network interpretation of the equilibrium. We also
use this equilibrium to present a simple closed form expression for harmonic
influence centrality, which is a function that quantifies how much a node can
affect the mean opinion in a network. We develop a discrete optimization
formulation for the problem of maximally shifting opinions in a network by
targeting nodes with stubborn agents. We show that this is an optimization
problem with a monotone and submodular objective, allowing us to utilize a
greedy algorithm. Finally, we show that a small number of stubborn agents can
non-trivially influence a large population using simulated networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
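Since the abstract states that the placement objective is monotone and submodular, the standard greedy algorithm applies. The sketch below is that generic greedy routine run on a toy coverage objective; the function `f`, the node set, and the budget are hypothetical stand-ins, not the paper's opinion-shift objective.

```python
def greedy_max(ground_set, f, k):
    """Pick k elements one at a time, each maximizing the marginal gain
    of a monotone submodular set function f (classic greedy with the
    Nemhauser-Wolsey-Fisher (1 - 1/e) guarantee)."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for v in ground_set - selected:
            gain = f(selected | {v}) - f(selected)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        selected.add(best)
    return selected

# Toy monotone submodular objective: set coverage.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(coverage[v] for v in S)))
chosen = greedy_max({1, 2, 3}, f, 2)
assert len(chosen) == 2 and f(chosen) == 3
```

In the paper's setting `f` would be the shift in mean equilibrium opinion achieved by placing stubborn agents at the chosen nodes; submodularity is exactly what lets this greedy loop carry a constant-factor guarantee.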
A Community Microgrid Architecture with an Internal Local Market | This work fits in the context of community microgrids, where members of a
community can exchange energy and services among themselves, without going
through the usual channels of the public electricity grid. We introduce and
analyze a framework to operate a community microgrid, and to share the
resulting revenues and costs among its members. A market-oriented pricing of
energy exchanges within the community is obtained by implementing an internal
local market based on the marginal pricing scheme. The market aims at
maximizing the social welfare of the community, thanks to the more efficient
allocation of resources, the reduction of the peak power to be paid, and the
increased amount of reserve, achieved at an aggregate level. A community
microgrid operator, acting as a benevolent planner, redistributes revenues and
costs among the members, in such a way that the solution achieved by each
member within the community is not worse than the solution it would achieve by
acting individually. In this way, each member is incentivized to participate in
the community on a voluntary basis. The overall framework is formulated in the
form of a bilevel model, where the lower level problem clears the market, while
the upper level problem plays the role of the community microgrid operator.
Numerical results obtained on a real test case implemented in Belgium show
significant cost savings on a yearly scale for the community members, as
compared to the case when they act individually.
| 1 | 0 | 0 | 0 | 0 | 1 |
Extended nilHecke algebra and symmetric functions in type B | We formulate a type B extended nilHecke algebra, following the type A
construction of Naisse and Vaz. We describe an action of this algebra on
extended polynomials and describe some results on the structure on the extended
symmetric polynomials. Finally, following Appel, Egilmez, Hogancamp, and Lauda,
we prove a result analogous to a classical theorem of Solomon connecting the
extended symmetric polynomial ring to a ring of usual symmetric polynomials and
their differentials.
| 0 | 0 | 1 | 0 | 0 | 0 |
One-Sided Unsupervised Domain Mapping | In unsupervised domain mapping, the learner is given two unmatched datasets
$A$ and $B$. The goal is to learn a mapping $G_{AB}$ that translates a sample
in $A$ to the analog sample in $B$. Recent approaches have shown that when
learning simultaneously both $G_{AB}$ and the inverse mapping $G_{BA}$,
convincing mappings are obtained. In this work, we present a method of learning
$G_{AB}$ without learning $G_{BA}$. This is done by learning a mapping that
maintains the distance between a pair of samples. Moreover, good mappings are
obtained, even by maintaining the distance between different parts of the same
sample before and after mapping. We present experimental results showing that
the new method not only allows for one-sided mapping learning, but also leads
to better numerical results than the existing circularity-based constraint.
Our entire code is made publicly available at
this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
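The one-sided training signal described above, preserving pairwise distances under the mapping, can be sketched as a loss term. Here plain arrays stand in for features and the mapping's outputs are precomputed; the exact distance measure and weighting used in the paper may differ.

```python
import numpy as np

def pairwise_distance_loss(x_batch, y_batch):
    """Penalty encouraging mapped samples to preserve the pairwise
    distances of the inputs (illustrative stand-in for the paper's loss)."""
    def dists(batch):
        # All pairwise Euclidean distances within a batch.
        diffs = batch[:, None, :] - batch[None, :, :]
        return np.sqrt((diffs ** 2).sum(axis=-1))
    return float(np.abs(dists(x_batch) - dists(y_batch)).mean())

x = np.array([[0.0, 0.0], [3.0, 4.0]])
assert pairwise_distance_loss(x, x) == 0.0       # distances preserved exactly
assert pairwise_distance_loss(x, 2 * x) > 0.0    # uniform scaling is penalized
```

In the one-sided setting, a term of this shape replaces the cycle-consistency constraint, so only $G_{AB}$ needs to be trained.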
Adaptive Neural Networks for Efficient Inference | We present an approach to adaptively utilize deep neural networks in order to
reduce the evaluation time on new examples without loss of accuracy. Rather
than attempting to redesign or approximate existing networks, we propose two
schemes that adaptively utilize networks. We first pose an adaptive network
evaluation scheme, where we learn a system to adaptively choose the components
of a deep network to be evaluated for each example. By allowing examples
correctly classified using early layers of the system to exit, we avoid the
computational time associated with full evaluation of the network. We extend
this to learn a network selection system that adaptively selects the network to
be evaluated for each example. We show that computational time can be
dramatically reduced by exploiting the fact that many examples can be correctly
classified using relatively efficient networks and that complex,
computationally costly networks are only necessary for a small fraction of
examples. We pose a global objective for learning an adaptive early exit or
network selection policy and solve it by reducing the policy learning problem
to a layer-by-layer weighted binary classification problem. Empirically, these
approaches yield dramatic reductions in computational cost, with up to a 2.8x
speedup on state-of-the-art networks from the ImageNet image recognition
challenge with minimal (<1%) loss of top5 accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
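The early-exit scheme described above can be sketched as a cascade: evaluate cheap predictors first and stop as soon as one is confident. The confidence rule used here (max softmax probability against a fixed threshold) and the toy stages are illustrative assumptions; the paper learns the exit policy rather than fixing a threshold.

```python
import numpy as np

def cascade_predict(x, stages, threshold=0.9):
    """Run stages of increasing cost; exit early once a stage's top
    class probability clears the threshold."""
    for depth, stage in enumerate(stages, start=1):
        probs = stage(x)
        if probs.max() >= threshold:
            break  # confident enough: skip all remaining stages
    return int(probs.argmax()), depth

# Toy stages: fixed distributions standing in for real classifiers.
cheap = lambda x: np.array([0.55, 0.45])   # uncertain -> pass deeper
costly = lambda x: np.array([0.05, 0.95])  # confident -> exit here
label, depth_used = cascade_predict(None, [cheap, costly])
assert (label, depth_used) == (1, 2)
```

Examples that the cheap stage already classifies confidently never pay for the costly stage, which is the source of the speedups the abstract reports.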
The phylogenetic effective sample size and jumps | The phylogenetic effective sample size is a parameter that has as its goal
the quantification of the amount of independent signal in a phylogenetically
correlated sample. It was studied for Brownian motion and Ornstein-Uhlenbeck
models of trait evolution. Here, we study this composite parameter when the
trait is allowed to jump at speciation points of the phylogeny. Our numerical
study indicates that there is a non-trivial limit as the effect of jumps grows.
The limit depends on the value of the drift parameter of the Ornstein-Uhlenbeck
process.
| 0 | 0 | 0 | 0 | 1 | 0 |
Bosonic symmetries of the extended fermionic $(2N,2M)$-Toda hierarchy | In this paper, we construct the additional symmetries of the fermionic
$(2N,2M)$-Toda hierarchy, based on a generalization of the $N{=}(1|1)$
supersymmetric two dimensional Toda lattice hierarchy. These additional flows
constitute a $w_{\infty}\times w_{\infty}$ Lie algebra. As a Bosonic reduction
of the $N{=}(1|1)$ supersymmetric two dimensional Toda lattice hierarchy and
the fermionic $(2N,2M)$-Toda hierarchy, we define a new extended fermionic
$(2N,2M)$-Toda hierarchy which admits a Bosonic Block type superconformal
structure.
| 0 | 1 | 1 | 0 | 0 | 0 |
Single-atom-resolved probing of lattice gases in momentum space | Measuring the full distribution of individual particles is of fundamental
importance to characterize many-body quantum systems through correlation
functions at any order. Here we demonstrate the possibility to reconstruct the
momentum-space distribution of three-dimensional interacting lattice gases
atom-by-atom. This is achieved by detecting individual metastable Helium atoms
in the far-field regime of expansion, when released from an optical lattice. We
benchmark our technique with quantum Monte Carlo calculations, demonstrating
the ability to resolve momentum distributions of superfluids occupying $10^5$
lattice sites. It permits a direct measure of the condensed fraction across
phase transitions, as we illustrate on the superfluid-to-normal transition. Our
single-atom-resolved approach opens a new route to investigate interacting
lattice gases through momentum correlations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Provability Logics of Hierarchies | The branch of provability logic investigates the provability-based behavior
of mathematical theories. More precisely, it studies the relation
between a mathematical theory $T$ and a modal logic $L$ via the provability
interpretation which interprets the modality as the provability predicate of
$T$. In this paper we will extend this relation to investigate the
provability-based behavior of a hierarchy of theories. More precisely, using
the modal language with infinitely many modalities,
$\{\Box_n\}_{n=0}^{\infty}$, we will define the hierarchical counterparts of
some of the classical modal theories such as $\mathbf{K4}$, $\mathbf{KD4}$,
$\mathbf{GL}$ and $\mathbf{S4}$. Then we will define their canonical
provability interpretations and prove the corresponding soundness-completeness
theorems.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Difference between Physics and Biology: Logical Branching and Biomolecules | Physical emergence - crystals, rocks, sandpiles, turbulent eddies, planets,
stars - is fundamentally different from biological emergence - amoeba, cells,
mice, humans - even though the latter is based in the former. This paper points
out that an essential difference is that as well as involving physical
causation, causation in biological systems has a logical nature at each level
of the hierarchy of emergence, from the biomolecular level up. The key link
between physics and life enabling this to happen is provided by biomolecules,
such as voltage gated ion channels, which enable branching logic to emerge from
the underlying physics and hence enable logically based cell processes to take
place in general, and in neurons in particular. These molecules can only have
come into being via the contextually dependent processes of natural selection,
which selects them for their biological function. A further major difference is
between life in general and intelligent life. We characterise intelligent
organisms as being engaged in deductive causation, which enables them to
transcend the physical limitations of their bodies through the power of
abstract thought, prediction, and planning. Ultimately this is enabled by the
biomolecules that underlie the propagation of action potentials in neuronal
axons in the brain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spin liquid and infinitesimal-disorder-driven cluster spin glass in the kagome lattice | The interplay between geometric frustration (GF) and bond disorder is studied
in the Ising kagome lattice within a cluster approach. The model considers
antiferromagnetic (AF) short-range couplings and long-range intercluster
disordered interactions. The replica formalism is used to obtain an effective
single cluster model from where the thermodynamics is analyzed by exact
diagonalization. We found that the presence of GF can introduce cluster
freezing at very low levels of disorder. The system exhibits an entropy plateau
followed by a large entropy drop close to the freezing temperature. In this
scenario, a spin-liquid (SL) behavior prevents conventional long-range order,
but an infinitesimal disorder picks out uncompensated cluster states from the
highly degenerate SL regime, strengthening the intercluster disordered coupling
and bringing about the cluster spin-glass state. To summarize, our results suggest
that the SL state combined with low levels of disorder can activate small
clusters, providing hypersensitivity to the freezing process in geometrically
frustrated materials and playing a key role in the glassy stabilization. We
propose that this physical mechanism could be present in several geometrically
frustrated materials. In particular, we discuss our results in connection to
the recent experimental investigations of the Ising kagome compound
Co$_3$Mg(OH)$_6$Cl$_2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
A punishment voting algorithm based on super categories construction for acoustic scene classification | In acoustic scene classification research, an audio segment is usually split
into multiple samples. Majority voting is then utilized to ensemble the results
of the samples. In this paper, we propose a punishment voting algorithm based
on the super categories construction method for acoustic scene classification.
Specifically, we propose a DenseNet-like model as the base classifier. The base
classifier is trained by the CQT spectrograms generated from the raw audio
segments. Taking advantage of the results of the base classifier, we propose a
super categories construction method using spectral clustering. Super
classifiers corresponding to the constructed super categories are further
trained. Finally, the super classifiers are utilized to enhance the majority
voting of the base classifier by punishment voting. Experiments show that
punishment voting clearly improves performance on both the DCASE2017
Development dataset and the LITIS Rouen dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
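The baseline step in the pipeline above, majority voting over the per-sample predictions of one audio segment, is just a plurality count; the sketch below shows that step only. The punishment adjustment by super classifiers is paper-specific and not reproduced here, and the scene labels are made up.

```python
from collections import Counter

def majority_vote(sample_preds):
    """Ensemble per-sample class predictions for one audio segment by
    plurality; ties resolve to the earliest-counted class."""
    return Counter(sample_preds).most_common(1)[0][0]

segment_preds = ["park", "bus", "park", "park", "metro"]  # hypothetical labels
assert majority_vote(segment_preds) == "park"
```

Punishment voting, as the abstract describes it, modifies this tally using the super classifiers' outputs rather than counting every sample's vote equally.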
Strong and Weak Equilibria for Time-Inconsistent Stochastic Control in Continuous Time | A new definition of continuous-time equilibrium controls is introduced. As
opposed to the standard definition, which involves a derivative-type operation,
the new definition parallels how a discrete-time equilibrium is defined, and
allows for unambiguous economic interpretation. The terms "strong equilibria"
and "weak equilibria" are coined for controls under the new and the standard
definitions, respectively. When the state process is a time-homogeneous
continuous-time Markov chain, a careful asymptotic analysis gives complete
characterizations of weak and strong equilibria. Thanks to Kakutani-Fan's
fixed-point theorem, general existence of weak and strong equilibria is also
established, under an additional compactness assumption. Our theoretical results are
applied to a two-state model under non-exponential discounting. In particular,
we demonstrate explicitly that there can be incentive to deviate from a weak
equilibrium, which justifies the need for strong equilibria. Our analysis also
provides new results for the existence and characterization of discrete-time
equilibria under infinite horizon.
| 0 | 0 | 0 | 0 | 0 | 1 |
Trading algorithms with learning in latent alpha models | Alpha signals for statistical arbitrage strategies are often driven by latent
factors. This paper analyses how to optimally trade with latent factors that
cause prices to jump and diffuse. Moreover, we account for the effect of the
trader's actions on quoted prices and the prices they receive from trading.
Under fairly general assumptions, we demonstrate how the trader can learn the
posterior distribution over the latent states, and explicitly solve the latent
optimal trading problem. We provide a verification theorem, and a methodology
for calibrating the model by deriving a variation of the
expectation-maximization algorithm. To illustrate the efficacy of the optimal
strategy, we demonstrate its performance through simulations and compare it to
strategies which ignore learning in the latent factors. We also provide
calibration results for a particular model using Intel Corporation stock as an
example.
| 0 | 0 | 0 | 1 | 0 | 1 |
Temporal Segment Networks for Action Recognition in Videos | Deep convolutional networks have achieved great success for image
recognition. However, for action recognition in videos, their advantage over
traditional methods is not so evident. We present a general and flexible
video-level framework for learning action models in videos. This method, called
temporal segment network (TSN), aims to model long-range temporal structures
with a new segment-based sampling and aggregation module. This unique design
enables our TSN to efficiently learn action models by using the whole action
videos. The learned models could be easily adapted for action recognition in
both trimmed and untrimmed videos with simple average pooling and multi-scale
temporal window integration, respectively. We also study a series of good
practices for the instantiation of TSN framework given limited training
samples. Our approach obtains state-of-the-art performance on four
challenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%),
THUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). Using the proposed RGB
difference for motion models, our method can still achieve competitive accuracy
on UCF101 (91.0%) while running at 340 FPS. Furthermore, based on the temporal
segment networks, we won the video classification track at the ActivityNet
challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and
the proposed good practices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Neural Machine Translation between Herbal Prescriptions and Diseases | The current study applies deep learning to herbalism. Toward this goal, we
acquired the de-identified health insurance reimbursements claimed over the
10-year period from 2004 to 2013 in the National Health Insurance Database of
Taiwan, totaling 340 million reimbursement records. Two
artificial intelligence techniques were applied to the dataset: residual
convolutional neural network multitask classifier and attention-based recurrent
neural network. The former works to translate from herbal prescriptions to
diseases; and the latter from diseases to herbal prescriptions. Analysis of the
classification results indicates that herbal prescriptions are specific to:
anatomy, pathophysiology, sex and age of the patient, and season and year of
the prescription. Further analysis identifies temperature and gross domestic
product as the meteorological and socioeconomic factors that are associated
with herbal prescriptions. Analysis of the neural machine translation results
indicates that the recurrent neural network learned not only syntax but also
semantics of diseases and herbal prescriptions.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Quantum-Proof Non-Malleable Extractor, With Application to Privacy Amplification against Active Quantum Adversaries | In privacy amplification, two mutually trusted parties aim to amplify the
secrecy of an initial shared secret $X$ in order to establish a shared private
key $K$ by exchanging messages over an insecure communication channel. If the
channel is authenticated the task can be solved in a single round of
communication using a strong randomness extractor; choosing a quantum-proof
extractor allows one to establish security against quantum adversaries.
In the case that the channel is not authenticated, Dodis and Wichs (STOC'09)
showed that the problem can be solved in two rounds of communication using a
non-malleable extractor, a stronger pseudo-random construction than a strong
extractor.
We give the first construction of a non-malleable extractor that is secure
against quantum adversaries. The extractor is based on a construction by Li
(FOCS'12), and is able to extract from sources of min-entropy rate larger than
$1/2$. Combining this construction with a quantum-proof variant of the
reduction of Dodis and Wichs, shown by Cohen and Vidick (unpublished), we
obtain the first privacy amplification protocol secure against active quantum
adversaries.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Theory of Reversibility for Erlang | In a reversible language, any forward computation can be undone by a finite
sequence of backward steps. Reversible computing has been studied in the
context of different programming languages and formalisms, where it has been
used for testing and verification, among others. In this paper, we consider a
subset of Erlang, a functional and concurrent programming language based on the
actor model. We present a formal semantics for reversible computation in this
language and prove its main properties, including its causal consistency. We
also build on top of it a rollback operator that can be used to undo the
actions of a process up to a given checkpoint.
| 1 | 0 | 0 | 0 | 0 | 0 |
Convergence and Consistency Analysis for A 3D Invariant-EKF SLAM | In this paper, we investigate the convergence and consistency properties of
an Invariant-Extended Kalman Filter (RI-EKF) based Simultaneous Localization
and Mapping (SLAM) algorithm. Basic convergence properties of this algorithm
are proven. These proofs do not require the restrictive assumption that the
Jacobians of the motion and observation models need to be evaluated at the
ground truth. It is also shown that the output of RI-EKF is invariant under any
stochastic rigid body transformation in contrast to $\mathbb{SO}(3)$ based EKF
SLAM algorithm ($\mathbb{SO}(3)$-EKF) that is only invariant under
deterministic rigid body transformation. Implications of these invariance
properties on the consistency of the estimator are also discussed. Monte Carlo
simulation results demonstrate that RI-EKF outperforms $\mathbb{SO}(3)$-EKF,
Robocentric-EKF and the "First Estimates Jacobian" EKF, for 3D point feature
based SLAM.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optical Characterization of Electro-spun Polymer Nanofiber based Silver Nanotubes | Nanotubes of various kinds have been prepared in the last decade, starting
from the discovery of carbon nanotubes. Recently other types of nanotubes
including metallic (Au), inorganic (TiO2, HfS2, V7O16, CdSe, MoS2), and
polymeric (polyaniline, polyacrylonitrile) have been produced. Herein we
present a novel synthetic procedure leading to a new kind of porous,
high-surface-area nanoparticle nanotubes (NPNTs). This study characterizes the
synthesized silver nanotubes at optical wavelengths. The absorption spectrum of
PAN washed silver nanotubes shows an extended absorption peak at visible
wavelengths ranging from 350 to 700 nm. In addition, the absorption spectrum of
randomly oriented silver nanotubes showed plasmonic behavior, indicating
highly efficient surface-enhanced Raman scattering (SERS) performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal Prediction for Additive Function-on-Function Regression | As with classic statistics, functional regression models are invaluable in
the analysis of functional data. While there are now extensive tools with
accompanying theory available for linear models, there is still a great deal of
work to be done concerning nonlinear models for functional data. In this work
we consider the Additive Function-on-Function Regression model, a type of
nonlinear model that uses an additive relationship between the functional
outcome and functional covariate. We present an estimation methodology built
upon Reproducing Kernel Hilbert Spaces, and establish optimal rates of
convergence for our estimates in terms of prediction error. We also discuss
computational challenges that arise with such complex models, developing a
representer theorem for our estimate as well as a more practical and
computationally efficient approximation. Simulations and an application to
Cumulative Intraday Returns around the 2008 financial crisis are also provided.
| 0 | 0 | 1 | 1 | 0 | 0 |
Star chromatic index of subcubic multigraphs | The star chromatic index of a multigraph $G$, denoted $\chi'_{s}(G)$, is the
minimum number of colors needed to properly color the edges of $G$ such that no
path or cycle of length four is bi-colored. A multigraph $G$ is star
$k$-edge-colorable if $\chi'_{s}(G)\le k$. Dvořák, Mohar and Šámal
[Star chromatic index, J. Graph Theory 72 (2013), 313--326] proved that every
subcubic multigraph is star $7$-edge-colorable. They conjectured in the same
paper that every subcubic multigraph should be star $6$-edge-colorable. In this
paper, we first prove that it is NP-complete to determine whether
$\chi'_s(G)\le3$ for an arbitrary graph $G$. This answers a question of Mohar.
We then establish some structure results on subcubic multigraphs $G$ with
$\delta(G)\le2$ such that $\chi'_s(G)>k$ but $\chi'_s(G-v)\le k$ for any $v\in
V(G)$, where $k\in\{5,6\}$. We finally apply the structure results, along with
a simple discharging method, to prove that every subcubic multigraph $G$ is
star $6$-edge-colorable if $mad(G)<5/2$, and star $5$-edge-colorable if
$mad(G)<24/11$, respectively, where $mad(G)$ is the maximum average degree of a
multigraph $G$. This partially confirms the conjecture of Dvořák, Mohar
and Šámal.
| 0 | 0 | 1 | 0 | 0 | 0 |
The absolutely Koszul property of Veronese subrings and Segre products | Absolutely Koszul algebras are a class of rings over which any finite graded
module has a rational Poincaré series. We provide a criterion to detect
non-absolutely Koszul rings. Combining the criterion with machine computations,
we identify large families of Veronese subrings and Segre products of
polynomial rings which are not absolutely Koszul. In particular, we classify
completely the absolutely Koszul algebras among Segre products of polynomial
rings, at least in characteristic $0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Theory and Algorithms for Revenue Management in Sponsored Search | Online advertising is the main source of revenue for Internet businesses.
Advertisers are typically ranked according to a score that takes into account
their bids and potential click-through rates (eCTR). Generally, the likelihood
that a user clicks on an ad is modeled by optimizing for the click-through
rate rather than for the performance of the auction in which the click-through
rate will be used. This paper attempts to eliminate this disconnection by
proposing loss functions for click modeling that are based on final auction
performance. We first introduce two feasible metrics (AUC^R and SAUC) to
evaluate the online RPM (revenue per mille) directly rather than the CTR. We
then design an explicit ranking function that incorporates a calibration
factor and a price-squashed factor to maximize revenue. Given the power of
deep networks, we also explore an implicit optimal ranking function with a
deep model. Lastly, various experiments on two real-world
datasets are presented. In particular, our proposed methods perform better than
the state-of-the-art methods with regard to the revenue of the platform.
| 0 | 0 | 0 | 1 | 0 | 0 |
Isotropic functions revisited | To a smooth and symmetric function $f$ defined on a symmetric open set
$\Gamma\subset\mathbb{R}^{n}$ and a real $n$-dimensional vector space $V$ we
assign an associated operator function $F$ defined on an open subset
$\Omega\subset\mathcal{L}(V)$ of linear transformations of $V$, such that for
each inner product $g$ on $V$, on the subspace
$\Sigma_{g}(V)\subset\mathcal{L}(V)$ of $g$-selfadjoint operators,
$F_{g}=F_{|\Sigma_{g}(V)}$ is the isotropic function associated to $f$, which
means that $F_{g}(A)=f(\mathrm{EV}(A))$, where $\mathrm{EV}(A)$ denotes the
ordered $n$-tuple of real eigenvalues of $A$. We extend some well known
relations between the derivatives of $f$ and each $F_{g}$ to relations between
$f$ and $F$. By means of an example we show that well known regularity
properties of $F_{g}$ do not carry over to $F$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Condensates in double-well potential with synthetic gauge potentials and vortex seeding | We demonstrate an enhancement in the vortex generation when artificial gauge
potential is introduced to condensates confined in a double well potential.
This is due to the lower energy required to create a vortex in the low
condensate density region within the barrier. Furthermore, we study the
transport of vortices between the two wells, and show that the traverse time
for vortices is longer for the lower height of the well. We also show that the
critical value of synthetic magnetic field to inject vortices into the bulk of
the condensate is lower in the double-well potential compared to the harmonic
confining potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
On strict sub-Gaussianity, optimal proxy variance and symmetry for bounded random variables | We investigate the sub-Gaussian property for almost surely bounded random
variables. While sub-Gaussianity per se is ensured by the bounded support
of such random variables, interesting questions remain open. Among
them: how to characterize the optimal sub-Gaussian proxy variance?
And how to characterize strict sub-Gaussianity, defined by a
proxy variance equal to the (standard) variance? We address these questions by
proposing conditions based on the study of function variations. A particular
focus is given to the relationship between strict sub-Gaussianity and symmetry
of the distribution. In particular, we demonstrate that symmetry is neither
sufficient nor necessary for strict sub-Gaussianity. In contrast, simple
necessary conditions on the one hand, and simple sufficient conditions on the
other hand, for strict sub-Gaussianity are provided. These results are
illustrated via various applications to a number of bounded random variables,
including Bernoulli, beta, binomial, uniform, Kumaraswamy, and triangular
distributions.
| 0 | 0 | 1 | 1 | 0 | 0 |
Fast Distributed Approximation for Max-Cut | Finding a maximum cut is a fundamental task in many computational settings.
Surprisingly, it has been insufficiently studied in the classic distributed
settings, where vertices communicate by synchronously sending messages to their
neighbors according to the underlying graph, known as the $\mathcal{LOCAL}$ or
$\mathcal{CONGEST}$ models. We remedy this by obtaining almost optimal
algorithms for Max-Cut on a wide class of graphs in these models. In
particular, for any $\epsilon > 0$, we develop randomized approximation
algorithms achieving a ratio of $(1-\epsilon)$ to the optimum for Max-Cut on
bipartite graphs in the $\mathcal{CONGEST}$ model, and on general graphs in the
$\mathcal{LOCAL}$ model.
We further present efficient deterministic algorithms, including a
$1/3$-approximation for Max-Dicut in our models, thus improving the best known
(randomized) ratio of $1/4$. Our algorithms make non-trivial use of the greedy
approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing
an unconstrained (non-monotone) submodular function, which may be of
independent interest.
| 1 | 0 | 0 | 0 | 0 | 0 |
A note on Oliver's p-group conjecture | Let $S$ be a $p$-group for an odd prime $p$. Oliver conjectured
that the Thompson subgroup $J(S)$ is always contained in the Oliver subgroup
$\mathfrak{X}(S)$, that is, that
$|J(S)\mathfrak{X}(S):\mathfrak{X}(S)|=1$. Let $\mathfrak{X}_1(S)$ be a
subgroup of $S$ such that $\mathfrak{X}_1(S)/\mathfrak{X}(S)$ is the center of
$S/\mathfrak{X}(S)$. In this short note, we prove that $J(S)\leq
\mathfrak{X}(S)$ if and only if $J(S)\leq \mathfrak{X}_1(S)$.
As an easy application, we prove that
$|J(S)\mathfrak{X}(S):\mathfrak{X}(S)|\neq p$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Frobenius elements in Galois representations with SL_n image | Suppose we have an elliptic curve over a number field whose mod $l$
representation has image isomorphic to $SL_2(\mathbb{F}_l)$. We present a
method to determine Frobenius elements of the associated Galois group which
incorporates the linear structure available. We are able to distinguish
$SL_n(\mathbb{F}_l)$-conjugacy from $GL_n(\mathbb{F}_l)$-conjugacy; this can be
thought of as being analogous to a result which distinguishes $A_n$-conjugacy
from $S_n$-conjugacy when the Galois group is considered as a permutation
group.
| 0 | 0 | 1 | 0 | 0 | 0 |
CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks | The precise modeling of subatomic particle interactions and propagation
through matter is paramount for the advancement of nuclear and particle physics
searches and precision measurements. The most computationally expensive step in
the simulation pipeline of a typical experiment at the Large Hadron Collider
(LHC) is the detailed modeling of the full complexity of physics processes that
govern the motion and evolution of particle showers inside calorimeters. We
introduce \textsc{CaloGAN}, a new fast simulation technique based on generative
adversarial networks (GANs). We apply these neural networks to the modeling of
electromagnetic showers in a longitudinally segmented calorimeter, and achieve
speedup factors comparable to or better than existing full simulation
techniques on CPU ($100\times$-$1000\times$) and even faster on GPU (up to
$\sim10^5\times$). There are still challenges for achieving precision across
the entire phase space, but our solution can reproduce a variety of geometric
shower shape properties of photons, positrons and charged pions. This
represents a significant stepping stone toward a full neural network-based
detector simulation that could save significant computing time and enable many
analyses now and in the future.
| 1 | 0 | 0 | 1 | 0 | 0 |
Measurement of ultrashort optical pulses via time lens imaging in CMOS compatible waveguides | We demonstrate temporal measurements of subpicosecond optical pulses via
time-to-frequency conversion in a 45 cm long CMOS-compatible high-index glass
spiral waveguide. The measurements are based on efficient four-wave mixing in
the C-band, using around 1 W of peak pump power. We achieve a resolution of
400 fs over a time window of 100 ps, representing a time-bandwidth product > 250.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the next-to-minimal weight of projective Reed-Muller codes | In this paper we present several values for the next-to-minimal weights of
projective Reed-Muller codes. We work over $\mathbb{F}_q$ with $q \geq 3$ since
in IEEE-IT 62(11) p. 6300-6303 (2016) we have determined the complete values
for the next-to-minimal weights of binary projective Reed-Muller codes. As in
loc. cit., here we also find examples of codewords with next-to-minimal weight
whose set of zeros is not in a hyperplane arrangement.
| 1 | 0 | 1 | 0 | 0 | 0 |
Scalable Global Grid catalogue for LHC Run3 and beyond | The AliEn (ALICE Environment) file catalogue is a global unique namespace
providing mapping between a UNIX-like logical name structure and the
corresponding physical files distributed over 80 storage elements worldwide.
Powerful search tools and hierarchical metadata information are integral parts
of the system and are used by the Grid jobs as well as local users to store and
access all files on the Grid storage elements. The catalogue has been in
production since 2005 and over the past 11 years has grown to more than 2
billion logical file names. The backend is a set of distributed relational
databases, ensuring smooth growth and fast access. Due to the anticipated fast
future growth, we are looking for ways to enhance the performance and
scalability by simplifying the catalogue schema while keeping the functionality
intact. We investigated different backend solutions, such as distributed key
value stores, as a replacement for the relational database. This contribution
covers the architectural changes in the system, together with the technology
evaluation, benchmark results and conclusions.
| 1 | 0 | 0 | 0 | 0 | 0 |
A versatile UHV transport and measurement chamber for neutron reflectometry under UHV conditions | We report on a versatile mini ultra-high vacuum (UHV) chamber which is
designed to be used on the MAgnetic Reflectometer with high Incident Angle of
the Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum in
Garching, Germany. Samples are prepared in the adjacent thin film laboratory by
molecular beam epitaxy and moved into the compact chamber for transfer without
exposure to ambient air. The chamber is based on DN 40 CF flanges and equipped
with sapphire view ports, a small getter pump, and a wobble stick, which serves
also as sample holder. Here, we present polarized neutron reflectivity
measurements which have been performed on Co thin films at room temperature in
UHV and in ambient air in a magnetic field of 200 mT and in the Q-range of 0.18
\AA$^{-1}$. The results confirm that the Co film is not contaminated during the
polarized neutron reflectivity measurement. This demonstrates that the mini UHV
transport chamber also works as a measurement chamber, which opens
new possibilities for polarized neutron measurements under UHV conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dual gauge field theory of quantum liquid crystals in three dimensions | The dislocation-mediated quantum melting of solids into quantum liquid
crystals is extended from two to three spatial dimensions, using a
generalization of boson-vortex or Abelian-Higgs duality. Dislocations are now
Burgers-vector-valued strings that trace out worldsheets in spacetime while the
phonons of the solid dualize into two-form (Kalb-Ramond) gauge fields. We
propose an effective dual Higgs potential that allows for restoring
translational symmetry in either one, two or three directions, leading to the
quantum analogues of columnar, smectic or nematic liquid crystals. In these
phases, transverse phonons turn into gapped, propagating modes while
compressional stress remains massless. Rotational Goldstone modes emerge
whenever translational symmetry is restored. We also consider electrically
charged matter, and find amongst others that as a hard principle only two out
of the possible three rotational Goldstone modes are observable using
electromagnetic means.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deformation mechanism map of Cu/Nb nanoscale metallic multilayers as a function of temperature and layer thickness | The mechanical properties and deformation mechanisms of Cu/Nb nanoscale
metallic multilayers (NMMs) manufactured by accumulative roll bonding (ARB) are
studied at 25 °C and 400 °C. Cu/Nb NMMs with individual layer thicknesses between 7
and 63 nm were tested by in-situ micropillar compression inside a scanning
electron microscope. Yield strengths, strain-rate sensitivities and activation
volumes were obtained from the pillar compression tests. The deformed
micropillars were examined under scanning and transmission electron microscopy
in order to examine the deformation mechanisms active for different layer
thicknesses and temperatures. The analysis suggests that room temperature
deformation was determined by dislocation glide at larger layer thicknesses and
interface-related mechanisms at the thinner layer thicknesses. The high
temperature compression tests, in contrast, revealed superior thermo-mechanical
stability and strength retention for the NMMs with larger layer thicknesses
with deformation controlled by dislocation glide. A remarkable transition in
deformation mechanism occurred as the layer thickness decreased, to a
deformation response controlled by diffusion processes along the interfaces,
which resulted in temperature-induced softening. A deformation mechanism map,
in terms of layer thickness and temperature, is proposed from the results
obtained in this investigation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Relations between Schramm spaces and generalized Wiener classes | We give necessary and sufficient conditions for the embeddings
$\Lambda\text{BV}^{(p)}\subseteq \Gamma\text{BV}^{(q_n\uparrow q)}$ and
$\Phi\text{BV}\subseteq\text{BV}^{(q_n\uparrow q)}$. As a consequence, a number
of results in the literature, including a fundamental theorem of Perlman and
Waterman, are simultaneously extended.
| 0 | 0 | 1 | 0 | 0 | 0 |
Stability Selection for Structured Variable Selection | In variable or graph selection problems, finding a right-sized model or
controlling the number of false positives is notoriously difficult. Recently, a
meta-algorithm called Stability Selection was proposed that can provide
reliable finite-sample control of the number of false positives. Its benefits
were demonstrated when used in conjunction with the lasso and orthogonal
matching pursuit algorithms.
In this paper, we investigate the applicability of stability selection to
structured selection algorithms: the group lasso and the structured
input-output lasso. We find that using stability selection often increases the
power of both algorithms, but that the presence of complex structure reduces
the reliability of error control under stability selection. We give strategies
for setting tuning parameters to obtain a good model size under stability
selection, and highlight its strengths and weaknesses compared to the competing
methods screen-and-clean and cross-validation. We give guidelines about when to
use which error control method.
| 1 | 0 | 0 | 1 | 0 | 0 |
Vafa-Witten invariants for projective surfaces II: semistable case | We propose a definition of Vafa-Witten invariants counting semistable Higgs
pairs on a polarised surface. We use virtual localisation applied to
Mochizuki/Joyce-Song pairs.
For $K_S\le0$ we expect our definition coincides with an alternative
definition using weighted Euler characteristics. We prove this for $\deg K_S<0$
here, and it is proved for $S$ a K3 surface in \cite{MT}.
For K3 surfaces we calculate the invariants in terms of modular forms which
generalise and prove conjectures of Vafa and Witten.
| 0 | 0 | 1 | 0 | 0 | 0 |
Aggregated knowledge from a small number of debates outperforms the wisdom of large crowds | The aggregation of many independent estimates can outperform the most
accurate individual judgment. This century-old finding, popularly known as the
wisdom of crowds, has been applied to problems ranging from the diagnosis of
cancer to financial forecasting. It is widely believed that social influence
undermines collective wisdom by reducing the diversity of opinions within the
crowd. Here, we show that if a large crowd is structured in small independent
groups, deliberation and social influence within groups improve the crowd's
collective accuracy. We asked a live crowd (N=5180) to respond to
general-knowledge questions (e.g., what is the height of the Eiffel Tower?).
Participants first answered individually, then deliberated and made consensus
decisions in groups of five, and finally provided revised individual estimates.
We found that averaging consensus decisions was substantially more accurate
than aggregating the initial independent opinions. Remarkably, combining as few
as four consensus choices outperformed the wisdom of thousands of individuals.
| 1 | 1 | 0 | 0 | 0 | 0 |
Summertime, and the livin is easy: Winter and summer pseudoseasonal life expectancy in the United States | In temperate climates, mortality is seasonal with a winter-dominant pattern,
due in part to pneumonia and influenza. Cardiac causes, which are the leading
cause of death in the United States, are also winter-seasonal although it is
not clear why. Interactions between circulating respiratory viruses (e.g.,
influenza) and cardiac conditions have been suggested as a cause of
winter-dominant mortality patterns. We propose and implement a way to estimate
an upper bound on mortality attributable to winter-dominant viruses like
influenza. We calculate 'pseudo-seasonal' life expectancy, dividing the year
into two six-month spans, one encompassing winter, the other summer. During the
summer when the circulation of respiratory viruses is drastically reduced, life
expectancy is about one year longer. We also quantify the seasonal mortality
difference in terms of seasonal "equivalent ages" (defined herein) and
proportional hazards. We suggest that even if viruses cause excess winter
cardiac mortality, the population-level mortality reduction of a perfect
influenza vaccine would be much more modest than is often recognized.
| 0 | 0 | 0 | 1 | 0 | 0 |
Solving Horn Clauses on Inductive Data Types Without Induction | We address the problem of verifying the satisfiability of Constrained Horn
Clauses (CHCs) based on theories of inductively defined data structures, such
as lists and trees. We propose a transformation technique whose objective is
the removal of these data structures from CHCs, hence reducing their
satisfiability to a satisfiability problem for CHCs on integers and booleans.
We propose a transformation algorithm and identify a class of clauses where it
always succeeds. We also consider an extension of that algorithm, which
combines clause transformation with reasoning on integer constraints. Via an
experimental evaluation we show that our technique greatly improves the
effectiveness of applying the Z3 solver to CHCs. We also show that our
verification technique based on CHC transformation followed by CHC solving, is
competitive with respect to CHC solvers extended with induction. This paper is
under consideration for acceptance in TPLP.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Power of Non-Determinism in Higher-Order Implicit Complexity | We investigate the power of non-determinism in purely functional programming
languages with higher-order types. Specifically, we consider cons-free programs
of varying data orders, equipped with explicit non-deterministic choice.
Cons-freeness roughly means that data constructors cannot occur in function
bodies and all manipulation of storage space thus has to happen indirectly
using the call stack.
While cons-free programs have previously been used by several authors to
characterise complexity classes, the work on non-deterministic programs has
almost exclusively considered programs of data order 0. Previous work has shown
that adding explicit non-determinism to cons-free programs taking data of order
0 does not increase expressivity; we prove that this - dramatically - is not
the case for higher data orders: adding non-determinism to programs with data
order at least 1 allows for a characterisation of the entire class of
elementary-time decidable sets.
Finally we show how, even with non-deterministic choice, the original
hierarchy of characterisations is restored by imposing different restrictions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Upper-limit on the Advanced Virgo output mode cleaner cavity length noise | The Advanced Virgo detector uses two monolithic optical cavities at its
output port to suppress higher order modes and radio frequency sidebands from
the carrier light used for gravitational wave detection. These two cavities in
series form the output mode cleaner. We present a measured upper limit on the
length noise of these cavities that is consistent with the thermo-refractive
noise prediction of $8 \times 10^{-16}\,\textrm{m/Hz}^{1/2}$ at 15 Hz. The
cavity length is controlled using Peltier cells and piezo-electric actuators to
maintain resonance on the incoming light. A length lock precision of $3.5
\times 10^{-13}\,\textrm{m}$ is achieved. These two results are combined to
demonstrate that the broadband length noise of the output mode cleaner in the
10-60 Hz band is at least a factor 10 below other expected noise sources in the
Advanced Virgo detector design configuration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mobile impurities in integrable models | We use a mobile impurity or depleton model to study elementary excitations in
one-dimensional integrable systems. For Lieb-Liniger and bosonic Yang-Gaudin
models we express two phenomenological parameters characterising renormalised
interactions of mobile impurities with the superfluid background, the number of
depleted particles $N$ and the superfluid phase drop $\pi J$, in terms of the
corresponding Bethe Ansatz solution and demonstrate, in the leading order, the
absence of two-phonon scattering resulting in vanishing rates of inelastic
processes such as viscosity experienced by the mobile impurities
| 0 | 1 | 1 | 0 | 0 | 0 |
Time-dependent variational principle in matrix-product state manifolds: pitfalls and potential | We study the applicability of the time-dependent variational principle in
matrix product state manifolds for the long time description of quantum
interacting systems. By studying integrable and nonintegrable systems for which
the long time dynamics are known we demonstrate that convergence of long time
observables is subtle and needs to be examined carefully. Remarkably, for the
disordered nonintegrable system we consider the long time dynamics are in good
agreement with the rigorously obtained short time behavior and with previously
obtained numerically exact results, suggesting that at least in this case the
apparent convergence of this approach is reliable. Our study indicates that
while great care must be exercised in establishing the convergence of the
method, it may still be asymptotically accurate for a class of disordered
nonintegrable quantum systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Self-supporting Topology Optimization for Additive Manufacturing | The paper presents a topology optimization approach that designs an optimal
structure, called a self-supporting structure, which is ready to be fabricated
via additive manufacturing without the usage of additional support structures.
Such supports in general have to be created during the fabricating process so
that the primary object can be manufactured layer by layer without collapse,
which is very time-consuming and a waste of material.
The proposed approach resolves this problem by formulating the
self-supporting requirements as a novel explicit quadratic continuous
constraint in the topology optimization problem, or specifically, requiring the
number of unsupported elements (in terms of the sum of squares of their
densities) to be zero. Benefiting from such novel formulations, computing
sensitivity of the self-supporting constraint with respect to the design
density is straightforward, which otherwise would require lots of research
efforts in general topology optimization studies. The derived sensitivity for
each element is only linearly dependent on its sole density, which, different
from previous layer-based sensitivities, consequently allows for a parallel
implementation and possible higher convergence rate. In addition, a discrete
convolution operator is also designed to detect the unsupported elements as
involved in each step of optimization iteration, and improves the detection
process 100 times as compared with simply enumerating these elements. The
approach works for cases of general overhang angle, or general domain, and
produces optimized structures, and their associated optimal compliance, very
close to those of the reference structure obtained without considering the
self-supporting constraint, as demonstrated by extensive 2D and 3D benchmark
examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
Reward Shaping via Meta-Learning | Reward shaping is one of the most effective methods to tackle the crucial yet
challenging problem of credit assignment in Reinforcement Learning (RL).
However, designing shaping functions usually requires much expert knowledge and
hand-engineering, and the difficulties are further exacerbated given multiple
similar tasks to solve. In this paper, we consider reward shaping on a
distribution of tasks, and propose a general meta-learning framework to
automatically learn the efficient reward shaping on newly sampled tasks,
assuming only shared state space but not necessarily action space. We first
derive the theoretically optimal reward shaping in terms of credit assignment
in model-free RL. We then propose a value-based meta-learning algorithm to
extract an effective prior over the optimal reward shaping. The prior can be
applied directly to new tasks, or provably adapted to the task-posterior while
solving the task within a few gradient updates. We demonstrate the effectiveness
of our shaping through significantly improved learning efficiency and
interpretable visualizations across various settings, including notably a
successful transfer from DQN to DDPG.
| 1 | 0 | 0 | 1 | 0 | 0 |
Regulous vector bundles | Among recently introduced new notions in real algebraic geometry is that of
regulous functions. Such functions form a foundation for the development of
regulous geometry. Several interesting results on regulous varieties and
regulous sheaves are already available. In this paper, we define and
investigate regulous vector bundles. We establish algebraic and geometric
properties of such vector bundles, and identify them with stratified-algebraic
vector bundles. Furthermore, using new results on curve-rational functions, we
characterize regulous vector bundles among families of vector spaces
parametrized by an affine regulous variety. We also study relationships between
regulous and topological vector bundles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Remarks on the operator-norm convergence of the Trotter product formula | We revisit the operator-norm convergence of the Trotter product formula for a
pair {A,B} of generators of semigroups on a Banach space. Operator-norm
convergence holds true if the dominating operator A generates a holomorphic
contraction semigroup and B is an A-infinitesimally small generator of a
contraction semigroup, in particular, if B is a bounded operator. Inspired by
studies of evolution semigroups it is shown in the present paper that the
operator-norm convergence generally fails even for bounded operators B if A is
not a holomorphic generator. Moreover, it is shown that operator-norm
convergence of the Trotter product formula can be arbitrarily slow.
| 0 | 0 | 1 | 0 | 0 | 0 |
Using data-compressors for statistical analysis of problems on homogeneity testing and classification | Nowadays data compressors are applied to many problems of text analysis, but
many such applications are developed outside of the framework of mathematical
statistics. In this paper we overcome this obstacle and show how several
methods of classical mathematical statistics can be developed based on
applications of the data compressors.
| 1 | 0 | 1 | 1 | 0 | 0 |
Exploring High-Dimensional Structure via Axis-Aligned Decomposition of Linear Projections | Two-dimensional embeddings remain the dominant approach to visualize high
dimensional data. The choice of embeddings ranges from highly non-linear ones,
which can capture complex relationships but are difficult to interpret
quantitatively, to axis-aligned projections, which are easy to interpret but
are limited to bivariate relationships. Linear projections can be considered a
compromise between complexity and interpretability, as they allow explicit axis
labels, yet provide significantly more degrees of freedom compared to
axis-aligned projections. Nevertheless, interpreting the axes directions, which
are linear combinations often with many non-trivial components, remains
difficult. To address this problem we introduce a structure aware decomposition
of (multiple) linear projections into sparse sets of axis aligned projections,
which jointly capture all information of the original linear ones. In
particular, we use tools from Dempster-Shafer theory to formally define how
relevant a given axis-aligned projection is to explain the neighborhood relations
displayed in some linear projection. Furthermore, we introduce a new approach
to discover a diverse set of high quality linear projections and show that in
practice the information of $k$ linear projections is often jointly encoded in
$\sim k$ axis aligned plots. We have integrated these ideas into an interactive
visualization system that allows users to jointly browse both linear
projections and their axis aligned representatives. Using a number of case
studies we show how the resulting plots lead to more intuitive visualizations
and new insight.
| 1 | 0 | 0 | 1 | 0 | 0 |
Exact Recovery with Symmetries for the Doubly-Stochastic Relaxation | Graph matching, or quadratic assignment, is the problem of labeling the
vertices of two graphs so that they are as similar as possible. A common method
for approximately solving the NP-hard graph matching problem is relaxing it to
a convex optimization problem over the set of doubly stochastic (DS) matrices.
Recent analysis has shown that for almost all pairs of isomorphic and
asymmetric graphs, the DS relaxation succeeds in correctly retrieving the
isomorphism between the graphs. Our goal in this paper is to analyze the case
of symmetric isomorphic graphs. This goal is motivated by shape matching
applications where the graphs of interest usually have reflective symmetry.
For symmetric problems the graph matching problem has multiple isomorphisms
and so convex relaxations admit all convex combinations of these isomorphisms
as viable solutions. If the convex relaxation does not admit any additional
superfluous solution we say that it is convex exact. In this case there are
tractable algorithms to retrieve an isomorphism from the convex relaxation.
We show that convex exactness depends strongly on the symmetry group of the
graphs; for a fixed symmetry group $G$, either the DS relaxation will be convex
exact for almost all pairs of isomorphic graphs with symmetry group $G$, or the
DS relaxation will fail for all such pairs. We show that for reflective groups
with at least one full orbit convex exactness holds almost everywhere, and
provide some simple examples of non-reflective symmetry groups for which convex
exactness always fails.
When convex exactness holds, the isomorphisms of the graphs are the extreme
points of the convex solution set. We suggest an efficient algorithm for
retrieving an isomorphism in this case. We also show that the "convex to
concave" projection method will also retrieve an isomorphism in this case.
| 0 | 0 | 1 | 0 | 0 | 0 |
Statistical physics of human cooperation | Extensive cooperation among unrelated individuals is unique to humans, who
often sacrifice personal benefits for the common good and work together to
achieve what they are unable to execute alone. The evolutionary success of our
species is indeed due, to a large degree, to our unparalleled other-regarding
abilities. Yet, a comprehensive understanding of human cooperation remains a
formidable challenge. Recent research in social science indicates that it is
important to focus on the collective behavior that emerges as the result of the
interactions among individuals, groups, and even societies. Non-equilibrium
statistical physics, in particular Monte Carlo methods and the theory of
collective behavior of interacting particles near phase transition points, has
proven to be very valuable for understanding counterintuitive evolutionary
outcomes. By studying models of human cooperation as classical spin models, a
physicist can draw on familiar settings from statistical physics. However,
unlike pairwise interactions among particles that typically govern solid-state
physics systems, interactions among humans often involve group interactions,
and they also involve a larger number of possible states even for the most
simplified description of reality. The complexity of solutions therefore often
surpasses that observed in physical systems. Here we review experimental and
theoretical research that advances our understanding of human cooperation,
focusing on spatial pattern formation, on the spatiotemporal dynamics of
observed solutions, and on self-organization that may either promote or hinder
socially favorable states.
| 1 | 1 | 0 | 0 | 0 | 0 |
Bayesian Hypernetworks | We study Bayesian hypernetworks: a framework for approximate Bayesian
inference in neural networks. A Bayesian hypernetwork $h$ is a neural network
which learns to transform a simple noise distribution, $p(\vec\epsilon) =
\mathcal{N}(\vec 0, \mathbf{I})$, to a distribution $q(\theta) := q(h(\vec\epsilon))$ over the
parameters $\theta$ of another neural network (the "primary network"). We train
$q$ with variational inference, using an invertible $h$ to enable efficient
estimation of the variational lower bound on the posterior $p(\theta \mid \mathcal{D})$ via
sampling. In contrast to most methods for Bayesian deep learning, Bayesian
hypernets can represent a complex multimodal approximate posterior with
correlations between parameters, while enabling cheap i.i.d. sampling of $q(\theta)$.
In practice, Bayesian hypernets can provide a better defense against
adversarial examples than dropout, and also exhibit competitive performance on
a suite of tasks which evaluate model uncertainty, including regularization,
active learning, and anomaly detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
Block Motion Changes in Japan Triggered by the 2011 Great Tohoku Earthquake | Plate motions are governed by equilibrium between basal and edge forces.
Great earthquakes may induce differential static stress changes across tectonic
plates, enabling a new equilibrium state. Here we consider the torque balance
for idealized circular plates and find a simple scalar relationship for changes
in relative plate speed as a function of its size, upper mantle viscosity, and
coseismic stress changes. Applied to Japan, the 2011
$\mathrm{M}_{\mathrm{W}}=9.0$ Tohoku earthquake generated coseismic stresses of
$10^2-10^5$~Pa that could have induced changes in motion of small (radius
$\sim100$~km) crustal blocks within Honshu. Analysis of time-dependent GPS
velocities, with corrections for earthquake cycle effects, reveals that plate
speeds may have changed by up to $\sim3$ mm/yr between $\sim3.75$-year epochs
bracketing this earthquake, consistent with an upper mantle viscosity of $\sim
5\times10^{18}$Pa$\cdot$s, suggesting that great earthquakes may modulate
motions of proximal crustal blocks at frequencies as high as $10^{-8}$~Hz.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning the structure of Bayesian Networks: A quantitative assessment of the effect of different algorithmic schemes | One of the most challenging tasks when adopting Bayesian Networks (BNs) is
that of learning their structure from data. This task is complicated by the
huge search space of possible solutions, and by the fact that the problem is
NP-hard. Hence, full enumeration of all the possible solutions is not always
feasible and approximations are often required. However, to the best of our
knowledge, a quantitative analysis of the performance and characteristics of
the different heuristics to solve this problem has never been done before.
For this reason, in this work, we provide a detailed comparison of many
different state-of-the-art methods for structural learning on simulated data
considering both BNs with discrete and continuous variables, and with different
rates of noise in the data. In particular, we investigate the performance of
different widespread scores and algorithmic approaches proposed for the
inference and the statistical pitfalls within them.
| 1 | 0 | 0 | 1 | 0 | 0 |
Thermal fracturing on comets. Applications to 67P/Churyumov-Gerasimenko | We simulate the stresses induced by temperature changes in a putative hard
layer near the surface of comet 67P/Churyumov--Gerasimenko with a
thermo-viscoelastic model. Such a layer could be formed by the recondensation
or sintering of water ice (and dust grains), as suggested by laboratory
experiments and computer simulations, and would explain the high compressive
strength encountered by experiments on board the Philae lander. Changes in
temperature from seasonal insolation variation penetrate into the comet's
surface to depths controlled by the thermal inertia, causing the material to
expand and contract. Modelling this with a Maxwellian viscoelastic response on
a spherical nucleus, we show that a hard, icy layer with similar properties to
Martian permafrost will experience high stresses: up to tens of MPa, which
exceed its material strength (a few MPa), down to depths of centimetres to a
metre. The stress distribution with latitude is confirmed qualitatively when
taking into account the comet's complex shape but neglecting thermal inertia.
Stress is found to be comparable to the material strength everywhere for
sufficient thermal inertia ($\gtrsim50$ J m$^{-2}$ K$^{-1}$ s$^{-1/2}$) and ice
content ($\gtrsim 45\%$ at the equator). In this case, stresses penetrate to a
typical depth of $\sim0.25$ m, consistent with the detection of metre-scale
thermal contraction crack polygons all over the comet. Thermal fracturing may
be an important erosion process on cometary surfaces which breaks down material
and weakens cliffs.
| 0 | 1 | 0 | 0 | 0 | 0 |
ADN: An Information-Centric Networking Architecture for the Internet of Things | Forwarding data by name has been assumed to be a necessary aspect of an
information-centric redesign of the current Internet architecture that makes
content access, dissemination, and storage more efficient. The Named Data
Networking (NDN) and Content-Centric Networking (CCNx) architectures are the
leading examples of such an approach. However, forwarding data by name incurs
storage and communication complexities that are orders of magnitude larger than
solutions based on forwarding data using addresses. Furthermore, the specific
algorithms used in NDN and CCNx have been shown to have a number of
limitations. The Addressable Data Networking (ADN) architecture is introduced
as an alternative to NDN and CCNx. ADN is particularly attractive for
large-scale deployments of the Internet of Things (IoT), because it requires
far less storage and processing in relaying nodes than NDN. ADN allows things
and data to be denoted by names, just like NDN and CCNx do. However, instead of
replacing the waist of the Internet with named-data forwarding, ADN uses an
address-based forwarding plane and introduces an information plane that
seamlessly maps names to addresses without the involvement of end-user
applications. Simulation results illustrate the order of magnitude savings in
complexity that can be attained with ADN compared to NDN.
| 1 | 0 | 0 | 0 | 0 | 0 |
Urban Delay Tolerant Network Simulator (UDTNSim v0.1) | Delay Tolerant Networking (DTN) is an approach to networking which handles
network disruptions and high delays that may occur in many kinds of
communication networks. The major reasons for high delay include partial
connectivity of networks as can be seen in many types of ad hoc wireless
networks with frequent network partitions, long propagation time as experienced
in inter-planetary and deep space networks, and frequent link disruptions due
to the mobility of nodes as observed in terrestrial wireless network
environments. Experimenting with network architectures, protocols, and mobility
models in such real-world scenarios is difficult due to the complexities
involved in the network environment. Therefore, in this document, we present
the documentation of an Urban Delay Tolerant Network Simulator (UDTNSim)
version 0.1, capable of simulating urban road network environments with DTN
characteristics including mobility models and routing protocols. The mobility
models included in this version of UDTNSim are (i) Stationary Movement, (ii)
Simple Random Movement, (iii) Path Type Based Movement, (iv) Path Memory Based
Movement, (v) Path Type with Restricted Movement, and (vi) Path Type with Wait
Movement. In addition to mobility models, we also provide three routing and
data hand-off protocols: (i) Epidemic Routing, (ii) Superior Only Handoff, and
(iii) Superior Peer Handoff. UDTNSim v0.1 is designed using object-oriented
programming approach in order to provide flexibility in addition of new
features to the DTN environment. UDTNSim v0.1 is distributed as an open source
simulator for the use of the research community.
| 1 | 0 | 0 | 0 | 0 | 0 |
Regularized Ordinal Regression and the ordinalNet R Package | Regularization techniques such as the lasso (Tibshirani 1996) and elastic net
(Zou and Hastie 2005) can be used to improve regression model coefficient
estimation and prediction accuracy, as well as to perform variable selection.
Ordinal regression models are widely used in applications where the use of
regularization could be beneficial; however, these models are not included in
many popular software packages for regularized regression. We propose a
coordinate descent algorithm to fit a broad class of ordinal regression models
with an elastic net penalty. Furthermore, we demonstrate that each model in
this class generalizes to a more flexible form, for instance to accommodate
unordered categorical data. We introduce an elastic net penalty class that
applies to both model forms. Additionally, this penalty can be used to shrink a
non-ordinal model toward its ordinal counterpart. Finally, we introduce the R
package ordinalNet, which implements the algorithm for this model class.
| 0 | 0 | 0 | 1 | 0 | 0 |
Visual Multiple-Object Tracking for Unknown Clutter Rate | In multi-object tracking applications, model parameter tuning is a
prerequisite for reliable performance. In particular, it is difficult to know
statistics of false measurements due to various sensing conditions and changes
in the field of views. In this paper we are interested in designing a
multi-object tracking algorithm that handles unknown false measurement rate.
The recently proposed robust multi-Bernoulli filter is employed for clutter
estimation while generalized labeled multi-Bernoulli filter is considered for
target tracking. Performance evaluation with real videos demonstrates the
effectiveness of the tracking algorithm for real-world scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Integrated $L^{1}$ Convergence Rate of an Isotonic Regression Estimator for Multivariate Observations | We consider a general monotone regression estimation problem where we allow for
independent and dependent regressors. We propose a modification of the
classical isotonic least squares estimator and establish its rate of
convergence for the integrated $L_1$-loss function. The methodology captures
the shape of the data without assuming additivity or a parametric form for the
regression function. Furthermore, the degree of smoothing is chosen
automatically and no auxiliary tuning is required for the theoretical analysis.
Some simulations and two real data illustrations complement the study of the
proposed estimator.
| 0 | 0 | 1 | 1 | 0 | 0 |
Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization | Distributed network optimization has been studied for well over a decade.
However, we still do not have a good idea of how to design schemes that can
simultaneously provide good performance across the dimensions of utility
optimality, convergence speed, and delay. To address these challenges, in this
paper, we propose a new algorithmic framework with all these metrics
approaching optimality. The salient features of our new algorithm are
three-fold: (i) fast convergence: it converges in only $O(\log(1/\epsilon))$
iterations, which is the fastest among all existing algorithms; (ii)
low delay: it guarantees optimal utility with finite queue length; (iii) simple
implementation: the control variables of this algorithm are based on virtual
queues that do not require maintaining per-flow information. The new technique
builds on a kind of inexact Uzawa method in the Alternating Direction Method
of Multipliers, and provides a new theoretical path to prove global and linear
convergence rate of such a method without requiring the full rank assumption of
the constraint matrix.
| 1 | 0 | 1 | 0 | 0 | 0 |
On zeros of polynomials in best $L^p$-approximation and inserting mass points | The purpose of this note is to revive in $L^p$ spaces the original A. Markov
ideas to study monotonicity of zeros of orthogonal polynomials. This allows us
to prove and improve in a simple and unified way our previous result [Electron.
Trans. Numer. Anal., 44 (2015), pp. 271-280] concerning the discrete version of
A. Markov's theorem on monotonicity of zeros.
| 0 | 0 | 1 | 0 | 0 | 0 |
Empirical likelihood inference for partial functional linear regression models based on B spline | In this paper, we apply the empirical likelihood method to inference for the
regression parameters in the partial functional linear regression models based
on B spline. We prove that the empirical log likelihood ratio for the
regression parameters converges in law to a weighted sum of independent
chi-square distributions and run simulations to assess the finite sample
performance of our method.
| 0 | 0 | 0 | 1 | 0 | 0 |
Dropout Inference in Bayesian Neural Networks with Alpha-divergences | To obtain uncertainty estimates with real-world Bayesian deep learning
models, practical inference approximations are needed. Dropout variational
inference (VI) for example has been used for machine vision and medical
applications, but VI can severely underestimate model uncertainty.
Alpha-divergences are alternative divergences to VI's KL objective, which are
able to avoid VI's uncertainty underestimation. But these are hard to use in
practice: existing techniques can only use Gaussian approximating
distributions, and require existing models to be changed radically, thus are of
limited use for practitioners. We propose a re-parametrisation of the
alpha-divergence objectives, deriving a simple inference technique which,
together with dropout, can be easily implemented with existing models by simply
changing the loss of the model. We demonstrate improved uncertainty estimates
and accuracy compared to VI in dropout networks. We study our model's epistemic
uncertainty far away from the data using adversarial images, showing that these
can be distinguished from non-adversarial images by examining our model's
uncertainty.
| 1 | 0 | 0 | 1 | 0 | 0 |
Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering | In this article, we propose two classes of semiparametric mixture regression
models with single-index for model based clustering. Unlike many
semiparametric/nonparametric mixture regression models that can only be applied
to low dimensional predictors, the new semiparametric models can easily
incorporate high dimensional predictors into the nonparametric components. The
proposed models are very general, and many of the recently proposed
semiparametric/nonparametric mixture regression models are indeed special cases
of the new models. Backfitting estimates and the corresponding modified EM
algorithms are proposed to achieve optimal convergence rates for both
parametric and nonparametric parts. We establish the identifiability results of
the proposed two models and investigate the asymptotic properties of the
proposed estimation procedures. Simulation studies are conducted to demonstrate
the finite sample performance of the proposed models. An application of the
new models to NBA data reveals some new findings.
| 0 | 0 | 0 | 1 | 0 | 0 |
Deep learning Approach for Classifying, Detecting and Predicting Photometric Redshifts of Quasars in the Sloan Digital Sky Survey Stripe 82 | We apply a convolutional neural network (CNN) to classify and detect quasars
in the Sloan Digital Sky Survey Stripe 82 and also to predict the photometric
redshifts of quasars. The network takes the variability of objects into account
by converting light curves into images. The width of the images, denoted w,
corresponds to the five magnitudes ugriz, and the height of the images, denoted h,
represents the date of the observation. The CNN provides good results since its
precision is 0.988 for a recall of 0.90, compared to a precision of 0.985 for
the same recall with a random forest classifier. Moreover 175 new quasar
candidates are found with the CNN considering a fixed recall of 0.97. The
combination of probabilities given by the CNN and the random forest improves
performance further, with a precision of 0.99 for a recall of 0.90.
For the redshift predictions, the CNN presents excellent results which are
higher than those obtained with a feature extraction step and different
classifiers (a K-nearest-neighbors, a support vector machine, a random forest
and a Gaussian process classifier). Indeed, the accuracy of the CNN within
$|\Delta z|<0.1$ can reach 78.09%, within $|\Delta z|<0.2$ it reaches 86.15%,
within $|\Delta z|<0.3$ it reaches 91.2%, and the rms value is 0.359. The
performance of the KNN decreases for the three $|\Delta z|$ regions, since the
accuracy within $|\Delta z|<0.1$, $|\Delta z|<0.2$ and $|\Delta z|<0.3$ is
73.72%, 82.46% and 90.09% respectively, and the rms value amounts to 0.395. So
the CNN
successfully reduces the dispersion and the catastrophic redshifts of quasars.
This new method is very promising for the future of big databases like the
Large Synoptic Survey Telescope.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sources of inter-model scatter in TRACMIP, the Tropical Rain belts with an Annual cycle and a Continent Model Intercomparison Project | We analyze the source of inter-model scatter in the surface temperature
response to quadrupling CO2 in two sets of GCM simulations from the Tropical
Rain Belts with an Annual cycle and a Continent Model Intercomparison Project
(TRACMIP; Voigt et al, 2016). TRACMIP provides simulations of idealized
climates that allow for studying the fundamental dynamics of tropical rainfall
and its response to climate change. One configuration is an aquaplanet
atmosphere (i.e., with zonally-symmetric boundary conditions) coupled to a slab
ocean (AquaCTL and Aqua4x). The other includes an equatorial continent
represented by a thin slab ocean with increased surface albedo and decreased
evaporation (LandCTL and Land4x).
| 0 | 1 | 0 | 0 | 0 | 0 |
Sub-clustering in decomposable graphs and size-varying junction trees | This paper proposes a novel representation of decomposable graphs based on
semi-latent tree-dependent bipartite graphs. The novel representation has two
main benefits. First, it enables a form of sub-clustering within maximal
cliques of the graph, adding informational richness to the general use of
decomposable graphs that could be harnessed in applications with behavioural
types of data. Second, it allows for a new node-driven Markov chain Monte Carlo
sampler of decomposable graphs that can easily parallelize and scale. The
proposed sampler also benefits from the computational efficiency of
junction-tree-based samplers of decomposable graphs.
| 1 | 0 | 0 | 1 | 0 | 0 |
The inflation technique solves completely the classical inference problem | The causal inference problem consists in determining whether a probability
distribution over a set of observed variables is compatible with a given causal
structure. In [arXiv:1609.00672], one of us introduced a hierarchy of necessary
linear programming constraints which all the observed distributions compatible
with the considered causal structure must satisfy. In this work, we prove that
the inflation hierarchy is complete, i.e., any distribution of the observed
variables which does not admit a realization within the considered causal
structure will fail one of the inflation tests. More quantitatively, we show
that any distribution of measurable events satisfying the $n^{th}$ inflation
test is $O\left(\frac{1}{\sqrt{n}}\right)$-close in Euclidean norm to a
distribution realizable within the given causal structure. In addition, we show
that the corresponding $n^{th}$-order relaxation of the dual problem consisting
in maximizing a $k^{th}$ degree polynomial on the observed variables is
$O\left(\frac{k^2}{n}\right)$-close to the optimal solution.
| 0 | 0 | 1 | 1 | 0 | 0 |
Magnetic charge injection in spin ice: a new way to fragmentation | The complexity embedded in condensed matter fertilizes the discovery of new
states of matter, enriched by ingredients like frustration. Illustrating
examples in magnetic systems are Kitaev spin liquids, skyrmion phases, or spin
ices. These unconventional ground states support exotic excitations, for
example the magnetic charges in spin ices, also called monopoles. Beyond their
discovery, an important challenge is to be able to control and manipulate them.
Here, we propose a new mechanism to inject monopoles in a spin ice through a
staggered magnetic field. We show theoretically, and demonstrate experimentally
in the Ho$_2$Ir$_2$O$_7$ pyrochlore iridate, that it results in the
stabilization of a monopole crystal, which exhibits magnetic fragmentation. In
this new state of matter, the magnetic moment fragments into an ordered part
and a persistently fluctuating one. Compared to conventional spin ices, the
different nature of the excitations in this fragmented state opens the way to
novel tunable field-induced and dynamical behaviors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Decentralized Task Allocation in Multi-Robot Systems via Bipartite Graph Matching Augmented with Fuzzy Clustering | Robotic systems, working together as a team, are becoming valuable players in
different real-world applications, from disaster response to warehouse
fulfillment services. Centralized solutions for coordinating multi-robot teams
often suffer from poor scalability and vulnerability to communication
disruptions. This paper develops a decentralized multi-agent task allocation
(Dec-MATA) algorithm for multi-robot applications. The task planning problem is
posed as a maximum-weighted matching of a bipartite graph, the solution of
which using the blossom algorithm allows each robot to autonomously identify
the optimal sequence of tasks it should undertake. The graph weights are
determined based on a soft clustering process, which also plays a problem
decomposition role seeking to reduce the complexity of the individual-agents'
task assignment problems. To evaluate the new Dec-MATA algorithm, a series of
case studies (of varying complexity) are performed, with tasks being
distributed randomly over an observable 2D environment. A centralized approach,
based on a state-of-the-art MILP formulation of the multi-Traveling Salesman
problem is used for comparative analysis. While getting within 7-28% of the
optimal cost obtained by the centralized algorithm, the Dec-MATA algorithm is
found to be 1-3 orders of magnitude faster and minimally sensitive to
task-to-robot ratios, unlike the centralized algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Combining Static and Dynamic Features for Multivariate Sequence Classification | Model precision in a classification task is highly dependent on the feature
space that is used to train the model. Moreover, whether the features are
sequential or static will dictate which classification method can be applied as
most machine learning algorithms are designed to deal with either one or the
other type of data. In real-life scenarios, however, it is often the case
that both static and dynamic features are present, or can be extracted from the
data. In this work, we demonstrate how generative models such as Hidden Markov
Models (HMM) and Long Short-Term Memory (LSTM) artificial neural networks can
be used to extract temporal information from the dynamic data. We explore how
the extracted information can be combined with the static features in order to
improve the classification performance. We evaluate the existing techniques and
suggest a hybrid approach, which outperforms other methods on several public
datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods | Our goal is to improve variance reducing stochastic methods through better
control variates. We first propose a modification of SVRG which uses the
Hessian to track gradients over time, rather than to recondition, increasing
the correlation of the control variates and leading to faster theoretical
convergence close to the optimum. We then propose accurate and computationally
efficient approximations to the Hessian, both using a diagonal and a low-rank
matrix. Finally, we demonstrate the effectiveness of our method on a wide range
of problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
On spectral properties of the Neumann-Poincaré operator and plasmonic resonances in 3D elastostatics | We consider plasmon resonances and cloaking for the elastostatic system in
$\mathbb{R}^3$ via the spectral theory of the Neumann-Poincaré operator. We first
derive the full spectral properties of the Neumann-Poincaré operator for the
3D elastostatic system in the spherical geometry. The spectral result is of
significant interest for its own sake, and serves as a highly nontrivial
extension of the corresponding 2D study in [8]. The derivation of the spectral
result in 3D involves much more complicated and subtle calculations and
arguments than that for the 2D case. Then we consider a 3D plasmonic structure
in elastostatics which takes a general core-shell-matrix form with the
metamaterial located in the shell. Using the obtained spectral result, we
provide an accurate characterisation of the anomalous localised resonance and
cloaking associated to such a plasmonic structure.
| 0 | 0 | 1 | 0 | 0 | 0 |
A hybridizable discontinuous Galerkin method for the Navier--Stokes equations with pointwise divergence-free velocity field | We introduce a hybridizable discontinuous Galerkin method for the
incompressible Navier--Stokes equations for which the approximate velocity
field is pointwise divergence-free. The method builds on the method presented
by Labeur and Wells [SIAM J. Sci. Comput., vol. 34 (2012), pp. A889--A913]. We
show that with modifications of the function spaces in the method of Labeur and
Wells it is possible to formulate a simple method with pointwise
divergence-free velocity fields which is momentum conserving, energy stable,
and pressure-robust. Theoretical results are supported by two- and
three-dimensional numerical examples and for different orders of polynomial
approximation.
| 1 | 1 | 0 | 0 | 0 | 0 |
Near Perfect Protein Multi-Label Classification with Deep Neural Networks | Artificial neural networks (ANNs) have gained a well-deserved popularity
among machine learning tools upon their recent successful applications in
image and sound processing and classification problems. ANNs have also been
applied for predicting the family or function of a protein, knowing its residue
sequence. Here we present two new ANNs with multi-label classification ability,
showing impressive accuracy when classifying protein sequences into 698 UniProt
families (AUC=99.99%) and 983 Gene Ontology classes (AUC=99.45%).
| 1 | 0 | 0 | 1 | 0 | 0 |
Comparison of Decoding Strategies for CTC Acoustic Models | Connectionist Temporal Classification (CTC) has recently attracted a lot of
interest as it offers an elegant approach to building acoustic models (AMs) for
speech recognition. The CTC loss function maps an input sequence of observable
feature vectors to an output sequence of symbols. Output symbols are
conditionally independent of each other under CTC loss, so a language model
(LM) can be incorporated conveniently during decoding, retaining the
traditional separation of acoustic and linguistic components in ASR. For fixed
vocabularies, Weighted Finite State Transducers provide a strong baseline for
efficient integration of CTC AMs with n-gram LMs. Character-based neural LMs
provide a straight forward solution for open vocabulary speech recognition and
all-neural models, and can be decoded with beam search. Finally,
sequence-to-sequence models can be used to translate a sequence of individual
sounds into a word string. We compare the performance of these three
approaches, and analyze their error patterns, which provides insightful
guidance for future research and development in this important area.
| 1 | 0 | 0 | 0 | 0 | 0 |
Battery Degradation Maps for Power System Optimization and as a Benchmark Reference | This paper presents a novel method to describe battery degradation. We use
the concept of degradation maps to model the incremental charge capacity loss
as a function of discrete battery control actions and state of charge. The maps
can be scaled to represent any battery system in size and power. Their convex
piece-wise affine representations allow for tractable optimal control
formulations and can be used in power system simulations to incorporate battery
degradation. The map parameters for different battery technologies are
published, making them a useful basis for benchmarking different battery
technologies in case studies.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computer Assisted Localization of a Heart Arrhythmia | We consider the problem of locating a point-source heart arrhythmia using
data from a standard diagnostic procedure, where a reference catheter is placed
in the heart, and arrival times from a second diagnostic catheter are recorded
as the diagnostic catheter moves around within the heart. We model this
situation as a nonconvex feasibility problem, where given a set of arrival
times, we look for a source location that is consistent with the available
data. We develop a new optimization approach and fast algorithm to obtain
online proposals for the next location to suggest to the operator as she
collects data. We validate the procedure using a Monte Carlo simulation based
on patients' electrophysiological data. The proposed procedure robustly and
quickly locates the source of arrhythmias without any prior knowledge of heart
anatomy.
| 0 | 0 | 0 | 1 | 0 | 0 |
Solving for high dimensional committor functions using artificial neural networks | In this note we propose a method based on artificial neural networks to study
the transition between states governed by stochastic processes. In particular,
we aim for numerical schemes for the committor function, the central object of
transition path theory, which satisfies a high-dimensional Fokker-Planck
equation. By working with the variational formulation of such partial
differential equation and parameterizing the committor function in terms of a
neural network, approximations can be obtained via optimizing the neural
network weights using stochastic algorithms. The numerical examples show that
moderate accuracy can be achieved for high-dimensional problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
The possibility of constructing a relativistic space of information states based on the theory of complexity and analogies with physical space-time | The possibility of calculating the conditional and unconditional
complexity of description of information objects in the algorithmic theory of
information is connected with limitations on the set of programming
(description) languages used. The results of calculating the conditional
complexity allow introducing the fundamental information dimensions and the
partial ordering in the set of information objects, and the requirement of
equality of languages allows introducing the vector space. In case of optimum
compression, the "prefix" contains the regular part of the information about
the object, and is analogous to the classical trajectory of a material point in
the physical space, and the "suffix" contains the random part of the
information, the quantity of which is analogous to the physical time in the
intrinsic reference system. Analysis of the mechanism of the "Einstein's clock"
allows representing the result of observation of the material point as a word,
written down in a binary alphabet, thus making the aforesaid analogies more
clear. The kinematics of the information trajectories is described by the
Lorentz's transformations, identically to its physical analog. At the same
time, various languages of description are associated with various reference
systems in physics. In the present paper, the information analog of the
principle of least action is found and the main problems of information
dynamics in the constructed space are formulated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Problems on Matchings and Independent Sets of a Graph | Let $G$ be a finite simple graph. For $X \subset V(G)$, the difference of
$X$, $d(X) := |X| - |N (X)|$ where $N(X)$ is the neighborhood of $X$ and $\max
\, \{d(X):X\subset V(G)\}$ is called the critical difference of $G$. $X$ is
called a critical set if $d(X)$ equals the critical difference and ker$(G)$ is
the intersection of all critical sets. It is known that ker$(G)$ is an
independent (vertex) set of $G$. diadem$(G)$ is the union of all critical
independent sets. An independent set $S$ is an inclusion minimal set with $d(S)
> 0$ if no proper subset of $S$ has positive difference.
A graph $G$ is called König-Egerváry if the sum of its independence
number ($\alpha (G)$) and matching number ($\mu (G)$) equals $|V(G)|$. It is
known that bipartite graphs are König-Egerváry.
In this paper, we study independent sets with positive difference for which
every proper subset has a smaller difference and prove a result conjectured by
Levit and Mandrescu in 2013. The conjecture states that for any graph, the
number of inclusion minimal sets $S$ with $d(S) > 0$ is at least the critical
difference of the graph. We also give a short proof of the inequality
$|$ker$(G)| + |$diadem$(G)| \le 2\alpha (G)$ (proved by Short in 2016).
A characterization of unicyclic non-König-Egerváry graphs is also
presented and a conjecture which states that for such a graph $G$, the critical
difference equals $\alpha (G) - \mu (G)$, is proved.
We also make an observation about ker$(G)$ using the Edmonds-Gallai Structure
Theorem as a concluding remark.
| 1 | 0 | 1 | 0 | 0 | 0 |
Nonsequential double ionization of helium in IR+XUV two-color laser fields II: Collision-excitation ionization process | The collision-ionization mechanism of nonsequential double ionization (NSDI)
process in IR+XUV two-color laser fields [\PRA \textbf{93}, 043417 (2016)] has
been investigated by us recently. Here we extend this work to study the
collision-excitation-ionization (CEI) mechanism of NSDI processes in the
two-color laser fields with different laser conditions. It is found that the
CEI mechanism makes a dominant contribution to the NSDI as the XUV photon
energy is smaller than the ionization threshold of the He$^+$ ion, and the
momentum spectrum shows complex interference patterns and symmetrical
structures. By channel analysis, we find that, when the energy carried by the
recollision electron is not enough to excite the bound electron, the bound
electron absorbs XUV photons during the collision; as a result, both forward
and backward collisions make comparable contributions to the NSDI processes.
However, when the energy carried by the recollision electron is large enough to
excite the bound electron, the bound electron does not absorb any XUV photon
and is excited only by sharing the energy carried by the recollision electron;
hence the forward collision plays a dominant role in the NSDI processes.
Moreover, we find that the interference patterns of the
NSDI spectra can be reconstructed from the spectra of two above-threshold
ionization (ATI) processes, which may in turn be used to analyze the structures
of the two separate ATI spectra through NSDI processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chaos or Order? | What is chaos? Despite several decades of research on this ubiquitous and
fundamental phenomenon, there is as yet no agreed-upon answer to this question.
Recently, it was realized that all stochastic and deterministic differential
equations, describing all natural and engineered dynamical systems, possess a
topological supersymmetry. It was then suggested that its spontaneous breakdown
could be interpreted as the stochastic generalization of deterministic chaos.
This conclusion stems from the fact that such a phenomenon encompasses features
traditionally associated with chaotic dynamics, such as non-integrability,
positive topological entropy, sensitivity to initial conditions, and the
Poincaré-Bendixson theorem. Here, we strengthen and
complete this picture by showing that the hallmarks of set-theoretic chaos --
topological transitivity/mixing and dense periodic orbits -- can also be
attributed to the spontaneous breakdown of topological supersymmetry. We also
demonstrate that these features, which highlight the noisy character of chaotic
dynamics, do not actually admit a stochastic generalization. We therefore
conclude that spontaneous topological symmetry breaking can be considered as
the most general definition of continuous-time dynamical chaos. Contrary to the
common perception and semantics of the word "chaos", this phenomenon should
then be truly interpreted as the low-symmetry, or ordered phase of the
dynamical systems that manifest it. Since the long-range order in this case is
temporal, we then suggest the word "chronotaxis" as a better representation of
this phenomenon.
| 0 | 1 | 1 | 0 | 0 | 0 |
Gaia Data Release 1: The archive visualisation service | Context: The first Gaia data release (DR1) delivered a catalogue of
astrometry and photometry for over a billion astronomical sources. Within the
panoply of methods used for data exploration, visualisation is often the
starting point and even the guiding reference for scientific thought. However,
this is a volume of data that cannot be efficiently explored using traditional
tools, techniques, and habits.
Aims: We aim to provide a global visual exploration service for the Gaia
archive, something that is not possible out of the box for most people. The
service has two main goals. The first is to provide a software platform for
interactive visual exploration of the archive contents, using common personal
computers and mobile devices available to most users. The second aim is to
produce intelligible and appealing visual representations of the enormous
information content of the archive.
Methods: The interactive exploration service follows a client-server design.
The server runs close to the data, at the archive, and is responsible for
hiding as far as possible the complexity and volume of the Gaia data from the
client. This is achieved by serving visual detail on demand. Levels of detail
are pre-computed using data aggregation and subsampling techniques. For DR1,
the client is a web application that provides an interactive multi-panel
visualisation workspace as well as a graphical user interface.
Results: The Gaia archive Visualisation Service offers a web-based
multi-panel interactive visualisation desktop in a browser tab. It currently
provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and
the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative
feature is the creation of ADQL queries from visually defined regions in plots.
[abridged]
| 0 | 1 | 0 | 0 | 0 | 0 |
Monte Carlo methods for massively parallel computers | Applications that require substantial computational resources today cannot
avoid the use of heavily parallel machines. Embracing the opportunities of
parallel computing and especially the possibilities provided by a new
generation of massively parallel accelerator devices such as GPUs, Intel's Xeon
Phi or even FPGAs enables applications and studies that are inaccessible to
serial programs. Here we outline the opportunities and challenges of massively
parallel computing for Monte Carlo simulations in statistical physics, with a
focus on the simulation of systems exhibiting phase transitions and critical
phenomena. This covers a range of canonical ensemble Markov chain techniques as
well as generalized ensembles such as multicanonical simulations and population
annealing. While the examples discussed are for simulations of spin systems,
many of the methods are more general and moderate modifications allow them to
be applied to other lattice and off-lattice problems including polymers and
particle systems. We discuss important algorithmic requirements for such highly
parallel simulations, such as the challenges of random-number generation for
such cases, and outline a number of general design principles for parallel
Monte Carlo codes to perform well.
| 0 | 1 | 0 | 0 | 0 | 0 |
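One of the algorithmic requirements this abstract highlights is random-number generation for highly parallel runs. A minimal stand-alone sketch of the idea (our illustration, not code from the paper; real GPU codes typically use counter-based or skip-ahead generators rather than one Mersenne Twister per walker): each walker seeds its own generator, so streams are decorrelated across walkers and every run is reproducible.

```python
import math
import random

def parallel_metropolis_sketch(n_walkers, n_steps, beta=1.0, base_seed=12345):
    """Illustrative sketch of independent per-walker RNG streams for
    parallel Monte Carlo.  Each walker runs its own 1D Metropolis chain
    targeting exp(-beta * x**2 / 2); returns the final positions."""
    finals = []
    for w in range(n_walkers):
        rng = random.Random(base_seed + w)  # one independent stream per walker
        x = 0.0
        for _ in range(n_steps):
            x_new = x + rng.uniform(-1.0, 1.0)
            # Log of the Metropolis acceptance ratio for the Gaussian target.
            log_ratio = -beta * (x_new ** 2 - x ** 2) / 2.0
            if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
                x = x_new
        finals.append(x)
    return finals
```

Because every stream is derived only from `base_seed` and the walker index, the outer loop can be distributed across threads, devices, or MPI ranks without any shared RNG state.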
A Deep Causal Inference Approach to Measuring the Effects of Forming Group Loans in Online Non-profit Microfinance Platform | Kiva is an online non-profit crowdsourcing microfinance platform that raises
funds for the poor in the third world. The borrowers on Kiva are small business
owners and individuals in urgent need of money. To raise funds as fast as
possible, they have the option to form groups and post loan requests in the
name of their groups. While it is generally believed that group loans pose less
risk for investors than individual loans do, we study whether this is the case
in a philanthropic online marketplace. In particular, we measure the effect of
group loans on funding time while controlling for the loan sizes and other
factors. Because loan descriptions (in the form of texts) play an important
role in lenders' decision process on Kiva, we make use of this information
through deep learning in natural language processing. In this respect, this is
the first paper to use one of the most advanced deep learning techniques to
handle unstructured data in a way that exploits its superior predictive power
to answer causal questions. We find that, on average, forming
group loans speeds up the funding time by about 3.3 days.
| 0 | 0 | 0 | 1 | 0 | 0 |
JADE - A Platform for Research on Cooperation of Physical and Virtual Agents | At the ICS, WUT, a platform for simulating the cooperation of physical and
virtual mobile agents is under development. The paper describes the motivation
for the research, the organization of the platform, the agent model, and the
design principles of the platform. Several experimental simulations are
briefly described.
| 1 | 0 | 0 | 0 | 0 | 0 |
Analytic Expressions for the Inner-Rim Structure of Passively Heated Protoplanetary Disks | We analytically derive the expressions for the structure of the inner region
of protoplanetary disks, based on results from recent hydrodynamical
simulations. The inner part of a disk can be divided into four regions, in
order from the inside: the dust-free region with gas temperature in the
optically thin limit, the optically thin dust halo, the optically thick
condensation front, and the classical optically thick region. We derive the dust-to-gas mass ratio
profile in the dust halo using the fact that partial dust condensation
regulates the temperature to the dust evaporation temperature. Beyond the dust
halo, there is an optically thick condensation front where all the available
silicate gas condenses out. The curvature of the condensation surface is
determined by the condition that the surface temperature must be nearly equal
to the characteristic temperature $\sim 1200{\,\rm K}$. We derive the mid-plane
temperature in the outer two regions using the two-layer approximation with the
additional heating by the condensation front for the outermost region. As a
result, the overall temperature profile is step-like with steep gradients at
the borders between the outer three regions. The borders might act as planet
traps where the inward migration of planets due to gravitational interaction
with the gas disk stops. The temperature at the border between the two
outermost regions coincides with the temperature needed to activate
magnetorotational instability, suggesting that the inner edge of the dead zone
must lie at this border. The radius of the dead-zone inner edge predicted from
our solution is $\sim$ 2-3 times larger than that expected from the classical
optically thick temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
X-ray Emission Spectrum of Liquid Ethanol: Origin of Split Peaks | The X-ray emission spectrum of liquid ethanol was calculated using density
functional theory and a semi-classical approximation to the Kramers-Heisenberg
formula including core-hole-induced dynamics. Our spectrum agrees well with the
experimental spectrum. We found that the intensity ratio between the two peaks
at 526 and 527 eV assigned as 10a' and 3a" depends not only on the hydrogen
bonding network around the target molecule, but also on the intramolecular
conformation. This effect is absent in liquid methanol and demonstrates the
high sensitivity of X-ray emission to molecular structure. The dependence of
spectral features on hydrogen-bonding as well as on dynamical effects following
core-excitation are also discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Activation Maximization Generative Adversarial Nets | Class labels have been empirically shown useful in improving the sample
quality of generative adversarial nets (GANs). In this paper, we mathematically
study the properties of the current variants of GANs that make use of class
label information. With class-aware gradients and a cross-entropy
decomposition, we reveal how class labels and the associated losses influence GAN training.
Based on that, we propose Activation Maximization Generative Adversarial
Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been
conducted to validate our analysis and evaluate the effectiveness of our
solution, where AM-GAN outperforms other strong baselines and achieves
state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we
demonstrate that, with the Inception ImageNet classifier, Inception Score
mainly tracks the diversity of the generator, and there is, however, no
reliable evidence that it can reflect the true sample quality. We thus propose
a new metric, called AM Score, to provide a more accurate estimation of the
sample quality. Our proposed model also outperforms the baseline methods in the
new metric.
| 1 | 0 | 0 | 1 | 0 | 0 |
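The abstract's claim that the Inception Score mainly tracks generator diversity is easy to see from its definition, IS = exp(E_x[KL(p(y|x) || p(y))]). A toy computation (our illustration; `inception_score` is a hypothetical helper that takes classifier outputs directly, rather than running the Inception network) makes this concrete:

```python
import math

def inception_score(probs):
    """Toy Inception Score: IS = exp( mean_x KL( p(y|x) || p(y) ) ),
    where p(y) is the marginal over the sample.  `probs` holds one
    class-probability row per generated sample (each row sums to 1)."""
    n, k = len(probs), len(probs[0])
    marginal = [sum(row[j] for row in probs) / n for j in range(k)]
    total_kl = 0.0
    for row in probs:
        # KL divergence of this sample's prediction from the marginal.
        total_kl += sum(p * math.log(p / q)
                        for p, q in zip(row, marginal) if p > 0)
    return math.exp(total_kl / n)
```

If every sample yields the same prediction (no diversity), the marginal equals each row, every KL term vanishes, and IS = 1; if confident predictions are spread evenly over k classes, IS = k, regardless of how the individual images actually look.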
Job Management and Task Bundling | High Performance Computing is often performed on scarce and shared computing
resources. To ensure computers are used to their full capacity, administrators
often incentivize large workloads that are not possible on smaller systems.
Measurements in Lattice QCD frequently do not scale to machine-size workloads.
By bundling tasks together we can create large jobs suitable for gigantic
partitions. We discuss METAQ and mpi_jm, software developed to dynamically
group computational tasks together, that can intelligently backfill to consume
idle time without substantial changes to users' current workflows or
executables.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scintillation based search for off-pulse radio emission from pulsars | We propose a new method to detect off-pulse (unpulsed and/or continuous)
emission from pulsars, using the intensity modulations associated with
interstellar scintillation. Our technique involves obtaining the dynamic
spectra, separately for on-pulse window and off-pulse region, with time and
frequency resolutions to properly sample the intensity variations due to
diffractive scintillation, and then estimating their mutual correlation as a
measure of off-pulse emission, if any. We describe and illustrate the essential
details of this technique with the help of simulations, as well as real data.
We also discuss advantages of this method over earlier approaches to detect
off-pulse emission. In particular, we point out how certain non-idealities
inherent to measurement set-ups could potentially affect estimations in earlier
approaches, and argue that the present technique is immune to such
non-idealities. We verify both of the above situations with relevant
simulations. We apply this method to observations of PSR B0329+54 at
frequencies of 730 and 810 MHz, made with the Green Bank Telescope, and present upper limits
for the off-pulse intensity at the two frequencies. We expect this technique to
pave the way for extensive investigations of off-pulse emission using even
existing dynamic spectral data on pulsars and, of course, more sensitive
long-duration data from new observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
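The core statistic this abstract proposes is a correlation between the on-pulse and off-pulse dynamic spectra. A minimal stand-in (our sketch, not the paper's pipeline, which also handles resolution matching and non-idealities): the Pearson correlation over the flattened time-frequency grids.

```python
import math

def scintillation_correlation(on_spec, off_spec):
    """Pearson correlation between on-pulse and off-pulse dynamic spectra,
    given as equally shaped 2D grids (time x frequency).  A significant
    positive value would flag off-pulse emission that shares the pulsar's
    diffractive scintillation pattern."""
    x = [v for row in on_spec for v in row]
    y = [v for row in off_spec for v in row]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

The time and frequency resolutions of both grids must properly sample the diffractive scintillation, as the abstract notes; otherwise a genuine correlation can be washed out.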
Machine learning for graph-based representations of three-dimensional discrete fracture networks | Structural and topological information plays a key role in modeling flow and
transport through fractured rock in the subsurface. Discrete fracture network
(DFN) computational suites such as dfnWorks are designed to simulate flow and
transport in such porous media. Flow and transport calculations reveal that a
small backbone of fractures exists, where most flow and transport occurs.
Restricting the flowing fracture network to this backbone provides a
significant reduction in the network's effective size. However, the particle
tracking simulations needed to determine the reduction are computationally
intensive. Such methods may be impractical for large systems or for robust
uncertainty quantification of fracture networks, where thousands of forward
simulations are needed to bound system behavior.
In this paper, we develop an alternative network reduction approach to
characterizing transport in DFNs, by combining graph theoretical and machine
learning methods. We consider a graph representation where nodes signify
fractures and edges denote their intersections. Using random forest and support
vector machines, we rapidly identify a subnetwork that captures the flow
patterns of the full DFN, based primarily on node centrality features in the
graph. Our supervised learning techniques train on particle-tracking backbone
paths found by dfnWorks, but run in negligible time compared to those
simulations. We find that our predictions can reduce the network to
approximately 20% of its original size, while still generating breakthrough
curves consistent with those of the original network.
| 1 | 1 | 0 | 1 | 0 | 0 |
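The graph representation this abstract describes (nodes are fractures, edges their intersections, with node centrality driving the reduction) can be sketched in a few lines. This is only an illustrative stand-in for the paper's method, which uses richer centrality features plus trained random forest and SVM classifiers; here we rank by plain degree centrality and keep a fixed fraction of nodes as the candidate backbone.

```python
def backbone_by_degree(fracture_intersections, keep_fraction=0.2):
    """Toy graph reduction: fractures are nodes, shared intersections are
    edges; rank nodes by degree and keep the top `keep_fraction` as a
    candidate flow backbone.  `fracture_intersections` is a list of
    (fracture_a, fracture_b) pairs."""
    degree = {}
    for u, v in fracture_intersections:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    k = max(1, round(keep_fraction * len(degree)))
    # Sort by descending degree, breaking ties by node id for determinism.
    ranked = sorted(degree, key=lambda n: (-degree[n], n))
    return set(ranked[:k])
```

The 20% default mirrors the reduction the abstract reports, though in the paper that fraction is learned from particle-tracking backbones rather than fixed in advance.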