title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Simulation optimization: A review of algorithms and applications | Simulation Optimization (SO) refers to the optimization of an objective
function subject to constraints, both of which can be evaluated through a
stochastic simulation. To address specific features of a particular
simulation---discrete or continuous decisions, expensive or cheap simulations,
single or multiple outputs, homogeneous or heterogeneous noise---various
algorithms have been proposed in the literature. As one can imagine, there
exist several competing algorithms for each of these classes of problems. This
document emphasizes the difficulties in simulation optimization as compared to
mathematical programming, makes reference to state-of-the-art algorithms in the
field, examines and contrasts the different approaches used, reviews some of
the diverse applications that have been tackled by these methods, and
speculates on future directions in the field.
| 1 | 0 | 1 | 0 | 0 | 0 |
Learning Disentangled Representations with Semi-Supervised Deep Generative Models | Variational autoencoders (VAEs) learn representations of data by jointly
training a probabilistic encoder and decoder network. Typically these models
encode all features of the data into a single variable. Here we are interested
in learning disentangled representations that encode distinct aspects of the
data into separate variables. We propose to learn such representations using
model architectures that generalise from standard VAEs, employing a general
graphical model structure in the encoder and decoder. This allows us to train
partially-specified models that make relatively strong assumptions about a
subset of interpretable variables and rely on the flexibility of neural
networks to learn representations for the remaining variables. We further
define a general objective for semi-supervised learning in this model class,
which can be approximated using an importance sampling procedure. We evaluate
our framework's ability to learn disentangled representations, both by
qualitative exploration of its generative capacity, and quantitative evaluation
of its discriminative ability on a variety of models and datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Meteorites from Phobos and Deimos at Earth? | We examine the conditions under which material from the martian moons Phobos
and Deimos could reach our planet in the form of meteorites. We find that the
necessary ejection speeds from these moons (900 and 600 m/s for Phobos and
Deimos respectively) are much smaller than from Mars' surface (5000 m/s). These
speeds are below typical impact speeds for asteroids and comets (10-40 km/s) at
Mars' orbit, and we conclude that the delivery of meteorites from Phobos and
Deimos to the Earth can occur.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computational Aspects of Optimal Strategic Network Diffusion | The diffusion of information has been widely modeled as stochastic diffusion
processes on networks. Alshamsi et al. (2018) proposed a model of strategic
diffusion in networks of related activities. In this work we investigate the
computational aspects of finding the optimal strategy of strategic diffusion.
We prove that finding an optimal solution to the problem is NP-complete in a
general case. To overcome this computational difficulty, we present an
algorithm to compute an optimal solution based on a dynamic programming
technique. We also show that the problem is fixed-parameter tractable when
parameterized by the product of the treewidth and the maximum degree. We analyze the
possibility of developing an efficient approximation algorithm and show that
two heuristic algorithms proposed so far cannot have better than a logarithmic
approximation guarantee. Finally, we prove that the problem does not admit
better than a logarithmic approximation, unless P=NP.
| 1 | 0 | 0 | 0 | 0 | 0 |
Watermark Signal Detection and Its Application in Image Retrieval | We propose a few fundamental techniques to obtain effective watermark
features of images in the image search index, and utilize the signals in a
commercial search engine to improve the image search quality. We collect a
diverse and large set (about 1M) of images with human labels indicating whether
the image contains a visible watermark. We train a few deep convolutional neural
networks to extract watermark information from the raw images. We also analyze
the images based on their domains to get watermark information from a
domain-based watermark classifier. The deep CNN classifiers we trained can
achieve high accuracy on the watermark data set. We demonstrate that using
these signals in Bing image search ranker, powered by LambdaMART, can
effectively reduce the watermark rate during the online image ranking.
| 1 | 0 | 0 | 0 | 0 | 0 |
Loop-augmented forests and a variant of the Foulkes' conjecture | A loop-augmented forest is a labeled rooted forest with loops on some of its
roots. By exploiting an interplay between nilpotent partial functions and
labeled rooted forests, we investigate the permutation action of the symmetric
group on loop-augmented forests. Furthermore, we describe an extension of the
Foulkes' conjecture and prove a special case. Among other important outcomes of
our analysis are a complete description of the stabilizer subgroup of an
idempotent in the semigroup of partial transformations and a generalization of
the (Knuth-Sagan) hook length formula.
| 0 | 0 | 1 | 0 | 0 | 0 |
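The (Knuth-Sagan) hook length formula cited in this abstract counts, in its classical form, the labelings of a rooted forest that increase away from the roots: n! divided by the product of the subtree sizes. A minimal sketch of that classical version (the paper proves a generalization); the dictionary encoding of the forest is hypothetical:

```python
from math import factorial

def subtree_sizes(children, v, sizes):
    """Recursively record the size of the subtree rooted at v."""
    sizes[v] = 1 + sum(subtree_sizes(children, c, sizes)
                       for c in children.get(v, []))
    return sizes[v]

def increasing_labelings(children, roots):
    """Hook length formula for forests: n! / product of subtree sizes."""
    sizes = {}
    for r in roots:
        subtree_sizes(children, r, sizes)
    n = len(sizes)
    prod = 1
    for h in sizes.values():
        prod *= h
    return factorial(n) // prod

# forest with two trees: 0 -> {1, 2}, 1 -> {3}, plus an isolated root 4
children = {0: [1, 2], 1: [3]}
print(increasing_labelings(children, roots=[0, 4]))  # 5!/(4*2*1*1*1) = 15
```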
Replication issues in syntax-based aspect extraction for opinion mining | Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic-centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these to be important
threats to the validity of research in the field, especially when compared to
other problems in NLP where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
think has a key role in helping researchers to understand the meaning of the
state-of-the-art better and to generate continuous advances.
| 1 | 0 | 0 | 0 | 0 | 0 |
Manifold Adversarial Learning | The recently proposed adversarial training methods show the robustness to
both adversarial and original examples and achieve state-of-the-art results in
supervised and semi-supervised learning. All the existing adversarial training
methods con- sider only how the worst perturbed examples (i.e., adversarial
examples) could affect the model output. Despite their success, we argue that
such setting may be in lack of generalization, since the output space (or label
space) is apparently less informative. In this paper, we propose a novel
method, called Manifold Adver- sarial Training (MAT). MAT manages to build an
adversarial framework based on how the worst perturbation could affect the
distributional manifold rather than the output space. Particularly, a latent
data space with the Gaussian Mixture Model (GMM) will be first derived. On one
hand, MAT tries to perturb the input samples in the way that would rough the
distributional manifold the worst. On the other hand, the deep learning model
is trained trying to promote in the latent space the manifold smoothness,
measured by the variation of Gaussian mixtures (given the local perturbation
around the data point). Importantly, since the latent space is more informative
than the output space, the proposed MAT can learn better a ro- bust and compact
data representation, leading to further performance improvemen- t. The proposed
MAT is important in that it can be considered as a superset of one
recently-proposed discriminative feature learning approach called center loss.
We conducted a series of experiments in both supervised and semi-supervised
learn- ing on three benchmark data sets, showing that the proposed MAT can
achieve remarkable performance, much better than those of the state-of-the-art
adversarial approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
An Extension of Proof Graphs for Disjunctive Parameterised Boolean Equation Systems | A parameterised Boolean equation system (PBES) is a set of equations that
defines sets as the least and/or greatest fixed-points that satisfy the
equations. Such a system can be regarded as a declarative program defining functions
that take a datum and return a Boolean value. The membership problem of PBESs
is the problem of deciding whether a given element is in a defined set,
which corresponds to an execution of the program. This paper introduces reduced
proof graphs and studies a technique to solve the membership problem of PBESs,
which is undecidable in general, by transforming it into a reduced proof graph.
A vertex X(v) in a proof graph represents that the datum v is in the set X,
provided the graph satisfies conditions induced from a given PBES. Proof graphs are,
however, infinite in general. Thus we introduce vertices each of which stands
for a set of vertices of the original graph, which possibly results in a finite
graph. For a subclass of disjunctive PBESs, we clarify some conditions which
reduced proof graphs should satisfy. We also show some examples having no
finite proof graph except for a reduced one. We further propose a reduced
graph exists. We provide a procedure to construct finite reduced dependency
spaces, and show the soundness and completeness of the procedure.
| 1 | 0 | 0 | 0 | 0 | 0 |
A direct measure of free electron gas via the Kinematic Sunyaev-Zel'dovich effect in Fourier-space analysis | We present the measurement of the kinematic Sunyaev-Zel'dovich (kSZ) effect
in Fourier space, rather than in real space. We measure the density-weighted
pairwise kSZ power spectrum, the first use of this promising approach, by
cross-correlating a cleaned Cosmic Microwave Background (CMB) temperature map,
which jointly uses both Planck Release 2 and Wilkinson Microwave Anisotropy
Probe nine-year data, with two galaxy samples, CMASS and LOWZ, derived from
the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12. With the
current data, we constrain the average optical depth $\tau$ multiplied by the
ratio of the Hubble parameter at redshift $z$ and the present day, $E=H/H_0$;
we find $\tau E = (3.95\pm1.62)\times10^{-5}$ for LOWZ and $\tau E =
(1.25\pm1.06)\times10^{-5}$ for CMASS, using the optimal angular radius of an
aperture photometry filter to estimate the CMB temperature distortion associated
with each galaxy. By repeating the pairwise kSZ power analysis for various
aperture radii, we measure the optical depth as a function of aperture radius.
While this analysis yields only evidence for a detection of the kSZ signal, with
${\rm S/N}=2.54$ for LOWZ and $1.24$ for CMASS, the combination of future CMB
and spectroscopic galaxy surveys should enable precision measurements. We
estimate that the combination of CMB-S4 and data from DESI should yield
detections of the kSZ signal with ${\rm S/N}=70-100$, depending on the
resolution of CMB-S4.
| 0 | 1 | 0 | 0 | 0 | 0 |
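The aperture photometry filter mentioned in this abstract is commonly implemented as the mean temperature within a disk minus the mean within an equal-area surrounding annulus, which removes long-wavelength CMB modes. A minimal flat-sky sketch on a pixelized map, with an illustrative Gaussian mock patch (not the Planck/WMAP pipeline):

```python
import numpy as np

def aperture_photometry(cmb_map, cx, cy, theta):
    """Temperature distortion at (cx, cy): mean within a disk of radius theta
    minus the mean of an equal-area annulus (theta to sqrt(2)*theta)."""
    y, x = np.indices(cmb_map.shape)
    r = np.hypot(x - cx, y - cy)
    disk = r <= theta
    ring = (r > theta) & (r <= np.sqrt(2) * theta)
    return cmb_map[disk].mean() - cmb_map[ring].mean()

rng = np.random.default_rng(0)
patch = rng.normal(0.0, 100.0, size=(256, 256))  # mock CMB patch, microkelvin
print(aperture_photometry(patch, cx=128, cy=128, theta=10))
```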
Scalable Structure Learning for Probabilistic Soft Logic | Statistical relational frameworks such as Markov logic networks and
probabilistic soft logic (PSL) encode model structure with weighted first-order
logical clauses. Learning these clauses from data is referred to as structure
learning. Structure learning alleviates the manual cost of specifying models.
However, this benefit comes with high computational costs; structure learning
typically requires an expensive search over the space of clauses which involves
repeated optimization of clause weights. In this paper, we propose the first
two approaches to structure learning for PSL. We introduce a greedy
search-based algorithm and a novel optimization method that trade off
scalability and approximations to the structure learning problem in varying
ways. The highly scalable optimization method combines data-driven generation
of clauses with a piecewise pseudolikelihood (PPLL) objective that learns model
structure by optimizing clause weights only once. We compare both methods
across five real-world tasks, showing that PPLL achieves an order of magnitude
runtime speedup and AUC gains up to 15% over greedy search.
| 0 | 0 | 0 | 1 | 0 | 0 |
A mode-coupling theory analysis of the rotation driven translational motion of aqueous polyatomic ions | In contrast to simple monatomic alkali and halide ions, complex polyatomic
ions like nitrate, acetate, nitrite, chlorate, etc., have not been studied in any
great detail. Experiments have shown that the diffusion of polyatomic ions exhibits
many remarkable anomalies, notable among them the fact that polyatomic ions
of similar size show large differences in their diffusivity values. This fact
has drawn relatively little interest in scientific discussions. We show here
that a mode-coupling theory (MCT) can provide a physically meaningful
interpretation of the anomalous diffusivity of polyatomic ions in water, by
including the contribution of rotational jumps to translational friction. The
two systems discussed here, namely the aqueous nitrate ion and the aqueous acetate ion,
exhibit largely different diffusivity values despite having similar ionic radii,
due to the differences in the rates of their rotational jump motions. We
further verify the mode-coupling theory formalism by comparing it with
experimental and simulation results, which agree well with the theoretical
predictions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Regulating Access to System Sensors in Cooperating Programs | Modern operating systems such as Android, iOS, Windows Phone, and Chrome OS
support a cooperating program abstraction. Instead of placing all functionality
into a single program, programs cooperate to complete tasks requested by users.
However, untrusted programs may exploit interactions with other programs to
obtain unauthorized access to system sensors either directly or through
privileged services. Researchers have proposed that programs should only be
authorized to access system sensors on a user-approved input event, but these
methods do not account for possible delegation done by the program receiving
the user input event. Furthermore, proposed delegation methods do not enable
users to control the use of their input events accurately. In this paper, we
propose ENTRUST, a system that enables users to authorize sensor operations
that follow their input events, even if the sensor operation is performed by a
program different from the program receiving the input event. ENTRUST tracks
user input as well as delegation events and restricts the execution of such
events to compute unambiguous delegation paths to enable accurate and reusable
authorization of sensor operations. To demonstrate this approach, we implement
the ENTRUST authorization system for Android. We find, via a laboratory user
study, that attacks can be prevented at a much higher rate (54-64%
improvement); and via a field user study, that ENTRUST requires no more than
three additional authorizations per program with respect to the first-use
approach, while incurring modest performance (<1%) and memory overheads (5.5 KB
per program).
| 1 | 0 | 0 | 0 | 0 | 0 |
Identification of Treatment Effects under Conditional Partial Independence | Conditional independence of treatment assignment from potential outcomes is a
commonly used but nonrefutable assumption. We derive identified sets for
various treatment effect parameters under nonparametric deviations from this
conditional independence assumption. These deviations are defined via a
conditional treatment assignment probability, which makes it straightforward to
interpret. Our results can be used to assess the robustness of empirical
conclusions obtained under the baseline conditional independence assumption.
| 0 | 0 | 0 | 1 | 0 | 0 |
First demonstration of emulsion multi-stage shifter for accelerator neutrino experiment in J-PARC T60 | We describe the first ever implementation of an emulsion multi-stage shifter
in an accelerator neutrino experiment. The system was installed in the neutrino
monitor building in J-PARC as a part of a test experiment T60 and stable
operation was maintained for a total of 126.6 days. By applying time
information to emulsion films, various results were obtained. Time resolutions
of 5.3 to 14.7 s were evaluated in an operation spanning 46.9 days (time
resolved numbers of 3.8--1.4$\times10^{5}$). By using timing and spatial
information, a reconstruction of coincident events consisting of high-multiplicity
events and vertex events, including neutrino events, was performed.
Emulsion events were matched to events observed by INGRID, one of the near
detectors of the T2K experiment, with high reliability (98.5\%), and a hybrid
analysis was established via use of the multi-stage shifter. The results
demonstrate that the multi-stage shifter is feasible for use in neutrino
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Generative Adversarial Networks for Compressed Sensing Automates MRI | Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear
inverse task demanding time and resource intensive computations that can
substantially trade off {\it accuracy} for {\it speed} in real-time imaging. In
addition, state-of-the-art compressed sensing (CS) analytics are not cognizant
of the image {\it diagnostic quality}. To cope with these challenges we put
forth a novel CS framework that draws on the benefits of generative adversarial
networks (GANs) to learn a (low-dimensional) manifold of diagnostic-quality MR
images from historical patients. Leveraging a mixture of least-squares (LS)
GANs and pixel-wise $\ell_1$ cost, a deep residual network with skip
connections is trained as the generator that learns to remove the {\it
aliasing} artifacts by projecting onto the manifold. LSGAN learns the texture
details, while $\ell_1$ controls the high-frequency noise. A multilayer
convolutional neural network is then jointly trained based on diagnostic
quality images to discriminate the projection quality. The test phase performs
feed-forward propagation over the generator network that demands a very low
computational overhead. Extensive evaluations are performed on a large
contrast-enhanced MR dataset of pediatric patients. In particular, ratings by
expert radiologists corroborate that GANCS retrieves high-contrast images with
detailed texture relative to conventional CS and pixel-wise schemes. In
addition, it offers reconstruction in a few milliseconds, two
orders of magnitude faster than state-of-the-art CS-MRI schemes.
| 1 | 0 | 0 | 1 | 0 | 0 |
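The generator objective described in this abstract mixes a least-squares GAN term (texture) with a pixel-wise $\ell_1$ cost (high-frequency noise control). A minimal PyTorch sketch of such a mixed loss; the mixing weight `lam` and the toy discriminator are hypothetical, not the paper's architecture:

```python
import torch
import torch.nn as nn

def generator_loss(discriminator, x_recon, x_gt, lam=0.9):
    """LSGAN adversarial term plus pixel-wise L1 fidelity term."""
    adv = torch.mean((discriminator(x_recon) - 1.0) ** 2)  # push D(G(x)) -> 1
    pix = torch.mean(torch.abs(x_recon - x_gt))            # pixel-wise L1
    return lam * adv + (1.0 - lam) * pix

# toy check with a trivial discriminator on 1-channel 32x32 "images"
D = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1))
x_hat = torch.randn(4, 1, 32, 32, requires_grad=True)
x = torch.randn(4, 1, 32, 32)
loss = generator_loss(D, x_hat, x)
loss.backward()
print(float(loss))
```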
On multiplicative independence of rational function iterates | We give lower bounds for the degree of multiplicative combinations of
iterates of rational functions (with certain exceptions) over a general field,
establishing the multiplicative independence of said iterates. This leads to a
generalisation of Gao's method for constructing elements in the finite field
$\mathbb{F}_{q^n}$ whose orders are larger than any polynomial in $n$ when $n$
becomes large. Additionally, we discuss the finiteness of polynomials which
translate a given finite set of polynomials to become multiplicatively
dependent.
| 0 | 0 | 1 | 0 | 0 | 0 |
Haptic Assembly and Prototyping: An Expository Review | An important application of haptic technology to digital product development
is in virtual prototyping (VP), part of which deals with interactive planning,
simulation, and verification of assembly-related activities, collectively
called virtual assembly (VA). In spite of numerous research and development
efforts over the last two decades, the industrial adoption of haptic-assisted
VP/VA has been slower than expected. Putting hardware limitations aside, the
main roadblocks faced in software development can be traced to the lack of
effective and efficient computational models of haptic feedback. Such models
must 1) accommodate the inherent geometric complexities faced when assembling
objects of arbitrary shape; and 2) conform to the computation time limitation
imposed by the notorious frame rate requirements---namely, 1 kHz for haptic
feedback compared to the more manageable 30-60 Hz for graphic rendering. The
simultaneous fulfillment of these competing objectives is far from trivial.
This survey presents some of the conceptual and computational challenges and
opportunities as well as promising future directions in haptic-assisted VP/VA,
with a focus on haptic assembly from a geometric modeling and spatial reasoning
perspective. The main focus is on revisiting definitions and classifications of
different methods used to handle the constrained multibody simulation in
real-time, ranging from physics-based and geometry-based to hybrid and unified
approaches using a variety of auxiliary computational devices to specify,
impose, and solve assembly constraints. Particular attention is given to the
newly developed 'analytic methods' inherited from motion planning and protein
docking that have shown great promise as an alternative paradigm to the more
popular combinatorial methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
On links between horocyclic and geodesic orbits on geometrically infinite surfaces | We study the topological dynamics of the horocycle flow $h_\mathbb{R}$ on a
geometrically infinite hyperbolic surface S. Let u be a non-periodic vector for
$h_\mathbb{R}$ in T^1 S. Suppose that the half-geodesic $u(\mathbb{R}^+)$ is
almost minimizing and that the injectivity radius along $u(\mathbb{R}^+)$ has a
finite inferior limit $Inj(u(\mathbb{R}^+))$. We prove that the closure of
$h_\mathbb{R} u$ meets the geodesic orbit along an unbounded sequence of points
$g_{t_n} u$. Moreover, if $Inj(u(\mathbb{R}^+)) = 0$, the whole half-orbit
$g_{\mathbb{R}^+} u$ is contained in $\overline{h_\mathbb{R} u}$. When
$Inj(u(\mathbb{R}^+)) > 0$, it is known that in general $g_{\mathbb{R}^+} u
\not\subset \overline{h_\mathbb{R} u}$. Yet, we give a construction where
$Inj(u(\mathbb{R}^+)) > 0$ and $g_{\mathbb{R}^+} u \subset \overline{h_\mathbb{R} u}$,
which also constitutes a counterexample to Proposition 3 of [Led97].
| 0 | 0 | 1 | 0 | 0 | 0 |
Waist size for cusps in hyperbolic 3-manifolds II | The waist size of a cusp in an orientable hyperbolic 3-manifold is the length
of the shortest nontrivial curve generated by a parabolic isometry in the
maximal cusp boundary. Previously, it was shown that the smallest possible
waist size, which is 1, is realized only by the cusp in the figure-eight knot
complement. In this paper, it is proved that the next two smallest waist sizes
are realized uniquely for the cusps in the $5_2$ knot complement and the
manifold obtained by (2,1)-surgery on the Whitehead link. One application is an
improvement on the universal upper bound for the length of an unknotting tunnel
in a 2-cusped hyperbolic 3-manifold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Landau levels from neutral Bogoliubov particles in two-dimensional nodal superconductors under strain and doping gradients | Motivated by recent work on strain-induced pseudo-magnetic fields in Dirac
and Weyl semimetals, we analyze the possibility of analogous fields in
two-dimensional nodal superconductors. We consider the prototypical case of a
d-wave superconductor, a representative of the cuprate family, and find that
the presence of weak strain leads to pseudo-magnetic fields and Landau
quantization of Bogoliubov quasiparticles in the low-energy sector. A similar
effect is induced by the presence of generic, weak doping gradients. In
contrast to genuine magnetic fields in superconductors, the strain- and doping
gradient-induced pseudo-magnetic fields couple in a way that preserves
time-reversal symmetry and is not subject to the screening associated with the
Meissner effect. These effects can be probed by tuning weak applied
supercurrents which lead to shifts in the energies of the Landau levels and
hence to quantum oscillations in thermodynamic and transport quantities.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modular Labelled Sequent Calculi for Abstract Separation Logics | Abstract separation logics are a family of extensions of Hoare logic for
reasoning about programs that manipulate resources such as memory locations.
These logics are "abstract" because they are independent of any particular
concrete resource model. Their assertion languages, called propositional
abstract separation logics (PASLs), extend the logic of (Boolean) Bunched
Implications (BBI) in various ways. In particular, these logics contain the
connectives $*$ and $-\!*$, denoting the composition and extension of resources
respectively.
This added expressive power comes at a price since the resulting logics are
all undecidable. Given their wide applicability, even a semi-decision procedure
for these logics is desirable. Although several PASLs and their relationships
with BBI are discussed in the literature, the proof theory and automated
reasoning for these logics were open problems solved by the conference version
of this paper, which developed a modular proof theory for various PASLs using
cut-free labelled sequent calculi. This paper non-trivially improves upon this
previous work by giving a general framework of calculi in which any new axiom
of a certain form in the logic corresponds to an inference rule in our
framework, and the completeness proof is generalised to cover such axioms.
Our base calculus handles Calcagno et al.'s original logic of separation
algebras by adding sound rules for partial-determinism and cancellativity,
while preserving cut-elimination. We then show that many important properties
in separation logic, such as indivisible unit, disjointness, splittability, and
cross-split, can be expressed in our general axiom form. Thus our framework
offers inference rules and completeness for these properties for free. Finally,
we show how our calculi reduce to calculi with global label substitutions,
enabling more efficient implementation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Affine maps between quadratic assignment polytopes and subgraph isomorphism polytopes | We consider two polytopes. The quadratic assignment polytope $QAP(n)$ is the
convex hull of the set of tensors $x\otimes x$, $x \in P_n$, where $P_n$ is the
set of $n\times n$ permutation matrices. The second polytope is defined as
follows. For every permutation of the vertices of the complete graph $K_n$ we
consider the corresponding $\binom{n}{2} \times \binom{n}{2}$ permutation matrix of
the edges of $K_n$. The Young polytope $P((n-2,2))$ is the convex hull of all
such matrices.
In 2009, S. Onn showed that the subgraph isomorphism problem can be reduced
to optimization both over $QAP(n)$ and over $P((n-2,2))$. He also posed the
question whether $QAP(n)$ and $P((n-2,2))$, having $n!$ vertices each, are
isomorphic. We show that $QAP(n)$ and $P((n-2,2))$ are not isomorphic. Also, we
show that $QAP(n)$ is a face of $P((2n-2,2))$, but $P((n-2,2))$ is a projection
of $QAP(n)$.
| 1 | 0 | 1 | 0 | 0 | 0 |
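The vertex description of $QAP(n)$ given in this abstract, tensors $x \otimes x$ over $n \times n$ permutation matrices, can be enumerated directly for small $n$. A minimal sketch using Kronecker products:

```python
import numpy as np
from itertools import permutations

def qap_vertices(n):
    """Vertices of QAP(n): Kronecker products P (x) P over all n x n
    permutation matrices P."""
    eye = np.eye(n, dtype=int)
    return [np.kron(P, P)
            for P in (eye[list(p)] for p in permutations(range(n)))]

V = qap_vertices(3)
print(len(V), V[0].shape)  # 6 vertices (= 3!), each a 9 x 9 0/1 matrix
```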
LinXGBoost: Extension of XGBoost to Generalized Local Linear Models | XGBoost is often presented as the algorithm that wins every ML competition.
Surprisingly, this is true even though predictions are piecewise constant. This
might be justified in high dimensional input spaces, but when the number of
features is low, a piecewise linear model is likely to perform better. We extend
XGBoost into LinXGBoost, which stores a linear model at each leaf. This
extension, equivalent to piecewise regularized least-squares, is particularly
attractive for regression of functions that exhibit jumps or discontinuities.
Such functions are notoriously hard to regress. Our extension is compared to
the vanilla XGBoost and Random Forest in experiments on both synthetic and
real-world data sets.
| 1 | 0 | 0 | 1 | 0 | 0 |
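To illustrate the piecewise-linear idea, the following sketch fits one tree and then a regularized least-squares (ridge) model within each leaf. This is a simplified stand-in, not the LinXGBoost objective itself, which modifies the boosting loss:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

class PiecewiseLinearTree:
    """Tree partitions the input space; a ridge model is fit per leaf."""
    def __init__(self, max_leaf_nodes=8, alpha=1.0):
        self.tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        self.alpha = alpha
        self.models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)
        for leaf in np.unique(leaves):
            m = leaves == leaf
            self.models[leaf] = Ridge(alpha=self.alpha).fit(X[m], y[m])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        return np.array([self.models[l].predict(x[None, :])[0]
                         for l, x in zip(leaves, X)])

X = np.linspace(0, 1, 200)[:, None]
y = np.where(X[:, 0] < 0.5, 2 * X[:, 0], 2 * X[:, 0] - 1)  # jump at 0.5
print(PiecewiseLinearTree().fit(X, y).predict(X[:3]))
```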
Superfluidity and relaxation dynamics of a laser-stirred 2D Bose gas | We investigate the superfluid behavior of a two-dimensional (2D) Bose gas of
$^{87}$Rb atoms using classical field dynamics. In the experiment by R.
Desbuquois \textit{et al.}, Nat. Phys. \textbf{8}, 645 (2012), a 2D
quasicondensate in a trap is stirred by a blue-detuned laser beam along a
circular path around the trap center. Here, we study this experiment from a
theoretical perspective. The heating induced by stirring increases rapidly
above a velocity $v_c$, which we define as the critical velocity. We identify
the superfluid, the crossover, and the thermal regime by a finite, a sharply
decreasing, and a vanishing critical velocity, respectively. We demonstrate
that the onset of heating occurs due to the creation of vortex-antivortex
pairs. A direct comparison of our numerical results to the experimental ones
shows good agreement, if a systematic shift of the critical phase-space density
is included. We relate this shift to the absence of thermal equilibrium between
the condensate and the thermal wings, which were used in the experiment to
extract the temperature. We expand on this observation by studying the full
relaxation dynamics between the condensate and the thermal cloud.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measuring Player Retention and Monetization using the Mean Cumulative Function | Game analytics supports game development by providing direct quantitative
feedback about player experience. Player retention and monetization in
particular have become central business statistics in free-to-play game
development. Many metrics have been used for this purpose. However, game
developers often want to perform analytics in a timely manner before all users
have churned from the game. This causes data censoring which makes many metrics
biased. In this work, we introduce how the Mean Cumulative Function (MCF) can
be used to generalize many academic metrics to censored data. The MCF allows us
to estimate the expected value of a metric over time, which for example may be
the number of game sessions, number of purchases, total playtime and lifetime
value. Furthermore, the popular retention rate metric is the derivative of this
estimate applied to the expected number of distinct days played. Statistical
tools based on the MCF allow game developers to determine whether a given
change improves a game, or whether a game is good enough for public
release. The advantages of this approach are demonstrated on a real
in-development free-to-play mobile game, the Hipster Sheep.
| 0 | 0 | 0 | 1 | 0 | 0 |
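The MCF handles censoring by dividing each event increment by the number of users still under observation at that time. A minimal sketch of the nonparametric estimator, with hypothetical per-user event times and follow-up lengths:

```python
import numpy as np

def mean_cumulative_function(event_times, followup):
    """Nonparametric MCF estimate for recurrent events under right censoring.
    event_times[i]: times (e.g. days since install) of user i's events
                    (sessions, purchases, ...);
    followup[i]:    how long user i has been observed so far.
    At each event time t the MCF increases by
    (#events at t) / (#users with followup >= t)."""
    followup = np.asarray(followup, dtype=float)
    times = np.sort(np.concatenate([np.asarray(t, float) for t in event_times]))
    at_risk = np.array([(followup >= t).sum() for t in times])
    return times, np.cumsum(1.0 / at_risk)

# three users observed for 30, 14 and 7 days
t, mcf = mean_cumulative_function([[1, 3, 20], [2, 10], [1]], [30, 14, 7])
print(list(zip(t, np.round(mcf, 3))))
```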
Bohemian Upper Hessenberg Toeplitz Matrices | We look at Bohemian matrices, specifically those with entries from $\{-1, 0,
{+1}\}$. Further, we specialize the matrices to be upper Hessenberg, with
subdiagonal entries $1$. Further still, we consider Toeplitz matrices of this kind.
Many properties remain after these specializations, some of which surprised us.
Focusing on only those matrices whose characteristic polynomials have maximal
height allows us to explicitly identify these polynomials and give a lower
bound on their height. This bound is exponential in the order of the matrix.
| 1 | 0 | 0 | 0 | 0 | 0 |
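For small orders, the family described in this abstract can be enumerated directly: $3^n$ upper Hessenberg Toeplitz matrices with subdiagonal $1$ and upper diagonals drawn from $\{-1, 0, +1\}$. An illustrative brute-force sketch computing the maximal characteristic-polynomial height (the paper's results are analytic, not enumerative):

```python
from itertools import product
from sympy import Matrix, symbols

def upper_hessenberg_toeplitz(diags, n):
    """n x n Toeplitz matrix: entry (i, j) = diags[j - i] for j >= i,
    1 on the subdiagonal (j == i - 1), and 0 below that."""
    return Matrix(n, n, lambda i, j: diags[j - i] if j >= i
                                     else (1 if j == i - 1 else 0))

def max_height(n):
    """Maximal height (largest |coefficient|) of the characteristic
    polynomial over all choices of upper diagonals in {-1, 0, 1}."""
    x = symbols('x')
    best = 0
    for diags in product((-1, 0, 1), repeat=n):
        p = upper_hessenberg_toeplitz(diags, n).charpoly(x)
        best = max(best, max(abs(c) for c in p.all_coeffs()))
    return best

print([max_height(n) for n in range(2, 5)])
```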
A Study of Energy Trading in a Low-Voltage Network: Centralised and Distributed Approaches | Over the past years, distributed energy resources (DER) have been the object
of many studies, which recognise and establish their emerging role in the
future of power systems. However, the implementation of many scenarios and
mechanisms remains challenging. This paper provides an overview of a local
energy market and explores the approaches in which consumers and prosumers take
part in this market. Therefore, the purpose of this paper is to review the
benefits of local markets for users. This study assesses the performance of
distributed and centralised trading mechanisms, comparing scenarios where the
objective of the exchange may be based on individual or social welfare.
Simulation results show the advantages of local markets and demonstrate the
importance of advancing the understanding of local markets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Evidence Logics with Relational Evidence | Dynamic evidence logics are logics for reasoning about the evidence and
evidence-based beliefs of agents in a dynamic environment. In this paper, we
introduce a family of logics for reasoning about relational evidence: evidence
that involves an ordering of states in terms of their relative plausibility.
We provide sound and complete axiomatizations for the logics. We also present
several evidential actions and prove soundness and completeness for the
associated dynamic logics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Review of methods for assessing the causal effect of binary interventions from aggregate time-series observational data | Researchers are often interested in assessing the impact of an intervention
on an outcome of interest in situations where the intervention is
non-randomised, information is available at an aggregate level, the
intervention is only applied to one or few units, the intervention is binary,
and there are outcome measurements at multiple time points. In this paper, we
review existing methods for causal inference in the setup just outlined. We
detail the assumptions underlying each method, emphasise connections between
the different approaches and provide guidelines regarding their practical
implementation. Several open problems are identified, thus highlighting the need
for future research.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the Difference Between Closest, Furthest, and Orthogonal Pairs: Nearly-Linear vs Barely-Subquadratic Complexity in Computational Geometry | Point location problems for $n$ points in $d$-dimensional Euclidean space
(and $\ell_p$ spaces more generally) have typically had two kinds of
running-time solutions:
* (Nearly-Linear) less than $d^{poly(d)} \cdot n \log^{O(d)} n$ time, or
* (Barely-Subquadratic) $f(d) \cdot n^{2-1/\Theta(d)}$ time, for various $f$.
For small $d$ and large $n$, "nearly-linear" running times are generally
feasible, while "barely-subquadratic" times are generally infeasible. For
example, in the Euclidean metric, finding a Closest Pair among $n$ points in
${\mathbb R}^d$ is nearly-linear, solvable in $2^{O(d)} \cdot n \log^{O(1)} n$
time, while known algorithms for Furthest Pair (the diameter of the point set)
are only barely-subquadratic, requiring $\Omega(n^{2-1/\Theta(d)})$ time. Why
do these proximity problems have such different time complexities? Is there a
barrier to obtaining nearly-linear algorithms for problems which are currently
only barely-subquadratic?
We give a novel exact and deterministic self-reduction for the Orthogonal
Vectors problem on $n$ vectors in $\{0,1\}^d$ to $n$ vectors in ${\mathbb
Z}^{\omega(\log d)}$ that runs in $2^{o(d)}$ time. As a consequence,
barely-subquadratic problems such as Euclidean diameter, Euclidean bichromatic
closest pair, ray shooting, and incidence detection do not have
$O(n^{2-\epsilon})$ time algorithms (in Turing models of computation) for
dimensionality $d = \omega(\log \log n)^2$, unless the popular Orthogonal
Vectors Conjecture and the Strong Exponential Time Hypothesis are false. That
is, while poly-log-log-dimensional Closest Pair is in $n^{1+o(1)}$ time, the
analogous case of Furthest Pair can encode larger-dimensional problems
conjectured to require $n^{2-o(1)}$ time. We also show that the All-Nearest
Neighbors problem in $\omega(\log n)$ dimensions requires $n^{2-o(1)}$ time to
solve, assuming either of the above conjectures.
| 1 | 0 | 0 | 0 | 0 | 0 |
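For reference, the Orthogonal Vectors problem at the heart of this reduction asks whether some pair of $0/1$ vectors has an empty coordinate-wise AND; the conjectured-hard baseline is the naive quadratic scan, sketched below:

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Naive O(n^2 * d) test for Orthogonal Vectors: is there a pair of 0/1
    vectors whose coordinate-wise AND is all zeros? The OV Conjecture asserts
    no n^{2 - eps} algorithm exists for d = omega(log n)."""
    return any(all(a & b == 0 for a, b in zip(u, v))
               for u, v in combinations(vectors, 2))

print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)]))  # True: first two
```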
Efficient Nonparametric Bayesian Inference For X-Ray Transforms | We consider the statistical inverse problem of recovering a function $f: M
\to \mathbb R$, where $M$ is a smooth compact Riemannian manifold with
boundary, from measurements of general $X$-ray transforms $I_a(f)$ of $f$,
corrupted by additive Gaussian noise. For $M$ equal to the unit disk with
`flat' geometry and $a=0$ this reduces to the standard Radon transform, but our
general setting allows for anisotropic media $M$ and can further model local
`attenuation' effects -- both highly relevant in practical imaging problems
such as SPECT tomography. We propose a nonparametric Bayesian inference
approach based on standard Gaussian process priors for $f$. The posterior
reconstruction of $f$ corresponds to a Tikhonov regulariser with a reproducing
kernel Hilbert space norm penalty that does not require the calculation of the
singular value decomposition of the forward operator $I_a$. We prove
Bernstein-von Mises theorems that entail that posterior-based inferences such
as credible sets are valid and optimal from a frequentist point of view for a
large family of semi-parametric aspects of $f$. In particular we derive the
asymptotic distribution of smooth linear functionals of the Tikhonov
regulariser, which is shown to attain the semi-parametric Cramér-Rao
information bound. The proofs rely on an invertibility result for the `Fisher
information' operator $I_a^*I_a$ between suitable function spaces, a result of
independent interest that relies on techniques from microlocal analysis. We
illustrate the performance of the proposed method via simulations in various
settings.
| 0 | 0 | 1 | 1 | 0 | 0 |
Glass-Box Program Synthesis: A Machine Learning Approach | Recently proposed models which learn to write computer programs from data use
either input/output examples or rich execution traces. Instead, we argue that a
novel alternative is to use a glass-box loss function, given as a program
itself that can be directly inspected. Glass-box optimization covers a wide
range of problems, from computing the greatest common divisor of two integers,
to learning-to-learn problems.
In this paper, we present an intelligent search system which learns, given
the partial program and the glass-box problem, the probabilities over the space
of programs. We empirically demonstrate that our informed search procedure
leads to significant improvements compared to brute-force program search, both
in terms of accuracy and time. For our experiments we use rich context free
grammars inspired by number theory, text processing, and algebra. Our results
show that (i) performing 4 rounds of our framework typically solves about 70%
of the target problems, (ii) our framework can improve itself even in domain
agnostic scenarios, and (iii) it can solve problems that would be otherwise too
slow to solve with brute-force search.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cable-Driven Actuation for Highly Dynamic Robotic Systems | This paper presents design and experimental evaluations of an articulated
robotic limb called Capler-Leg. The key element of Capler-Leg is its
single-stage cable-pulley transmission combined with a high-gap radius motor.
Our cable-pulley system is designed to be as light-weight as possible and to
additionally serve as the primary cooling element, thus significantly
increasing the power density and efficiency of the overall system. The total
weight of the active elements on the leg, i.e. the stators and the rotors,
contributes more than 60% of the total leg weight, which is an order of
magnitude higher than in most existing robots. The resulting robotic leg has low
inertia, high torque transparency, low manufacturing cost, no backlash, and a
low number of parts. The Capler-Leg system itself serves as an experimental setup
for evaluating the proposed cable-pulley design in terms of robustness and
efficiency. A continuous jump experiment shows a remarkable 96.5% recuperation
rate, measured at the battery output. This means that almost all the mechanical
energy expended during push-off is returned to the battery during
touch-down.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improved $A_1-A_\infty$ and related estimates for commutators of rough singular integrals | An $A_1-A_\infty$ estimate improving a previous result in arXiv:1607.06432 is
obtained. Also new a result in terms of the ${A_\infty}$ constant and the one
supremum $A_q-A_\infty^{\exp}$ constant, is proved, providing a counterpart for
the result obained in arXiv:1705.08364. Both of the preceding results rely upon
a sparse domination in terms of bilinear forms for $[b,T_\Omega]$ with
$\Omega\in L^\infty(\mathbb{S}^{n-1})$ and $b\in BMO$ which is established
relying upon techniques from arXiv:1705.07397.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ramsey Classes with Closure Operations (Selected Combinatorial Applications) | We state the Ramsey property of classes of ordered structures with closures
and given local properties. This generalises many old and new results: the
Nešetřil-Rödl Theorem, the author's Ramsey lift of bowtie-free
graphs, as well as the Ramsey Theorem for Finite Models (i.e. structures with
both functions and relations), thus providing the ultimate generalisation of the
Structural Ramsey Theorem. We give here a more concise reformulation of the
authors' recent paper "All those Ramsey classes (Ramsey classes with closures and
forbidden homomorphisms)", and the main purpose of this paper is to show several
applications. In particular, we prove the Ramsey property of ordered sets with
equivalences on the power set, a Ramsey theorem for Steiner systems, a Ramsey
theorem for resolvable designs, and partial Ramsey-type results for
$H$-factorizable graphs. All of these results are natural and easy to state, yet
proofs involve most of the theory developed.
| 1 | 0 | 1 | 0 | 0 | 0 |
On Optimal Spectrum Access of Cognitive Relay With Finite Packet Buffer | We investigate a cognitive radio system where secondary user (SU) relays
primary user (PU) packets using two-phase relaying. SU transmits its own
packets with some access probability in relaying phase using time sharing. PU
and SU have queues of finite capacity which results in packet loss when the
queues are full. Utilizing knowledge of relay queue state, SU aims to maximize
its packet throughput while keeping packet loss probability of PU below a
threshold. By exploiting structure of the problem, we formulate it as a linear
program and find optimal access policy of SU. We also propose low complexity
sub-optimal access policies, namely constant probability transmission and step
transmission. Numerical results are presented to compare performance of
proposed methods and study effect of queue sizes on packet throughput.
| 1 | 0 | 0 | 0 | 0 | 0 |
Visibility of minorities in social networks | Homophily can put minority groups at a disadvantage by restricting their
ability to establish links with people from a majority group. This can limit
the overall visibility of minorities in the network. Building on a
Barabási-Albert model variation with groups and homophily, we show how the
visibility of minority groups in social networks is a function of (i) their
relative group size and (ii) the presence or absence of homophilic behavior. We
provide an analytical solution for this problem and demonstrate the existence
of asymmetric behavior. Finally, we study the visibility of minority groups in
examples of real-world social networks: sexual contacts, scientific
collaboration, and scientific citation. Our work presents a foundation for
assessing the visibility of minority groups in social networks in which
homophilic or heterophilic behavior is present.
| 1 | 1 | 0 | 0 | 0 | 0 |
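A minimal sketch of a Barabási-Albert variant with two groups and a tunable homophily parameter, in the spirit of the model used above; the seed-graph construction and all parameter values are illustrative, not the authors' exact specification:

```python
import numpy as np

def ba_homophily(n, m, minority_frac, h, seed=0):
    """Preferential attachment with homophily: a new node of group g attaches
    to existing node j with probability proportional to
    degree_j * (h if same group else 1 - h). Returns groups and degrees."""
    rng = np.random.default_rng(seed)
    groups = (rng.random(n) < minority_frac).astype(int)  # 1 = minority
    deg = np.zeros(n)
    deg[:m + 1] = m  # small fully connected seed graph
    for i in range(m + 1, n):
        w = deg[:i] * np.where(groups[:i] == groups[i], h, 1 - h)
        targets = rng.choice(i, size=m, replace=False, p=w / w.sum())
        deg[targets] += 1
        deg[i] = m
    return groups, deg

groups, deg = ba_homophily(n=2000, m=2, minority_frac=0.2, h=0.8)
# share of total degree held by the minority vs. its population share
print(deg[groups == 1].sum() / deg.sum(), (groups == 1).mean())
```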
Interpreted Formalisms for Configurations | Imprecise and incomplete specification of system \textit{configurations}
threatens safety, security, functionality, and other critical system properties
and uselessly enlarges the configuration spaces to be searched by configuration
engineers and auto-tuners. To address these problems, this paper introduces
\textit{interpreted formalisms based on real-world types for configurations}.
Configuration values are lifted to values of real-world types, which we
formalize as \textit{subset types} in Coq. Values of these types are dependent
pairs whose components are values of underlying Coq types and proofs of
additional properties about them. Real-world types both extend and further
constrain \textit{machine-level} configurations, enabling richer, proof-based
checking of their consistency with real-world constraints. Tactic-based proof
scripts are written once to automate the construction of proofs, if proofs
exist, for configuration fields and whole configurations. \textit{Failures to
prove} reveal real-world type errors. Evaluation is based on a case study of
combinatorial optimization of Hadoop performance by meta-heuristic search over
Hadoop configurations spaces.
| 1 | 0 | 0 | 0 | 0 | 0 |
Solvability and microlocal analysis of the fractional Eringen wave equation | We discuss unique existence and microlocal regularity properties of Sobolev
space solutions to the fractional Eringen wave equation, initially given in the
form of a system of equations in which the classical non-local Eringen
constitutive equation is generalized by employing space-fractional derivatives.
Numerical examples illustrate the shape of solutions in dependence of the order
of the space-fractional derivative.
| 0 | 0 | 1 | 0 | 0 | 0 |
High-Mobility OFDM Downlink Transmission with Large-Scale Antenna Array | In this correspondence, we propose a new receiver design for high-mobility
orthogonal frequency division multiplexing (OFDM) downlink transmissions with a
large-scale antenna array. The downlink signal experiences the challenging fast
time-varying propagation channel. The time-varying nature originates from the
multiple carrier frequency offsets (CFOs) due to the transceiver oscillator
frequency offset (OFO) and multiple Doppler shifts. Let the received signal
first go through a carefully designed beamforming network, which could separate
multiple CFOs in the spatial domain with sufficient number of receive antennas.
A joint estimation method for the Doppler shifts and the OFO is further
developed. Then the conventional single-CFO compensation and channel estimation
method can be carried out for each beamforming branch. The proposed receiver
design avoids the complicated time-varying channel estimation, which differs a
lot from the conventional methods. More importantly, the proposed scheme can be
applied to the commonly used time-varying channel models, such as the Jakes'
channel model.
| 1 | 0 | 1 | 0 | 0 | 0 |
Feature Enhancement in Visually Impaired Images | One of the major open problems in computer vision is detection of features in
visually impaired images. In this paper, we describe a potential solution using
Phase Stretch Transform, a new computational approach for image analysis, edge
detection and resolution enhancement that is inspired by the physics of the
photonic time stretch technique. We mathematically derive the intrinsic
nonlinear transfer function and demonstrate how it leads to (1) superior
performance at low contrast levels and (2) a reconfigurable operator for
hyper-dimensional classification. We prove that the Phase Stretch Transform
equalizes the input image brightness across the range of intensities resulting
in a high dynamic range in visually impaired images. We also show further
improvement in the dynamic range by combining our method with the conventional
techniques. Finally, our results show a method for computation of mathematical
derivatives via group delay dispersion operations.
| 1 | 1 | 0 | 0 | 0 | 0 |
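The operator described in this abstract applies a nonlinear frequency-dependent phase in the Fourier domain and keeps the phase of the result, which accentuates edges. A minimal sketch with an illustrative phase kernel (the published PST kernel has a specific profile; this simplified one only conveys the idea):

```python
import numpy as np

def phase_stretch_transform(img, strength=0.3, warp=15.0):
    """Multiply the image spectrum by a radial nonlinear phase kernel
    (emulating dispersive propagation), transform back, keep the phase.
    Kernel shape and parameters are illustrative, not the authors' design."""
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    r = np.hypot(*np.meshgrid(fx, fy))               # radial frequency
    phi = strength * warp * r * np.arctan(warp * r)  # smooth nonlinear phase
    out = np.fft.ifft2(np.fft.fft2(img) * np.exp(-1j * phi))
    return np.angle(out)

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0  # bright square on a dark field
edges = phase_stretch_transform(img)
print(edges.min(), edges.max())
```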
Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks | Given two or more Deep Neural Networks (DNNs) with the same or similar
architectures, and trained on the same dataset, but trained with different
solvers, parameters, hyper-parameters, regularization, etc., can we predict
which DNN will have the best test accuracy, and can we do so without peeking at
the test data? In this paper, we show how to use a new Theory of Heavy-Tailed
Self-Regularization (HT-SR) to answer this. HT-SR suggests, among other things,
that modern DNNs exhibit what we call Heavy-Tailed Mechanistic Universality
(HT-MU), meaning that the correlations in the layer weight matrices can be fit
to a power law with exponents that lie in common Universality classes from
Heavy-Tailed Random Matrix Theory (HT-RMT). From this, we develop a Universal
capacity control metric that is a weighted average of these power-law (PL)
exponents. Rather than considering small toy NNs, we examine over 50 different,
large-scale pre-trained DNNs, ranging over 15 different architectures, trained
on ImageNet, each of which has been reported to have different test
accuracies. We show that this new capacity metric correlates very well with the
reported test accuracies of these DNNs, looking across each architecture
(VGG16/.../VGG19, ResNet10/.../ResNet152, etc.). We also show how to
approximate the metric by the more familiar Product Norm capacity measure, as
the average of the log Frobenius norm of the layer weight matrices. Our
approach requires no changes to the underlying DNN or its loss function, it
does not require us to train a model (although it could be used to monitor
training), and it does not even require access to the ImageNet data.
| 1 | 0 | 0 | 1 | 0 | 0 |
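A minimal sketch of the kind of metric described here: fit a power-law tail exponent to each layer's empirical spectral density and average the exponents with per-layer weights. The Hill estimator, the tail fraction, and the log-max-eigenvalue weighting are illustrative choices, not necessarily the authors':

```python
import numpy as np

def hill_alpha(eigs, k_frac=0.5):
    """Hill estimator of a power-law tail exponent:
    alpha = 1 + k / sum(log(lambda_i / lambda_(k))) over the top k
    eigenvalues (k_frac is a hypothetical choice)."""
    eigs = np.sort(eigs)[::-1]
    k = max(2, int(k_frac * len(eigs)))
    tail = eigs[:k]
    return 1.0 + k / np.sum(np.log(tail / tail[-1]))

def weighted_alpha_metric(weight_matrices):
    """Weighted average of per-layer exponents fitted to the eigenvalues of
    W^T W; weights here are log of the largest eigenvalue (one plausible
    choice in the spirit of HT-SR)."""
    alphas, weights = [], []
    for W in weight_matrices:
        eigs = np.linalg.eigvalsh(W.T @ W / W.shape[0])
        alphas.append(hill_alpha(eigs[eigs > 1e-12]))
        weights.append(np.log(eigs.max()))
    return np.average(alphas, weights=weights)

rng = np.random.default_rng(0)
layers = [rng.standard_normal((512, 256)), rng.standard_normal((256, 128))]
print(weighted_alpha_metric(layers))
```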
Confidence Interval Estimators for MOS Values | For the quantification of QoE, subjects often provide individual rating
scores on certain rating scales which are then aggregated into Mean Opinion
Scores (MOS). From the observed sample data, the expected value is to be
estimated. While the sample average only provides a point estimator, confidence
intervals (CI) are an interval estimate which contains the desired expected
value with a given confidence level. In subjective studies, the number of
subjects performing the test is typically small, especially in lab
environments. The used rating scales are bounded and often discrete like the
5-point ACR rating scale. Therefore, we review statistical approaches in the
literature for their applicability in the QoE domain for MOS interval
estimation (instead of having only a point estimator, which is the MOS). We
provide a conservative estimator based on the SOS hypothesis and binomial
distributions and compare its performance (CI width, outlier ratio of CI
violating the rating scale bounds) and coverage probability with well known CI
estimators. We show that the provided CI estimator works very well in practice
for MOS interval estimators, while the commonly used studentized CIs suffer
from a positive outlier ratio, i.e., CIs beyond the bounds of the rating scale.
As an alternative, bootstrapping, i.e., random sampling of the subjective
ratings with replacement, is an efficient CI estimator leading to typically
smaller CIs, but lower coverage than the proposed estimator.
| 1 | 0 | 0 | 0 | 0 | 0 |
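A minimal sketch contrasting two of the CI estimators discussed above on a small sample of 5-point ACR ratings: the studentized interval, which can violate the rating-scale bounds near the extremes, and a percentile bootstrap:

```python
import numpy as np
from scipy import stats

def mos_confidence_intervals(ratings, level=0.95, n_boot=10000, seed=0):
    """Studentized (t-distribution) CI, which may exceed the scale bounds
    [1, 5], and a percentile bootstrap CI for the MOS."""
    x = np.asarray(ratings, dtype=float)
    n, mean = len(x), x.mean()
    half = stats.t.ppf((1 + level) / 2, n - 1) * x.std(ddof=1) / np.sqrt(n)
    rng = np.random.default_rng(seed)
    boot_means = rng.choice(x, size=(n_boot, n), replace=True).mean(axis=1)
    q = (1 - level) / 2
    return (mean - half, mean + half), tuple(np.quantile(boot_means, [q, 1 - q]))

ratings = [5, 5, 4, 5, 5, 4, 5, 5]  # small lab study near the top of the scale
t_ci, boot_ci = mos_confidence_intervals(ratings)
print("t-CI:", t_ci, "bootstrap CI:", boot_ci)  # the t-CI crosses 5.0 here
```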
Atmospheric Circulation and Cloud Evolution on the Highly Eccentric Extrasolar Planet HD 80606b | Observations of the highly-eccentric (e~0.9) hot-Jupiter HD 80606b with
Spitzer have provided some of best probes of the physics at work in exoplanet
atmospheres. By observing HD 80606b during its periapse passage, atmospheric
radiative, advective, and chemical timescales can be directly measured and used
to constrain fundamental planetary properties such as rotation period, tidal
dissipation rate, and atmospheric composition (including aerosols). Here we
present three-dimensional general circulation models for HD 80606b that aim to
further explore the atmospheric physics shaping HD 80606b's observed Spitzer
phase curves. We find that our models that assume a planetary rotation period
twice that of the pseudo-synchronous rotation period best reproduce the phase
variations observed for HD 80606b near periapse passage with Spitzer.
Additionally, we find that the rapid formation/dissipation and vertical
transport of clouds in HD 80606b's atmosphere near periapse passage likely
shapes its observed phase variations. We predict that observations near
periapse passage at visible wavelengths could constrain the composition and
formation/advection timescales of the dominant cloud species in HD 80606b's
atmosphere. The time-variable forcing experienced by exoplanets on eccentric
orbits provides a unique and important window on radiative, dynamical, and
chemical processes in planetary atmospheres and an important link between
exoplanet observations and theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Removal of Batch Effects using Generative Adversarial Networks | Many biological data analysis processes like Cytometry or Next Generation
Sequencing (NGS) produce massive amounts of data which needs to be processed in
batches for down-stream analysis. Such datasets are prone to technical
variations due to difference in handling the batches possibly at different
times, by different experimenters or under other different conditions. This
adds variation to the batches coming from the same source sample. These
variations are known as Batch Effects. It is possible that these variations and
natural variations due to biology confound but such situations can be avoided
by performing experiments in a carefully planned manner. Batch effects can
hamper down-stream analysis and may also cause results to be inconclusive.
Thus, it is essential to correct for these effects. Some recent methods propose
deep learning based solution to solve this problem. We demonstrate that this
can be solved using a novel Generative Adversarial Networks (GANs) based
framework. The advantage of using this framework over other prior approaches is
that here we do not require to choose a reproducing kernel and define its
parameters.We demonstrate results of our framework on a Mass Cytometry dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Current-Phase Relation of Ferromagnetic Josephson Junction Between Triplet Superconductors | We study the Josephson effect of a $\rm{T_1 F T_2}$ junction, consisting of
spin-triplet superconductors (T), a weak ferromagnetic metal (F), and
ferromagnetic insulating interfaces. Two types of triplet order parameters
are considered; $(k_x +ik_y)\hat{z}$ and $k_x \hat{x}+k_y\hat{y}$. We compute
the current density in the ballistic limit by using the generalized
quasiclassical formalism developed to take into account the interference effect
of the multilayered ferromagnetic junction. We discuss in detail how the
current-phase relation is affected by the orientations of the d-vectors of the
superconductors and the magnetizations of the ferromagnetic tunneling barrier.
A general condition for the anomalous Josephson effect is also derived.
| 0 | 1 | 0 | 0 | 0 | 0 |
SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering | Version information plays an important role in spreadsheet understanding,
maintaining and quality improving. However, end users rarely use version
control tools to document spreadsheet version information. Thus, the
spreadsheet version information is missing, and different versions of a
spreadsheet coexist as individual and similar spreadsheets. Existing approaches
try to recover spreadsheet version information through clustering these similar
spreadsheets based on spreadsheet filenames or related email conversation.
However, the applicability and accuracy of existing clustering approaches are
limited because the necessary information (e.g., filenames and email
conversations) is usually missing. We inspected the versioned spreadsheets in
VEnron, which is extracted from the Enron Corporation. In VEnron, the different
versions of a spreadsheet are clustered into an evolution group. We observed
that the versioned spreadsheets in each evolution group exhibit certain common
features (e.g., similar table headers and worksheet names). Based on this
observation, we proposed an automatic clustering algorithm, SpreadCluster.
SpreadCluster learns the criteria of features from the versioned spreadsheets
in VEnron, and then automatically clusters spreadsheets with the similar
features into the same evolution group. We applied SpreadCluster on all
spreadsheets in the Enron corpus. The evaluation result shows that
SpreadCluster could cluster spreadsheets with higher precision and recall rate
than the filename-based approach used by VEnron. Based on the clustering result
by SpreadCluster, we further created a new versioned spreadsheet corpus
VEnron2, which is much bigger than VEnron. We also applied SpreadCluster on the
other two spreadsheet corpora FUSE and EUSES. The results show that
SpreadCluster can cluster the versioned spreadsheets in these two corpora with
high precision.
| 1 | 0 | 0 | 0 | 0 | 0 |
Field-induced coexistence of $s_{++}$ and $s_{\pm}$ superconducting states in dirty multiband superconductors | In multiband systems, such as iron-based superconductors, the superconducting
states with locking and anti-locking of the interband phase differences are
usually considered to be mutually exclusive. For example, a dirty two-band system
with interband impurity scattering undergoes a sharp crossover between the
$s_{\pm}$ state (which favors phase anti-locking) and the $s_{++}$ state (which
favors phase locking). Here we discuss how the situation can be much more
complex in the presence of an external field or superconducting currents. In an
external applied magnetic field, dirty two-band superconductors do not feature
a sharp $s_{\pm}\to s_{++}$ crossover but rather a washed-out crossover to a
finite region in the parameter space where both $s_{\pm}$ and $s_{++}$ states
can coexist, for example as a lattice or a microemulsion of inclusions of the
different states. Current-carrying regions, such as those near vortex cores,
can exhibit the $s_\pm$ state while the $s_{++}$ state is favored in the bulk.
This coexistence of both states can even be realized in the Meissner state at
domain boundaries carrying Meissner currents. We
demonstrate that there is a magnetic-field-driven crossover between the pure
$s_{\pm}$ and the $s_{++}$ states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Superregular grammars do not provide additional explanatory power but allow for a compact analysis of animal song | A pervasive belief with regard to the differences between human language and
animal vocal sequences (song) is that they belong to different classes of
computational complexity, with animal song belonging to regular languages,
whereas human language is superregular. This argument, however, lacks empirical
evidence since superregular analyses of animal song are understudied. The goal
of this paper is to perform a superregular analysis of animal song, using data
from gibbons as a case study, and demonstrate that a superregular analysis can
be effectively used with non-human data. A key finding is that a superregular
analysis does not increase explanatory power but rather allows for a more
compact analysis. For instance, fewer grammatical rules are necessary once
superregularity is allowed. This pattern is analogous to a previous
computational analysis of human language, and accordingly, the null hypothesis,
that human language and animal song are governed by the same type of
grammatical systems, cannot be rejected.
| 0 | 0 | 0 | 0 | 1 | 0 |
Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce | In this paper, we present a unified end-to-end approach to build a large
scale Visual Search and Recommendation system for e-commerce. Previous works
have targeted these problems in isolation. We believe a more effective and
elegant solution could be obtained by tackling them together. We propose a
unified Deep Convolutional Neural Network architecture, called VisNet, to learn
embeddings to capture the notion of visual similarity, across several semantic
granularities. We demonstrate the superiority of our approach for the task of
image retrieval, by comparing against the state-of-the-art on the Exact
Street2Shop dataset. We then share the design decisions and trade-offs made
while deploying the model to power Visual Recommendations across a catalog of
50M products, supporting 2K queries a second at Flipkart, India's largest
e-commerce company. The deployment of our solution has yielded a significant
business impact, as measured by the conversion rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
The effect of inhomogeneous phase on the critical temperature of smart meta-superconductor MgB2 | Improving the critical temperature ($T_C$) of MgB2, one of the key factors
limiting its application, is highly desirable. On the basis of the
meta-material structure, we prepared a smart meta-superconductor structure
consisting of MgB2 micro-particles and inhomogeneous phases by an ex situ
process. The effect of inhomogeneous phase on the TC of smart
meta-superconductor MgB2 was investigated. Results showed that the onset
temperature ($T_C^{on}$) of the doped samples was lower than that of pure MgB2.
However, the offset temperature ($T_C^{off}$) of the sample doped with Y2O3:Eu3+
nanosheets with a thickness of 2~3 nm, which is much less than the coherence
length of MgB2, is 1.2 K higher than that of pure MgB2. The effect of an
applied electric field on the $T_C$ of the samples was also studied. Results
indicated that with increasing current, $T_C^{on}$ increases slightly in the
samples doped with different inhomogeneous phases. With increasing current, the
$T_C^{off}$ of the samples doped with non-luminescent inhomogeneous phases
decreased. However, the $T_C^{off}$ of the samples doped with luminescent
inhomogeneous phases first increased and then decreased with increasing current.
| 0 | 1 | 0 | 0 | 0 | 0 |
Journal of Open Source Software (JOSS): design and first-year review | This article describes the motivation, design, and progress of the Journal of
Open Source Software (JOSS). JOSS is a free and open-access journal that
publishes articles describing research software. It has the dual goals of
improving the quality of the software submitted and providing a mechanism for
research software developers to receive credit. While designed to work within
the current merit system of science, JOSS addresses the dearth of rewards for
key contributions to science made in the form of software. JOSS publishes
articles that encapsulate scholarship contained in the software itself, and its
rigorous peer review targets the software components: functionality,
documentation, tests, continuous integration, and the license. A JOSS article
contains an abstract describing the purpose and functionality of the software,
references, and a link to the software archive. The article is the entry point
of a JOSS submission, which encompasses the full set of software artifacts.
Submission and review proceed in the open, on GitHub. Editors, reviewers, and
authors work collaboratively and openly. Unlike other journals, JOSS does not
reject articles requiring major revision; while not yet accepted, articles
remain visible and under review until the authors make adequate changes (or
withdraw, if unable to meet requirements). Once an article is accepted, JOSS
gives it a DOI, deposits its metadata in Crossref, and the article can begin
collecting citations on indexers like Google Scholar and other services.
Authors retain copyright of their JOSS article, releasing it under a Creative
Commons Attribution 4.0 International License. In its first year, starting in
May 2016, JOSS published 111 articles, with more than 40 additional articles
under review. JOSS is a sponsored project of the nonprofit organization
NumFOCUS and is an affiliate of the Open Source Initiative.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems | Reinforcement learning is a promising approach to learning robot controllers.
It has recently been shown that algorithms based on finite-difference estimates
of the policy gradient are competitive with algorithms based on the policy
gradient theorem. We propose a theoretical framework for understanding this
phenomenon. Our key insight is that many dynamical systems (especially those of
interest in robot control tasks) are \emph{nearly deterministic}---i.e., they
can be modeled as a deterministic system with a small stochastic perturbation.
We show that for such systems, finite-difference estimates of the policy
gradient can have substantially lower variance than estimates based on the
policy gradient theorem. We interpret these results in the context of
counterfactual estimation. Finally, we empirically evaluate our insights in an
experiment on the inverted pendulum.
| 1 | 0 | 0 | 1 | 0 | 0 |
NDSHA: robust and reliable seismic hazard assessment | The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method reliably and
realistically simulates the suite of earthquake ground motions that may impact
civil populations as well as their heritage buildings. The modeling technique
is developed from comprehensive physical knowledge of the seismic source
process, the propagation of earthquake waves and their combined interactions
with site effects. NDSHA effectively accounts for the tensor nature of
earthquake ground motions, formally described as the tensor product of the
earthquake source functions and the Green's functions of the pathway. NDSHA uses
all available information about the spatial distribution of large magnitude
earthquakes, including the Maximum Credible Earthquake (MCE) and geological and
geophysical data. It does not rely on scalar empirical ground motion
attenuation models, as these are often both weakly constrained by available
observations and unable to account for the tensor nature of earthquake ground
motion. Standard NDSHA provides robust and safely conservative hazard estimates
for engineering design and mitigation decision strategies without requiring
(often faulty) assumptions about the probabilistic risk analysis model of
earthquake occurrence. If specific applications may benefit from temporal
information the definition of the Gutenberg-Richter (GR) relation is performed
according to the multi-scale seismicity model and occurrence rate is associated
to each modeled source. Observations from recent destructive earthquakes in
Italy and Nepal have confirmed the validity of NDSHA approach and application,
and suggest that more widespread application of NDSHA will enhance earthquake
safety and resilience of civil populations in all earthquake-prone regions,
especially in tectonically active areas where the historic earthquake record is
too short.
| 0 | 1 | 0 | 0 | 0 | 0 |
Syllable-aware Neural Language Models: A Failure to Beat Character-aware Ones | Syllabification does not seem to improve word-level RNN language modeling
quality when compared to character-based segmentation. However, our best
syllable-aware language model, achieving performance comparable to the
competitive character-aware model, has 18%-33% fewer parameters and is trained
1.2-2.2 times faster.
| 1 | 0 | 0 | 1 | 0 | 0 |
Persistent Entropy for Separating Topological Features from Noise in Vietoris-Rips Complexes | Persistent homology studies the evolution of k-dimensional holes along a
nested sequence of simplicial complexes (called a filtration). The set of bars
(i.e. intervals) representing birth and death times of k-dimensional holes
along such a sequence is called the persistence barcode. k-Dimensional holes with
short lifetimes are informally considered to be "topological noise", and those
with long lifetimes are considered to be "topological features" associated to
the filtration. Persistent entropy is defined as the Shannon entropy of the
persistence barcode of a given filtration. In this paper we present important
new properties of the persistent entropy of Cech and Vietoris-Rips
filtrations. Among these properties, we focus on the stability theorem, which
allows persistent entropy to be used for comparing persistence barcodes. We
then derive a simple method for separating topological noise from features in
Vietoris-Rips filtrations.
| 1 | 0 | 0 | 0 | 0 | 0 |
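Persistent entropy as defined in this abstract is straightforward to compute from a barcode; a minimal sketch, assuming the bars are given as finite (birth, death) pairs:

```python
import numpy as np

def persistent_entropy(bars):
    # bars: list of (birth, death) pairs with death > birth.
    # Each bar contributes p_i = l_i / L, where l_i is its length and
    # L is the total length; the entropy is -sum_i p_i log p_i.
    lengths = np.array([d - b for b, d in bars], dtype=float)
    p = lengths / lengths.sum()
    return float(-(p * np.log(p)).sum())

# One long "feature" bar among short "noise" bars gives a low entropy:
print(persistent_entropy([(0.0, 5.0), (0.1, 0.3), (0.2, 0.35)]))
```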
Intrinsic alignment of redMaPPer clusters: cluster shape - matter density correlation | We measure the alignment of the shapes of galaxy clusters, as traced by their
satellite distributions, with the matter density field using the public
redMaPPer catalogue based on SDSS-DR8, which contains 26 111 clusters up to
z~0.6. The clusters are split into nine redshift and richness samples; in each
of them we detect a positive alignment, showing that clusters point towards
density peaks. We interpret the measurements within the tidal alignment
paradigm, allowing for a richness and redshift dependence. The intrinsic
alignment (IA) amplitude at the pivot redshift z=0.3 and pivot richness
\lambda=30 is A_{IA}^{gen}=12.6_{-1.2}^{+1.5}. We obtain tentative evidence
that the signal increases towards higher richness and lower redshift. Our
measurements agree well with results for maxBCG clusters and with
dark-matter-only simulations. Comparing our results to IA measurements of
luminous red galaxies, we find that the IA amplitude of galaxy clusters forms a
smooth extension towards higher mass. This suggests that these systems share a
common alignment mechanism, which can be exploited to improve our physical
understanding of IA.
| 0 | 1 | 0 | 0 | 0 | 0 |
A note on degenerate Stirling polynomials of the second kind | In this paper, we consider the degenerate Stirling polynomials of the second
kind, which are derived from their generating function. In addition, we give some
new identities for these polynomials.
| 0 | 0 | 1 | 0 | 0 | 0 |
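For context, one common definition from the literature (the paper's exact conventions may differ) generates the degenerate Stirling numbers of the second kind $S_{2,\lambda}(n,k)$ by

```latex
\frac{1}{k!}\Big((1+\lambda t)^{1/\lambda}-1\Big)^{k}
  = \sum_{n=k}^{\infty} S_{2,\lambda}(n,k)\,\frac{t^{n}}{n!},
```

which recovers the classical Stirling numbers of the second kind as $\lambda \to 0$, since $(1+\lambda t)^{1/\lambda} \to e^{t}$.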
Active learning machine learns to create new quantum experiments | How useful can machine learning be in a quantum laboratory? Here we raise the
question of the potential of intelligent machines in the context of scientific
research. A major motivation for the present work is the unknown reachability
of various entanglement classes in quantum experiments. We investigate this
question by using the projective simulation model, a physics-oriented approach
to artificial intelligence. In our approach, the projective simulation system
is challenged to design complex photonic quantum experiments that produce
high-dimensional entangled multiphoton states, which are of high interest in
modern quantum research. The artificial intelligence system learns to create
a variety of entangled states, and improves the efficiency of their
realization. In the process, the system autonomously (re)discovers experimental
techniques which are only now becoming standard in modern quantum optical
experiments - a trait which was not explicitly demanded from the system but
emerged through the process of learning. Such features highlight the
possibility that machines could have a significantly more creative role in
future research.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Gaia-ESO Survey: radial distribution of abundances in the Galactic disc from open clusters and young field stars | The spatial distribution of elemental abundances in the disc of our Galaxy
gives insights both into its assembly process and subsequent evolution, and
into the stellar nucleosynthesis of the different elements. Gradients can be
traced using several types of objects, for instance (young and old) stars, open
clusters, HII regions, and planetary nebulae. We aim to trace the radial
distributions of abundances of elements produced through different
nucleosynthetic channels - the alpha-elements O, Mg, Si, Ca and Ti, and the
iron-peak elements Fe, Cr, Ni and Sc - by using the Gaia-ESO iDR4 results for
open clusters and young field stars. From the UVES spectra of member stars, we
determine the average composition of clusters with ages >0.1 Gyr. We derive
statistical ages and distances of field stars. We trace the abundance gradients
using the cluster and field populations and we compare them with a
chemo-dynamical Galactic evolutionary model. The adopted
chemo-dynamical model, with the new generation of metallicity-dependent stellar
yields for massive stars, is able to reproduce the observed spatial
distributions of abundance ratios, in particular the abundance ratios of [O/Fe]
and [Mg/Fe] in the inner disc (5 kpc < RGC < 7 kpc), including their
differences, which were usually poorly explained by chemical evolution models. Often, oxygen and
magnesium are considered as equivalent in tracing alpha-element abundances and
in deducing, e.g., the formation time-scales of different Galactic stellar
populations. In addition, often [alpha/Fe] is computed combining several
alpha-elements. Our results indicate, as expected, a complex and diverse
nucleosynthesis of the various alpha-elements, in particular in the high
metallicity regimes, pointing towards a different origin of these elements and
highlighting the risk of considering them as a single class with common
features.
| 0 | 1 | 0 | 0 | 0 | 0 |
The deterioration of materials from air pollution as derived from satellite and ground based observations | Dose-Response Functions (DRFs) are widely used in estimating corrosion and/or
soiling levels of materials used in constructions and cultural monuments. These
functions quantify the effects of air pollution and environmental parameters on
different materials through ground based measurements of specific air
pollutants and climatic parameters. Here, we propose a new approach where
available satellite observations are used instead of ground-based data. Through
this approach, the use of DRFs is extended to cases and areas where in situ
measurements are not available, also introducing a new field in which
satellite data can prove very helpful. In the present work,
satellite observations made by MODIS (MODerate resolution Imaging
Spectroradiometer) on board Terra and Aqua, OMI (Ozone Monitoring Instrument)
on board Aura and AIRS (Atmospheric Infrared Sounder) on board Aqua have been
used.
| 0 | 1 | 0 | 0 | 0 | 0 |
In-situ Optical Characterization of Noble Metal Thin Film Deposition and Development of a High-performance Plasmonic Sensor | The work presented in this thesis introduces, for the first time, the
use of tilted fiber Bragg grating (TFBG) sensors for accurate, real-time, and
in-situ characterization of chemical vapour deposition (CVD) and atomic layer
deposition (ALD) processes for noble metals, with a particular focus on gold
due to its desirable optical and plasmonic properties.
Through the use of orthogonally-polarized transverse electric (TE) and
transverse magnetic (TM) resonance modes imposed by a boundary condition at the
cladding-metal interface of the optical fiber, polarization-dependent
resonances excited by the TFBG are easily decoupled. It was found that for
ultrathin CVD-grown gold films (~6-65 nm), the anisotropy of these films made
it non-trivial to characterize their effective
optical properties such as the real component of the permittivity.
Nevertheless, the TFBG introduces a new sensing platform to the ALD and CVD
community for extremely sensitive in-situ process monitoring. We later also
demonstrate thin film growth at low cycle numbers (<10) for the well-known
Al2O3 thermal ALD process, as well as the plasma-enhanced gold ALD process.
Finally, the use of ALD-grown gold coatings has been employed for the
development of a plasmonic TFBG-based sensor with ultimate refractometric
sensitivity (~550 nm/RIU).
| 0 | 1 | 0 | 0 | 0 | 0 |
Persistent Monitoring of Stochastic Spatio-temporal Phenomena with a Small Team of Robots | This paper presents a solution for persistent monitoring of real-world
stochastic phenomena, where the underlying covariance structure changes sharply
across time, using a small number of mobile robot sensors. We propose an
adaptive solution for the problem where stochastic real-world dynamics are
modeled as a Gaussian Process (GP). The belief on the underlying covariance
structure is learned from recently observed dynamics as a Gaussian Mixture (GM)
in the low-dimensional hyperparameter space of the GP and adapted across time
using Sequential Monte Carlo methods. Each robot samples a belief point from
the GM and locally optimizes a set of informative regions by greedy
maximization of the submodular entropy function. The key contributions of this
paper are threefold: adapting the belief on the covariance using Markov Chain
Monte Carlo (MCMC) sampling such that particles survive even under sharp
covariance changes across time; exploiting the belief to transform the problem
of entropy maximization into a decentralized one; and developing an
approximation algorithm to maximize entropy on a set of informative regions in
the continuous space. We illustrate the application of the proposed solution
through extensive simulations using an artificial dataset and multiple real
datasets from fixed sensor deployments, and compare it to three competing
state-of-the-art approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
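To make the greedy entropy-maximization step concrete, here is a minimal sketch (assuming an RBF kernel and a discrete candidate set; not the authors' decentralized implementation): for a GP, the entropy gain of adding a location is monotone in its posterior variance, so each step picks the candidate with the largest posterior variance given the locations already chosen.

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # squared-exponential kernel between two sets of points
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def greedy_entropy_sites(candidates, k, noise=1e-3):
    # Greedily pick k sensing locations; for a GP the entropy gain of a
    # point is 0.5*log(2*pi*e*var), so maximizing the posterior variance
    # maximizes the entropy gain at each step.
    chosen = []
    for _ in range(k):
        best_i, best_var = None, -np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            x = candidates[i:i + 1]
            if chosen:
                S = candidates[chosen]
                K = rbf(S, S) + noise * np.eye(len(chosen))
                kxS = rbf(x, S)
                var = (rbf(x, x) - kxS @ np.linalg.solve(K, kxS.T)).item()
            else:
                var = rbf(x, x).item()
            if var > best_var:
                best_i, best_var = i, var
        chosen.append(best_i)
    return chosen
```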
Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent | In this paper, we propose a novel sufficient decrease technique for variance
reduced stochastic gradient descent methods such as SAG, SVRG and SAGA. In
order to make sufficient decrease for stochastic optimization, we design a new
sufficient decrease criterion, which yields sufficient decrease versions of
variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct. We
introduce a coefficient to scale the current iterate and satisfy the sufficient
decrease property; the coefficient decides whether to shrink, expand, or move
in the opposite direction. We then give two specific update rules for the coefficient
for Lasso and ridge regression. Moreover, we analyze the convergence properties
of our algorithms for strongly convex problems, which show that both of our
algorithms attain linear convergence rates. We also provide the convergence
guarantees of our algorithms for non-strongly convex problems. Our experimental
results further verify that our algorithms achieve significantly better
performance than their counterparts.
| 1 | 0 | 1 | 1 | 0 | 0 |
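For readers unfamiliar with the base method, the sketch below shows plain SVRG for ridge regression with a scalar coefficient theta scaling the iterate; theta is only a hypothetical placeholder for the paper's sufficient-decrease update rules (with theta = 1 this reduces to vanilla SVRG).

```python
import numpy as np

def svrg_ridge(X, y, lam=0.1, eta=0.01, epochs=10, theta=1.0):
    n, d = X.shape
    w = np.zeros(d)
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i] + lam * w   # per-sample gradient
    full_grad = lambda w: X.T @ (X @ w - y) / n + lam * w      # snapshot gradient
    for _ in range(epochs):
        w_snap, mu = w.copy(), full_grad(w)
        for _ in range(n):
            i = np.random.randint(n)
            v = grad_i(w, i) - grad_i(w_snap, i) + mu   # variance-reduced gradient
            w = theta * w - eta * v                     # theta scales the iterate
    return w
```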
Unified Gas-kinetic Scheme with Multigrid Convergence for Rarefied Flow Study | The unified gas kinetic scheme (UGKS) is a direct modeling method based on
a gas dynamical model defined on the scales of the mesh size and time step. With the
implementation of particle transport and collision in a time-dependent flux
function, the UGKS can recover multiple flow physics from the kinetic particle
transport to the hydrodynamic wave propagation. In comparison with direct
simulation Monte Carlo (DSMC), the equations-based UGKS can use the implicit
techniques in the updates of macroscopic conservative variables and microscopic
distribution function. The implicit UGKS significantly increases the
convergence speed for steady flow computations, especially in the highly
rarefied and near continuum regime. In order to further improve the
computational efficiency, for the first time a geometric multigrid technique is
introduced into the implicit UGKS, where the prediction step for the
equilibrium state and the evolution step for the distribution function are both
treated with multigrid acceleration. The multigrid implicit UGKS (MIUGKS) is
used in non-equilibrium flow studies, including microflows, such as
lid-driven cavity flow and the flow past a finite-length flat plate,
and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS
shows a 5- to 9-fold efficiency increase over the previous implicit scheme. For
the low speed microflow, the efficiency of MIUGKS is several orders of
magnitude higher than the DSMC. Even for the hypersonic flow at Mach number 5
and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the
DSMC method for a converged steady-state solution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning agile and dynamic motor skills for legged robots | Legged robots pose one of the greatest challenges in robotics. Dynamic and
agile maneuvers of animals cannot be imitated by existing methods that are
crafted by humans. A compelling alternative is reinforcement learning, which
requires minimal craftsmanship and promotes the natural evolution of a control
policy. However, so far, reinforcement learning research for legged robots is
mainly limited to simulation, and only a few comparably simple examples have
been deployed on real systems. The primary reason is that training with real
robots, particularly with dynamically balancing systems, is complicated and
expensive. In the present work, we introduce a method for training a neural
network policy in simulation and transferring it to a state-of-the-art legged
system, thereby leveraging fast, automated, and cost-effective data generation
schemes. The approach is applied to the ANYmal robot, a sophisticated
medium-dog-sized quadrupedal system. Using policies trained in simulation, the
quadrupedal machine achieves locomotion skills that go beyond what had been
achieved with prior methods: ANYmal is capable of precisely and
energy-efficiently following high-level body velocity commands, running faster
than before, and recovering from falling even in complex configurations.
| 1 | 0 | 0 | 1 | 0 | 0 |
Optimal partition problems for the fractional laplacian | In this work, we prove an existence result for an optimal partition problem
of the form $$\min \{F_s(A_1,\dots,A_m)\colon A_i \in \mathcal{A}_s, \, A_i\cap
A_j =\emptyset \mbox{ for } i\neq j\},$$ where $F_s$ is a cost functional with
suitable assumptions of monotonicity and lower semicontinuity, $\mathcal{A}_s$
is the class of admissible domains and the condition $A_i\cap A_j =\emptyset$
is understood in the sense of the Gagliardo $s$-capacity, where $0<s<1$.
Examples of this type of problem are related to fractional eigenvalues. In
addition, we prove a form of convergence of the $s$-minimizers to the
minimizer of the problem with $s=1$, studied in \cite{Bucur-Buttazzo-Henrot}.
| 0 | 0 | 1 | 0 | 0 | 0 |
Entire holomorphic curves into projective spaces intersecting a generic hypersurface of high degree | In this note, we establish the following Second Main Theorem type estimate
for every entire non-algebraically degenerate holomorphic curve
$f\colon\mathbb{C}\rightarrow\mathbb{P}^n(\mathbb{C})$, in the presence of a
{\sl generic} hypersurface $D\subset\mathbb{P}^n(\mathbb{C})$ of sufficiently high
degree $d\geq 15(5n+1)n^n$: \[ T_f(r) \leq \,N_f^{[1]}(r,D) + O\big(\log T_f(r)
+ \log r \big)\parallel, \] where $T_f(r)$ and $N_f^{[1]}(r,D)$ stand for the
order function and the $1$-truncated counting function in Nevanlinna theory.
This inequality quantifies recent results on the logarithmic Green--Griffiths
conjecture.
| 0 | 0 | 1 | 0 | 0 | 0 |
New results on sum-product type growth over fields | We prove a range of new sum-product type growth estimates over a general
field $\mathbb{F}$, in particular the special case $\mathbb{F}=\mathbb{F}_p$.
They are unified by the theme of "breaking the $3/2$ threshold", epitomising
the previous state of the art. These estimates stem from specially suited
applications of incidence bounds over $\mathbb{F}$, which apply to higher
moments of representation functions.
We establish the estimate $|R[A]| \gtrsim |A|^{8/5}$ for the cardinality of the
set $R[A]$ of distinct cross-ratios defined by triples of elements of a
(sufficiently small if $\mathbb{F}$ has positive characteristic, similarly for
the rest of the estimates) set $A\subset \mathbb{F}$, pinned at infinity. The
cross-ratio naturally arises in various sum-product type questions of
projective nature and is the unifying concept underlying most of our results.
It enables one to take advantage of its symmetry properties as an onset of
growth of, for instance, products of difference sets. The geometric nature of
the cross-ratio enables us to break the version of the above threshold for the
minimum number of distinct triangle areas $Ouu'$, defined by points $u,u'$ of a
non-collinear point set $P\subset \mathbb{F}^2$.
Another instance of breaking the threshold is showing that if $A$ is
sufficiently small and has additive doubling constant $M$, then $|AA|\gtrsim
M^{-2}|A|^{14/9}$. This result has a second moment version, which allows for
new upper bounds for the number of collinear point triples in the set $A\times
A\subset \mathbb{F}^2$, a quantity often arising in applications of geometric
incidence estimates.
| 0 | 0 | 1 | 0 | 0 | 0 |
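For reference, the cross-ratio of four distinct elements $a,b,c,d$ of a field is the standard projective invariant

```latex
(a,b;c,d) = \frac{(a-c)(b-d)}{(a-d)(b-c)},
\qquad
(a,b;c,\infty) = \frac{a-c}{b-c},
```

where the second expression is the "pinned at infinity" specialization obtained by letting $d \to \infty$.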
Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters | Computing the inverse covariance matrix (or precision matrix) of large data
vectors is crucial in weak lensing (and multi-probe) analyses of the large
scale structure of the universe. Analytically computed covariances are
noise-free and hence straightforward to invert; however, the model
approximations might be insufficient for the statistical precision of future
cosmological data. Estimating covariances from numerical simulations improves
on these approximations, but the sample covariance estimator is inherently
noisy, which introduces uncertainties in the error bars on cosmological
parameters and also additional scatter in their best fit values. For future
surveys, reducing both effects to an acceptable level requires an unfeasibly
large number of simulations.
In this paper we describe a way to expand the true precision matrix around a
covariance model and show how to estimate the leading order terms of this
expansion from simulations. This is especially powerful if the covariance
matrix is the sum of two contributions, $\smash{\mathbf{C} =
\mathbf{A}+\mathbf{B}}$, where $\smash{\mathbf{A}}$ is well understood
analytically and can be turned off in simulations (e.g. shape-noise for cosmic
shear) to yield a direct estimate of $\smash{\mathbf{B}}$. We test our method
in mock experiments resembling tomographic weak lensing data vectors from the
Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES
we find that $400$ N-body simulations are sufficient to achieve negligible
statistical uncertainties on parameter constraints. For LSST this is achieved
with $2400$ simulations. The standard covariance estimator would require
>$10^5$ simulations to reach a similar precision. We extend our analysis to a
DES multi-probe case, finding similar performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
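Schematically (in our notation, not necessarily the paper's), expanding the true precision matrix around a model covariance $\mathbf{M}$ with $\mathbf{C} = \mathbf{M} + \boldsymbol{\Delta}$ gives the Neumann series

```latex
\mathbf{C}^{-1}
= \mathbf{M}^{-1}
- \mathbf{M}^{-1}\boldsymbol{\Delta}\mathbf{M}^{-1}
+ \mathbf{M}^{-1}\boldsymbol{\Delta}\mathbf{M}^{-1}\boldsymbol{\Delta}\mathbf{M}^{-1}
- \dots,
```

so only the (small) correction $\boldsymbol{\Delta}$ has to be estimated from simulations, term by term, rather than the full covariance.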
Structural and electronic properties of germanene on MoS$_2$ | To date, germanene has only been synthesized on metallic substrates. A
metallic substrate is usually detrimental for the two-dimensional Dirac nature
of germanene because the important electronic states near the Fermi level of
germanene can hybridize with the electronic states of the metallic substrate.
Here we report the successful synthesis of germanene on molybdenum disulfide
(MoS$_2$), a band gap material. Pre-existing defects in the MoS$_2$ surface act
as preferential nucleation sites for the germanene islands. The lattice
constant of the germanene layer (3.8 $\pm$ 0.2 \AA) is about 20\% larger than
the lattice constant of the MoS$_2$ substrate (3.16 \AA). Scanning tunneling
spectroscopy measurements and density functional theory calculations reveal
that there are, besides the linearly dispersing bands at the $K$ points, two
parabolic bands that cross the Fermi level at the $\Gamma$ point.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fractional quiver W-algebras | We introduce quiver gauge theory associated with fractional quivers of
non-simply-laced type, and define fractional quiver W-algebras by using the
construction of arXiv:1512.08533 and arXiv:1608.04651 with representations of
fractional quivers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Vortex creep at very low temperatures in single crystals of the extreme type-II superconductor Rh$_9$In$_4$S$_4$ | We image vortex creep at very low temperatures using Scanning Tunneling
Microscopy (STM) in the superconductor Rh$_9$In$_4$S$_4$ ($T_c$=2.25 K). We
measure the superconducting gap of Rh$_9$In$_4$S$_4$, finding $\Delta\approx
0.33$ meV and image a hexagonal vortex lattice up to fields close to H$_{c2}$,
observing slow vortex creep at temperatures as low as 150 mK. We estimate
thermal and quantum barriers for vortex motion and show that thermal
fluctuations likely cause vortex creep, in spite of being at temperatures
$T/T_c<0.1$. We study creeping vortex lattices by acquiring images over long
times and show that the vortex lattice remains hexagonal during creep, with
vortices moving along one of the high-symmetry axes of the vortex lattice.
Furthermore, the creep velocity changes with the scanning window suggesting
that creep depends on the local arrangements of pinning centers. Vortices
fluctuate on small-scale erratic paths, indicating that the vortex lattice
makes jumps, trying different arrangements, as it travels along the main
creep direction. The images provide a visual account of how vortex lattice
motion maintains hexagonal order, while showing dynamic properties
characteristic of a glass.
| 0 | 1 | 0 | 0 | 0 | 0 |
Intention Games | Strategic interactions between competitive entities are generally considered
from the perspective of complete revelation of the benefits achieved from those
interactions, in the form of public payoff functions in the announced games. In
this work, we propose a formal framework for a competitive ecosystem where each
player is permitted to deviate from publicly optimal strategies under certain
private payoffs greater than public payoffs, provided that these deviations
stay within acceptable bounds agreed by all players. We call this game-theoretic
construction an Intention Game. We formally define an Intention Game, and
notions of equilibria that exist in such deviant interactions. We give an
example of a Cournot competition in a partially honest setting. We compare
Intention Games with conventional strategic form games. Finally, we give a
cryptographic use of Intention Games and a dual interpretation of this novel
framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT | We investigate pivot-based translation between related languages in a low
resource, phrase-based SMT setting. We show that a subword-level pivot-based
SMT model using a related pivot language is substantially better than word and
morpheme-level pivot models. It is also highly competitive with the best direct
translation model, which is encouraging as no direct source-target training
corpus is used. We also show that combining multiple related language pivot
models can rival a direct translation model. Thus, the use of subwords as
translation units coupled with multiple related pivot languages can compensate
for the lack of a direct parallel corpus.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multiband Electronic Structure of Magnetic Quantum Dots: Numerical Studies | Semiconductor quantum dots (QDs) doped with magnetic impurities have been a
focus of continuous research for a couple of decades. A significant effort has
been devoted to studies of magnetic polarons (MP) in these nanostructures.
These collective states arise through exchange interaction between a carrier
confined in a QD and the localized spins of the magnetic impurities (typically
Mn). We discuss our theoretical description of various MP properties in
self-assembled QDs. We present a self-consistent, temperature-dependent
approach to MPs formed by a valence band hole. We use the Luttinger-Kohn k.p
Hamiltonian to account for the important effects of spin-orbit interaction.
| 0 | 1 | 0 | 0 | 0 | 0 |
Advances in Detection and Error Correction for Coherent Optical Communications: Regular, Irregular, and Spatially Coupled LDPC Code Designs | In this chapter, we show how the use of differential coding and the presence
of phase slips in the transmission channel affect the total achievable
information rates and capacity of a system. By means of the commonly used QPSK
modulation, we show that the use of differential coding does not decrease the
total amount of reliably conveyable information over the channel. It is a
common misconception that the use of differential coding introduces an
unavoidable differential loss. This perceived differential loss is rather a
consequence of simplified differential detection and decoding at the receiver.
Afterwards, we show how capacity-approaching coding schemes based on LDPC and
spatially coupled LDPC codes can be constructed by combining iterative
demodulation and decoding. For this, we first show how to modify the
differential decoder to account for phase slips and then how to use this
modified differential decoder to construct good LDPC codes. This construction
method can serve as a blueprint to construct good and practical LDPC codes for
other applications with iterative detection, such as higher order modulation
formats with non-square constellations, multi-dimensional optimized modulation
formats, turbo equalization to mitigate ISI (e.g., due to nonlinearities) and
many more. Finally, we introduce the class of spatially coupled (SC)-LDPC
codes, which are a generalization of LDPC codes with some outstanding
properties and which can be decoded with a very simple windowed decoder. We
show that the universal behavior of spatially coupled codes makes them an ideal
candidate for iterative differential demodulation/detection and decoding.
| 1 | 0 | 0 | 0 | 0 | 0 |
Risk-Averse Classification | We develop a new approach to solving classification problems, which is based
on the theory of coherent measures of risk and risk-sharing ideas. The proposed
approach aims at designing a risk-averse classifier. The new approach allows
for associating a distinct risk functional with each class. The risk may be
measured by different (non-linear in probability) measures.
We analyze the structure of the new classifier design problem and establish
its theoretical relation to known risk-neutral design problems. In particular,
we show that the risk-sharing classification problem is equivalent to an
implicitly defined optimization problem with unequal, implicitly defined but
unknown, weights for each data point. We implement our methodology in a binary
classification scenario on several different data sets and carry out numerical
comparison with classifiers which are obtained using the Huber loss function
and other loss functions known in the literature. We formulate specific
risk-averse support vector machines in order to demonstrate the viability of
our method.
| 0 | 0 | 0 | 1 | 0 | 0 |
Scattered Sentences have Few Separable Randomizations | In the paper "Randomizations of Scattered Sentences", Keisler showed that if
Martin's axiom for aleph one holds, then every scattered sentence has few
separable randomizations, and asked whether the conclusion could be proved in
ZFC alone. We show here that the answer is "yes". It follows that the absolute
Vaught conjecture holds if and only if every $L_{\omega_1\omega}$-sentence with
few separable randomizations has countably many countable models.
| 0 | 0 | 1 | 0 | 0 | 0 |
Zero divisor and unit elements with support of size 4 in group algebras of torsion free groups | The Kaplansky Zero Divisor Conjecture states that if $G $ is a torsion free group
and $ \mathbb{F} $ is a field, then the group ring $\mathbb{F}[G]$ contains no
zero divisors, and the Kaplansky Unit Conjecture states that if $G $ is a torsion
free group and $ \mathbb{F} $ is a field, then $\mathbb{F}[G]$ contains no
non-trivial units. The support of an element $ \alpha= \sum_{x\in G}\alpha_xx$
in $\mathbb{F}[G] $, denoted by $supp(\alpha)$, is the set $ \{x \in
G|\alpha_x\neq 0\} $. In this paper we study possible zero divisors and units
with supports of size $ 4 $ in $\mathbb{F}[G]$. We prove that if
$ \alpha, \beta $ are non-zero elements in $ \mathbb{F}[G] $ for a possible
torsion free group $ G $ and an arbitrary field $ \mathbb{F} $ such that $
|supp(\alpha)|=4 $ and $ \alpha\beta=0 $, then $|supp(\beta)|\geq 7 $. In [J.
Group Theory, $16$ $ (2013),$ no. $5$, $667$-$693$], it is proved that if $
\mathbb{F}=\mathbb{F}_2 $ is the field with two elements, $ G $ is a torsion
free group and $ \alpha,\beta \in \mathbb{F}_2[G]\setminus \{0\}$ such that
$|supp(\alpha)|=4 $ and $ \alpha\beta =0 $, then $|supp(\beta)|\geq 8$. We
improve the latter result to $|supp(\beta)|\geq 9$. Also, concerning the Unit
Conjecture, we prove that if $\mathsf{a}\mathsf{b}=1$ for some
$\mathsf{a},\mathsf{b}\in \mathbb{F}[G]$ and $|supp(\mathsf{a})|=4$, then
$|supp(\mathsf{b})|\geq 6$.
| 0 | 0 | 1 | 0 | 0 | 0 |
G2 instantons and the Seiberg-Witten monopoles | I describe a relation (mostly conjectural) between the Seiberg-Witten
monopoles, Fueter sections, and G2 instantons. In the last part of this article
I gather some open questions connected with this relation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multimodal Word Distributions | Word embeddings provide point representations of words containing useful
semantic information. We introduce multimodal word distributions formed from
Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty
information. To learn these distributions, we propose an energy-based
max-margin objective. We show that the resulting approach captures uniquely
expressive semantic information, and outperforms alternatives, such as word2vec
skip-grams, and Gaussian embeddings, on benchmark datasets such as word
similarity and entailment.
| 1 | 0 | 0 | 1 | 0 | 0 |
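A natural similarity between two such Gaussian-mixture representations is the (log) expected likelihood kernel, which has a closed form for diagonal Gaussians; a generic sketch of that computation (not necessarily the paper's exact energy function):

```python
import numpy as np

def gauss_overlap(m1, v1, m2, v2):
    # closed form: int N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2)
    v = v1 + v2
    return np.exp(-0.5 * np.sum((m1 - m2) ** 2 / v)
                  - 0.5 * np.sum(np.log(2 * np.pi * v)))

def log_mixture_similarity(w1s, m1s, v1s, w2s, m2s, v2s):
    # mixtures given by weights w, component means m, diagonal variances v
    s = sum(w1 * w2 * gauss_overlap(m1, v1, m2, v2)
            for w1, m1, v1 in zip(w1s, m1s, v1s)
            for w2, m2, v2 in zip(w2s, m2s, v2s))
    return np.log(s)
```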
Efficient Policy Learning | In many areas, practitioners seek to use observational data to learn a
treatment assignment policy that satisfies application-specific constraints,
such as budget, fairness, simplicity, or other functional form constraints. For
example, policies may be restricted to take the form of decision trees based on
a limited set of easily observable individual characteristics. We propose a new
approach to this problem motivated by the theory of semiparametrically
efficient estimation. Our method can be used to optimize either binary
treatments or infinitesimal nudges to continuous treatments, and can leverage
observational data where causal effects are identified using a variety of
strategies, including selection on observables and instrumental variables.
Given a doubly robust estimator of the causal effect of assigning everyone to
treatment, we develop an algorithm for choosing whom to treat, and establish
strong guarantees for the asymptotic utilitarian regret of the resulting
policy.
| 0 | 0 | 1 | 1 | 0 | 0 |
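To make the doubly robust ingredient concrete, here is a minimal sketch of the standard augmented inverse-propensity (AIPW) scores on which such policy learners operate (a generic rendition under assumed nuisance estimates, not the authors' full algorithm):

```python
import numpy as np

def doubly_robust_scores(Y, W, e_hat, mu0_hat, mu1_hat):
    # Y: outcomes; W: binary treatment indicators;
    # e_hat: estimated propensities P(W=1|X);
    # mu0_hat, mu1_hat: outcome-model predictions under control / treatment.
    resid = Y - np.where(W == 1, mu1_hat, mu0_hat)
    return (mu1_hat - mu0_hat
            + (W / e_hat - (1 - W) / (1 - e_hat)) * resid)

# A constrained policy would then be chosen to maximize the sum of these
# scores over the treated set, subject to the application's constraints.
```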
Optimization by a quantum reinforcement algorithm | A reinforcement algorithm solves a classical optimization problem by
introducing feedback to the system which slowly changes the energy landscape
and drives the algorithm to an optimal solution in the configuration space.
Here, we use this strategy to preferentially concentrate (localize) the wave
function of a quantum particle, which explores the configuration space of the
problem, on an optimal configuration. We examine the method by solving
numerically the equations governing the evolution of the system, which are
similar to the nonlinear Schrödinger equations, for small problem sizes. In
particular, we observe that reinforcement increases the minimal energy gap of
the system in a quantum annealing algorithm. Our numerical simulations and the
latter observation show that this kind of quantum feedback might be helpful in
solving a computationally hard optimization problem by a quantum reinforcement
algorithm.
| 1 | 1 | 0 | 0 | 0 | 0 |
Fast Image Processing with Fully-Convolutional Networks | We present an approach to accelerating a wide variety of image processing
operators. Our approach uses a fully-convolutional network that is trained on
input-output pairs that demonstrate the operator's action. After training, the
original operator need not be run at all. The trained network operates at full
resolution and runs in constant time. We investigate the effect of network
architecture on approximation accuracy, runtime, and memory footprint, and
identify a specific architecture that balances these considerations. We
evaluate the presented approach on ten advanced image processing operators,
including multiple variational models, multiscale tone and detail manipulation,
photographic style transfer, nonlocal dehazing, and nonphotorealistic
stylization. All operators are approximated by the same model. Experiments
demonstrate that the presented approach is significantly more accurate than
prior approximation schemes. It increases approximation accuracy as measured by
PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from
27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to
the most accurate prior approximation scheme, while being the fastest. We show
that our models generalize across datasets and across resolutions, and
investigate a number of extensions of the presented approach. The results are
shown in the supplementary video at this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Revealing patterns in HIV viral load data and classifying patients via a novel machine learning cluster summarization method | HIV RNA viral load (VL) is an important outcome variable in studies of HIV
infected persons. There exist only a handful of methods that classify
patients by viral load patterns. Most methods place limits on the use of viral
load measurements, are often specific to a particular study design, and do not
account for complex, temporal variation. To address this issue, we propose a
set of four unambiguous computable characteristics (features) of time-varying
HIV viral load patterns, along with a novel centroid-based classification
algorithm, which we use to classify a population of 1,576 HIV positive clinic
patients into one of five different viral load patterns (clusters) often found
in the literature: durably suppressed viral load (DSVL), sustained low viral
load (SLVL), sustained high viral load (SHVL), high viral load suppression
(HVLS), and rebounding viral load (RVL). The centroid algorithm summarizes
these clusters in terms of their centroids and radii. We show that this allows
new viral load patterns to be assigned pattern membership based on the distance
from the centroid relative to its radius, which we term radial normalization
classification. This method has the benefit of providing an objective and
quantitative method to assign viral load pattern membership with a concise and
interpretable model that aids clinical decision making. This method also
facilitates meta-analyses by providing computably distinct HIV categories.
Finally, we propose that this novel centroid algorithm could also be useful in
the areas of cluster comparison for outcomes research and data reduction in
machine learning.
| 0 | 0 | 0 | 1 | 1 | 0 |
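The radial normalization classification step described above is simple to state in code; a minimal sketch (the feature extraction and the learned centroids and radii are assumed given):

```python
import numpy as np

def radial_normalization_classify(x, centroids, radii):
    # x: feature vector of a new viral-load pattern;
    # centroids: (k, d) cluster centroids; radii: (k,) cluster radii.
    d = np.linalg.norm(centroids - x, axis=1) / radii   # radius-normalized distances
    i = int(np.argmin(d))
    return i, float(d[i])   # assigned cluster and its normalized distance
```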
Superintegrable relativistic systems in spacetime-dependent background fields | We consider a relativistic charged particle in background electromagnetic
fields depending on both space and time. We identify which symmetries of the
fields automatically generate integrals (conserved quantities) of the charge
motion, accounting fully for relativistic and gauge invariance. Using this we
present new examples of superintegrable relativistic systems. This includes
examples where the integrals of motion are quadratic or nonpolynomial in the
canonical momenta.
| 0 | 1 | 1 | 0 | 0 | 0 |
Distributed Stochastic Model Predictive Control for Large-Scale Linear Systems with Private and Common Uncertainty Sources | This paper presents a distributed stochastic model predictive control (SMPC)
approach for large-scale linear systems with private and common uncertainties
in a plug-and-play framework. Using the so-called scenario approach, the
centralized SMPC involves formulating a large-scale finite-horizon scenario
optimization problem at each sampling time, which is in general computationally
demanding, due to the large number of required scenarios. We present two novel
ideas in this paper to address this issue. We first develop a technique to
decompose the large-scale scenario program into distributed scenario programs
that exchange a certain number of scenarios with each other in order to compute
local decisions using the alternating direction method of multipliers (ADMM).
We show the exactness of the decomposition with a priori probabilistic
guarantees for the desired level of constraint fulfillment for both uncertainty
sources. As our second contribution, we develop an inter-agent soft
communication scheme based on a set parametrization technique together with the
notion of probabilistically reliable sets to reduce the required communication
between the subproblems. We show how to incorporate the probabilistic
reliability notion into existing results and provide new guarantees for the
desired level of constraint violations. Two different simulation studies of two
types of system interactions, dynamic coupling and coupling constraints,
are presented to illustrate the advantages of the proposed framework.
| 1 | 0 | 1 | 0 | 0 | 0 |
Formalization of Transform Methods using HOL Light | Transform methods, like Laplace and Fourier, are frequently used for
analyzing the dynamical behaviour of engineering and physical systems, based on
their transfer functions, frequency responses, or the solutions of their
corresponding differential equations. In this paper, we present an ongoing
project, which focuses on the higher-order logic formalization of transform
methods using HOL Light theorem prover. In particular, we present the
motivation of the formalization, which is followed by the related work. Next,
we present the tasks completed so far while highlighting some of the challenges
faced during the formalization. Finally, we present a roadmap to achieve our
objectives, the current status and the future goals for this project.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Velocity of the Decoding Wave for Spatially Coupled Codes on BMS Channels | We consider the dynamics of belief propagation decoding of spatially coupled
Low-Density Parity-Check codes. It has been conjectured that after a short
transient phase, the profile of "error probabilities" along the spatial
direction of a spatially coupled code develops a uniquely-shaped wave-like
solution that propagates with constant velocity v. Under this assumption, and
for transmission over general Binary Memoryless Symmetric channels, we derive a
formula for v. We also propose approximations that are simpler to compute and
support our findings using numerical data.
| 1 | 0 | 1 | 0 | 0 | 0 |
In Search of an Entity Resolution OASIS: Optimal Asymptotic Sequential Importance Sampling | Entity resolution (ER) presents unique challenges for evaluation methodology.
While crowdsourcing platforms acquire ground truth, sound approaches to
sampling must drive labelling efforts. In ER, extreme class imbalance between
matching and non-matching records can lead to enormous labelling requirements
when seeking statistically consistent estimates for rigorous evaluation. This
paper addresses this important challenge with the OASIS algorithm: a sampler
and F-measure estimator for ER evaluation. OASIS draws samples from a (biased)
instrumental distribution, chosen to ensure estimators with optimal asymptotic
variance. As new labels are collected OASIS updates this instrumental
distribution via a Bayesian latent variable model of the annotator oracle, to
quickly focus on unlabelled items providing more information. We prove that
the resulting estimates of F-measure, precision, and recall converge to the true
population values. Thorough comparisons of sampling methods on a variety of ER
datasets demonstrate significant labelling reductions of up to 83% without loss
of estimation accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
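The core estimation idea, importance-weighting labels drawn from a biased instrumental distribution to estimate F-measure, can be sketched as follows (a simplified static version; OASIS itself adapts the distribution as labels arrive):

```python
import numpy as np

def f_measure_importance_sampling(sampled_idx, labels, preds, q):
    # sampled_idx: indices drawn i.i.d. from distribution q over N items;
    # labels: oracle labels for those items; preds: (N,) binary predictions.
    N = len(preds)
    w = 1.0 / (N * q[sampled_idx])          # importance weights vs. uniform
    y = np.asarray(labels, dtype=float)
    yhat = preds[sampled_idx].astype(float)
    tp = np.mean(w * y * yhat)              # weighted true positives
    fp = np.mean(w * (1 - y) * yhat)
    fn = np.mean(w * y * (1 - yhat))
    return 2 * tp / (2 * tp + fp + fn)      # F-measure estimate
```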
Evidence of chaotic modes in the analysis of four delta Scuti stars | Since CoRoT observations unveiled the very low amplitude modes that form a
flat plateau in the power spectrum structure of delta Scuti stars, the nature
of this phenomenon, including the possibility of spurious signals due to the
light curve analysis, has been a matter of long-standing scientific debate. We
contribute to this debate by finding the structural parameters of a sample of
four delta Scuti stars, CID 546, CID 3619, CID 8669, and KIC 5892969, and
looking for a possible relation between these stars' structural parameters and
their power spectrum structure. For the purposes of characterization, we
developed a method of studying and analysing the power spectrum with high
precision and have applied it to both CoRoT and Kepler light curves. We obtain
the best estimates to date of these stars' structural parameters. Moreover, we
observe that the power spectrum structure depends on the inclination,
oblateness, and convective efficiency of each star. Our results suggest that
the power spectrum structure is real and is possibly formed by 2-period island
modes and chaotic modes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analytical Representations of Divisors of Integers | Certain analytical expressions which "feel" the divisors of natural numbers
are investigated. We show that these expressions encode to some extent the
well-known algorithm of the sieve of Eratosthenes. Most of the text is
written in a pedagogical style; however, some formulas are new.
| 0 | 0 | 1 | 0 | 0 | 0 |
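For reference, the sieve of Eratosthenes mentioned in the abstract, in a few lines:

```python
def sieve(n):
    # return all primes up to n by crossing out multiples
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```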
Semiparametrical Gaussian Processes Learning of Forward Dynamical Models for Navigating in a Circular Maze | This paper presents the problem of learning a model for navigating a ball
to a goal state in a circular maze environment with two
degrees of freedom. The motion of the ball in the maze environment is
influenced by several non-linear effects such as dry friction and contacts,
which are difficult to model physically. We propose a semiparametric model to
estimate the motion dynamics of the ball based on Gaussian Process Regression
equipped with basis functions obtained from physics first principles. The
accuracy of this semiparametric model is shown not only in estimation but also
in prediction n steps ahead, and it is compared with standard algorithms for
model learning. The learned model is then used in a trajectory optimization
algorithm to compute ball trajectories. We propose the system presented in the
paper as a benchmark problem for reinforcement and robot learning, for its
interesting and challenging dynamics and its relative ease of reproducibility.
| 1 | 0 | 0 | 1 | 0 | 0 |
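A minimal sketch of the semiparametric idea described above: physics-based basis functions capture the parametric mean, and a GP with an RBF kernel models the residual dynamics (the basis phi is a hypothetical placeholder for first-principles features; this is not the authors' implementation):

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def fit_semiparametric(X, y, phi, noise=1e-2):
    Phi = phi(X)                                   # physics-based features
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # parametric least-squares fit
    r = y - Phi @ w                                # residuals left for the GP
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, r)
    def predict(Xs):
        return phi(Xs) @ w + rbf(Xs, X) @ alpha    # parametric mean + GP correction
    return predict
```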
Vestigial nematic order and superconductivity in the doped topological insulator Cu$_{x}$Bi$_{2}$Se$_{3}$ | If the topological insulator Bi$_{2}$Se$_{3}$ is doped with electrons,
superconductivity with $T_{\rm c}\approx3-4\:{\rm K}$ emerges for a low
density of carriers ($n\approx10^{20}{\rm cm}^{-3}$) and with a small ratio of
the superconducting coherence length and Fermi wave length:
$\xi/\lambda_{F}\approx2\cdots4$. These values make fluctuations of the
superconducting order parameter increasingly important, to the extend that the
$T_{c}$-value is surprisingly large. Strong spin-orbit interaction led to the
proposal of an odd-parity pairing state. This begs the question of the nature
of the transition in an unconventional superconductor with strong pairing
fluctuations. We show that for a multi-component order parameter, these
fluctuations give rise to a nematic phase at $T_{\rm nem}>T_{c}$. Below
$T_{c}$, several experiments have demonstrated a rotational symmetry breaking in which
the Cooper pair wave function is locked to the lattice. Our theory shows that
this rotational symmetry breaking, as a vestige of the superconducting state,
already occurs above $T_{c}$. The nematic phase is characterized by vanishing
off-diagonal long range order, yet with anisotropic superconducting
fluctuations. It can be identified through direction-dependent
para-conductivity, lattice softening, and an enhanced Raman response in the
$E_{g}$ symmetry channel. In addition, nematic order partially avoids the usual
fluctuation suppression of $T_{c}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the semi-continuity problem of normalized volumes of singularities | We show that in any $\mathbb{Q}$-Gorenstein flat family of klt singularities,
normalized volumes can only jump down at countably many subvarieties. A quick
consequence is that smooth points have the largest normalized volume among all
klt singularities. Using an alternative characterization of K-semistability
developed by Li, Xu and the author, we show that K-semistability is a very
generic or empty condition in any $\mathbb{Q}$-Gorenstein flat family of log
Fano pairs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Non-Generic Unramified Representations in Metaplectic Covering Groups | Let $G^{(r)}$ denote the metaplectic covering group of the linear algebraic
group $G$. In this paper we study conditions under which unramified
representations of the group $G^{(r)}$ have no nonzero Whittaker function. We state a
general Conjecture about the possible unramified characters $\chi$ such that
the unramified sub-representation of
$Ind_{B^{(r)}}^{G^{(r)}}\chi\delta_B^{1/2}$ will have no nonzero Whittaker
function. We prove this Conjecture for the groups $GL_n^{(r)}$ with $r\ge n-1$,
and for the exceptional groups $G_2^{(r)}$ when $r\ne 2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Generalization of Convolutional Neural Networks to Graph-Structured Data | This paper introduces a generalization of Convolutional Neural Networks
(CNNs) from low-dimensional grid data, such as images, to graph-structured
data. We propose a novel spatial convolution utilizing a random walk to uncover
the relations within the input, analogous to the way the standard convolution
uses the spatial neighborhood of a pixel on the grid. The convolution has an
intuitive interpretation, is efficient and scalable and can also be used on
data with varying graph structure. Furthermore, this generalization can be
applied to many standard regression or classification problems, by learning
the underlying graph. We empirically demonstrate the performance of the
proposed CNN on MNIST, and challenge the state of the art on the Merck molecular
activity data set.
| 1 | 0 | 0 | 1 | 0 | 0 |
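A generic rendition of the random-walk neighborhood aggregation underlying such a convolution (not the paper's exact layer): powers of the transition matrix P = D^{-1}A gather features from k-step random-walk neighborhoods, which a standard dense layer can then mix.

```python
import numpy as np

def random_walk_features(A, X, k=3):
    # A: (n, n) adjacency matrix; X: (n, d) node features.
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    feats, Pk = [X], np.eye(len(A))
    for _ in range(k):
        Pk = Pk @ P                           # distribution after one more step
        feats.append(Pk @ X)                  # features aggregated at this scale
    return np.concatenate(feats, axis=1)      # (n, d*(k+1)) stacked features
```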
Implications of right-handed neutrinos in $B-L$ extended standard model with scalar dark matter | We investigate the Standard Model (SM) with a $U(1)_{B-L}$ gauge extension
where a $B-L$ charged scalar is a viable dark matter (DM) candidate. The
dominant annihilation process for the DM particle is through the $B-L$
symmetry-breaking scalar into a right-handed neutrino pair. We exploit the
effect of the decay and inverse decay of the right-handed neutrino on the
thermal relic abundance of the DM. Depending on the values of the decay rate, the DM relic
density can be significantly different from what is obtained in the standard
calculation, which assumes the right-handed neutrino is in thermal equilibrium,
and different regions of the parameter space satisfy the observed
DM relic density. For a DM mass less than $\mathcal{O}$(TeV), the direct
detection experiments impose a bound on the mass of the $U(1)_{B-L}$ gauge
boson $Z^\prime$ that is competitive with collider experiments. Utilizing
the non-observation of displaced vertices arising from right-handed
neutrino decays, a bound on the mass of $Z^\prime$ has been obtained at present
and higher luminosities at the LHC with 14 TeV centre-of-mass energy, where an
integrated luminosity of 100 fb$^{-1}$ is sufficient to probe $m_{Z'} \sim 5.5$
TeV.
| 0 | 1 | 0 | 0 | 0 | 0 |