title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Optimization and Testing in Linear Non-Gaussian Component Analysis | Independent component analysis (ICA) decomposes multivariate data into
mutually independent components (ICs). The ICA model is subject to a constraint
that at most one of these components is Gaussian, which is required for model
identifiability. Linear non-Gaussian component analysis (LNGCA) generalizes the
ICA model to a linear latent factor model with any number of both non-Gaussian
components (signals) and Gaussian components (noise), where observations are
linear combinations of independent components. Although the individual Gaussian
components are not identifiable, the Gaussian subspace is identifiable. We
introduce an estimator along with its optimization approach in which
non-Gaussian and Gaussian components are estimated simultaneously, maximizing
the discrepancy of each non-Gaussian component from Gaussianity while
minimizing the discrepancy of each Gaussian component from Gaussianity. When
the number of non-Gaussian components is unknown, we develop a statistical test
to determine it based on resampling and the discrepancy of estimated
components. Through a variety of simulation studies, we demonstrate the
improvements of our estimator over competing estimators, and we illustrate the
effectiveness of the test to determine the number of non-Gaussian components.
Further, we apply our method to real data examples and demonstrate its
practical value.
| 0 | 0 | 1 | 1 | 0 | 0 |
New Horizons Ring Collision Hazard: Constraints from Earth-based Observations | The New Horizons spacecraft's nominal trajectory crosses the planet's
satellite plane at $\sim 10,000\ \rm{km}$ from the barycenter, between the
orbits of Pluto and Charon. I have investigated the risk to the spacecraft
based on observational limits of rings and dust within this region, assuming
various particle size distributions. The best limits are placed by 2011 and
2012 HST observations, which significantly improve on the limits from stellar
occultations, although they do not go as close to the planet. From the HST data
and assuming a `reasonable worst case' for the size distribution, we place a
limit of $N < 20$ damaging impacts by grains of radius $> 0.2\ \textrm{mm}$
onto the spacecraft during the encounter. The number of hits is $\approx$
200$\times$ above the NH mission requirement, and $\approx$ $2000\times$ above
the mission's desired level. Stellar occultations remain valuable because they
are able to measure $N$ closer to the Pluto surface than direct imaging,
although with a sensitivity limit several orders of magnitude higher than that
from HST imaging. Neither HST nor occultations are sensitive enough to place
limits on $N$ at or below the mission requirements.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dependability of Sensor Networks for Industrial Prognostics and Health Management | Maintenance is an important activity in industry. It is performed either to
revive a machine/component or to prevent it from breaking down. Different
strategies have evolved over time, bringing maintenance to its current
state: condition-based and predictive maintenance. This evolution was driven by
the increasing demand for reliability in industry. The key process of
condition-based and predictive maintenance is prognostics and health
management, a tool to predict the remaining useful life of
engineering assets. Nowadays, plants are required to avoid shutdowns while
offering safety and reliability. Nevertheless, planning a maintenance activity
requires accurate information about the system/component health state. Such
information is usually gathered by means of independent sensor nodes. In this
study, we consider the case where the nodes are interconnected and form a
wireless sensor network. To the best of our knowledge, no research work has
considered such a case study for prognostics. Given the importance of data
accuracy, good prognostics requires reliable sources of information. This is
why, in this paper, we first discuss the dependability of wireless sensor
networks and then present a state of the art in prognostics and health
management activities.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Markov Chain Gradient Descent | Stochastic gradient methods are the workhorse algorithms of large-scale
optimization problems in machine learning, signal processing, and other
computational sciences and engineering. This paper studies Markov chain
gradient descent, a variant of stochastic gradient descent where the random
samples are taken on the trajectory of a Markov chain. Existing results of this
method assume convex objectives and a reversible Markov chain and thus have
their limitations. We establish new non-ergodic convergence under wider step
sizes, for nonconvex problems, and for non-reversible finite-state Markov
chains. Nonconvexity makes our method applicable to broader problem classes.
Non-reversible finite-state Markov chains, on the other hand, can mix
substantially faster. To obtain these results, we introduce a new technique that
varies the mixing levels of the Markov chains. The reported numerical results
validate our contributions.
| 0 | 0 | 0 | 1 | 0 | 0 |
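The sampling idea in the abstract above can be illustrated with a toy sketch. This is an illustrative example, not the paper's analyzed scheme: the function name, step sizes, and quadratic objective are my own choices. The key point it demonstrates is that each stochastic gradient is evaluated at the current state of a Markov chain trajectory instead of an i.i.d. sample.

```python
import random

def mcgd(values, P, steps=20000, x0=0.0):
    """Toy Markov chain gradient descent: minimize
    f(x) = E_pi[(x - values[s])^2], where the sample index s is drawn
    along a Markov chain with transition matrix P rather than i.i.d.
    With step size 1/(2t) this update reduces to a running average of
    the visited values, so x approaches the stationary mean."""
    s, x = 0, x0
    for t in range(1, steps + 1):
        # advance the chain one step from the current state
        s = random.choices(range(len(P)), weights=P[s])[0]
        grad = 2.0 * (x - values[s])  # stochastic gradient at state s
        x -= grad / (2.0 * t)         # diminishing step size 1/(2t)
    return x
```

With a doubly stochastic (hence uniform-stationary) but non-i.i.d. transition matrix such as `P = [[0.9, 0.1], [0.1, 0.9]]`, the iterate still converges to the stationary mean of `values` despite the strong correlation between consecutive samples.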
Reduction of Second-Order Network Systems with Structure Preservation | This paper proposes a general framework for structure-preserving model
reduction of a second-order network system based on graph clustering. In this
approach, vertex dynamics are captured by the transfer functions from inputs to
individual states, and the dissimilarities of vertices are quantified by the
$\mathcal{H}_2$-norms of the transfer function discrepancies. A greedy hierarchical
clustering algorithm is proposed to place those vertices with similar dynamics
into the same clusters. Then, the reduced-order model is generated by the
Petrov-Galerkin method, where the projection is formed by the characteristic
matrix of the resulting network clustering. It is shown that the simplified
system preserves an interconnection structure, i.e., it can be again
interpreted as a second-order system evolving over a reduced graph.
Furthermore, this paper generalizes the definition of network controllability
Gramian to second-order network systems. Based on it, we develop an efficient
method to compute $\mathcal{H}_2$-norms and derive the approximation error between the
full-order and reduced-order models. Finally, the approach is illustrated by
the example of a small-world network.
| 1 | 0 | 0 | 0 | 0 | 0 |
Higher-rank graph algebras are iterated Cuntz-Pimsner algebras | Given a finitely aligned $k$-graph $\Lambda$, we let $\Lambda^i$ denote the
$(k-1)$-graph formed by removing all edges of degree $e_i$ from $\Lambda$. We
show that the Toeplitz-Cuntz-Krieger algebra of $\Lambda$, denoted by
$\mathcal{T}C^*(\Lambda)$, may be realised as the Toeplitz algebra of a Hilbert
$\mathcal{T}C^*(\Lambda^i)$-bimodule. When $\Lambda$ is locally-convex, we show
that the Cuntz-Krieger algebra of $\Lambda$, which we denote by $C^*(\Lambda)$,
may be realised as the Cuntz-Pimsner algebra of a Hilbert
$C^*(\Lambda^i)$-bimodule. Consequently, $\mathcal{T}C^*(\Lambda)$ and
$C^*(\Lambda)$ may be viewed as iterated Toeplitz and iterated Cuntz-Pimsner
algebras over $c_0(\Lambda^0)$ respectively.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ultracold Atomic Gases in Artificial Magnetic Fields (PhD thesis) | A phenomenon can hardly be found that accompanied physical paradigms and
theoretical concepts in a more reflecting way than magnetism. From the
beginnings of metaphysics and the first classical approaches to magnetic poles
and streamlines of the field, it has inspired modern physics on its way to the
classical field description of electrodynamics, and further to the quantum
mechanical description of internal degrees of freedom of elementary particles.
Meanwhile, magnetic manifestations have posed and still do pose complex and
often controversially debated questions. These concern topics as varied and
utterly distinct as quantum spin systems and grand unified theories. This
may be foremost caused by the fact that all of these effects are based on
correlated structures, which are induced by the interplay of dynamics and
elementary interactions. It is strongly correlated systems that certainly
represent one of the most fascinating and universal fields of research. In
particular, low dimensional systems are in the focus of interest, as they
reveal strongly pronounced correlations of counterintuitive nature. As regards
this framework, the quantum Hall effect must be seen as one of the most
intriguing and complex problems of modern solid state physics. Even after two
decades and the same number of Nobel prizes, it still keeps researchers of
nearly all fields of physics occupied. In spite of seminal progress, its
inherent correlated order still lacks understanding on a microscopic level.
Despite this, it is obvious that the phenomenon is thoroughly fundamental in
nature. Resolving some of these puzzles is a key topic of this thesis.
(excerpt from abstract)
| 0 | 1 | 0 | 0 | 0 | 0 |
Stream VByte: Faster Byte-Oriented Integer Compression | Arrays of integers are often compressed in search engines. Though there are
many ways to compress integers, we are interested in the popular byte-oriented
integer compression techniques (e.g., VByte or Google's Varint-GB). They are
appealing due to their simplicity and engineering convenience. Amazon's
varint-G8IU is one of the fastest byte-oriented compression techniques published
so far. It makes judicious use of the powerful single-instruction-multiple-data
(SIMD) instructions available in commodity processors. To surpass varint-G8IU,
we present Stream VByte, a novel byte-oriented compression technique that
separates the control stream from the encoded data. Like varint-G8IU, Stream
VByte is well suited for SIMD instructions. We show that Stream VByte decoding
can be up to twice as fast as varint-G8IU decoding over real data sets. In this
sense, Stream VByte establishes new speed records for byte-oriented integer
compression, at times exceeding the speed of the memcpy function. On a 3.4GHz
Haswell processor, it decodes more than 4 billion differentially-coded integers
per second from RAM to L1 cache.
| 1 | 0 | 0 | 0 | 0 | 0 |
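The byte-oriented family discussed in the abstract above can be sketched with classic VByte. Note this is the baseline scheme the abstract starts from, not Stream VByte's separated control/data-stream layout, and the function names are illustrative:

```python
def vbyte_encode(nums):
    """Classic VByte: 7 data bits per byte; the high bit set marks the
    final (most significant) byte of each encoded integer."""
    out = bytearray()
    for n in nums:
        while n >= 128:
            out.append(n & 0x7F)  # low 7 bits, continuation implied
            n >>= 7
        out.append(n | 0x80)      # last byte: set the stop bit
    return bytes(out)

def vbyte_decode(data):
    """Inverse of vbyte_encode: accumulate 7-bit groups until a byte
    with the high bit set terminates the current integer."""
    nums, n, shift = [], 0, 0
    for b in data:
        if b < 128:
            n |= b << shift
            shift += 7
        else:
            nums.append(n | ((b & 0x7F) << shift))
            n, shift = 0, 0
    return nums
```

Small values cost a single byte (e.g. any integer below 128), which is why the scheme is attractive for the small gaps produced by differential coding of sorted posting lists; Stream VByte's contribution is to move the per-integer length information into a separate control stream so SIMD decoding avoids byte-by-byte branching.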
Kondo Signatures of a Quantum Magnetic Impurity in Topological Superconductors | We study the Kondo physics of a quantum magnetic impurity in two-dimensional
topological superconductors (TSCs), either intrinsic or induced on the surface
of a bulk topological insulator, using a numerical renormalization group
technique. We show that, despite sharing the p + ip pairing symmetry, intrinsic
and extrinsic TSCs host different physical processes that produce distinct
Kondo signatures. Extrinsic TSCs harbor an unusual screening mechanism
involving both electron and orbital degrees of freedom that produces rich and
prominent Kondo phenomena, especially an intriguing pseudospin Kondo singlet
state in the superconducting gap and a spatially anisotropic spin correlation.
In sharp contrast, intrinsic TSCs support a robust impurity spin doublet ground
state and an isotropic spin correlation. These findings advance fundamental
knowledge of novel Kondo phenomena in TSCs and suggest experimental avenues for
their detection and distinction.
| 0 | 1 | 0 | 0 | 0 | 0 |
Waldschmidt constants for Stanley-Reisner ideals of a class of graphs | In the present note we study Waldschmidt constants of Stanley-Reisner ideals
of a hypergraph and a graph with vertices forming a bipyramid over a planar
n-gon. The case of the hypergraph has been studied by Bocci and Franci. We
reprove their main result. The case of the graph is new. Interestingly, both
cases provide series of ideals with Waldschmidt constants descending to 1. It
would be interesting to know whether there are bounded ascending sequences of
Waldschmidt constants.
| 0 | 0 | 1 | 0 | 0 | 0 |
FO model checking of geometric graphs | Over the past two decades the main focus of research into first-order (FO)
model checking algorithms has been on sparse relational structures -
culminating in the FPT algorithm by Grohe, Kreutzer and Siebertz for FO model
checking of nowhere dense classes of graphs. In contrast, except for the case
of locally bounded clique-width, little is currently known about FO
model checking of dense classes of graphs or other structures. We study the FO
model checking problem for dense graph classes definable by geometric means
(intersection and visibility graphs). We obtain new nontrivial FPT results,
e.g., for restricted subclasses of circular-arc, circle, box, disk, and
polygon-visibility graphs. These results use the FPT algorithm by Gajarský et
al. for FO model checking of posets of bounded width. We also complement the
tractability results by related hardness reductions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Pseudo-linear regression identification based on generalized orthonormal transfer functions: Convergence conditions and bias distribution | In this paper we generalize three identification recursive algorithms
belonging to the pseudo-linear class, by introducing a predictor established on
a generalized orthonormal function basis. Contrary to the existing
identification schemes that use such functions, no constraint on the model
poles is imposed. Not only does this predictor parameterization offer the
opportunity to relax the convergence conditions of the associated recursive
schemes, but it also entails a modification of the bias distribution linked to the
basis poles. This result is specific to pseudo-linear regression properties,
and cannot be transposed to most prediction error method algorithms. A
detailed bias distribution is provided, using the concept of equivalent
prediction error, which reveals strong analogies between the three proposed
schemes, corresponding to ARMAX, Output Error and a generalization of ARX
models. This leads us to introduce an indicator of the effect of the basis pole
locations on the bias distribution in the frequency domain. As shown by the
simulations, these basis poles play the role of tuning parameters, allowing one
to manage the model fit in the frequency domain and enabling efficient identification of
fast sampled or stiff discrete-time systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Lower bounds for weak approximation errors for spatial spectral Galerkin approximations of stochastic wave equations | Although for a number of semilinear stochastic wave equations existence and
uniqueness results for corresponding solution processes are known from the
literature, these solution processes are typically not explicitly known and
numerical approximation methods are needed in order for mathematical modelling
with stochastic wave equations to become relevant for real world applications.
This, in turn, requires the numerical analysis of convergence rates for such
numerical approximation processes. A recent article by the authors proves upper
bounds for weak errors for spatial spectral Galerkin approximations of a class
of semilinear stochastic wave equations. The findings there are complemented by
the main result of this work, which provides lower bounds for weak errors that
show that in the general framework considered the established upper bounds can
essentially not be improved.
| 0 | 0 | 1 | 0 | 0 | 0 |
Structured low-rank matrix learning: algorithms and applications | We consider the problem of learning a low-rank matrix, constrained to lie in
a linear subspace, and introduce a novel factorization for modeling such
matrices. A salient feature of the proposed factorization scheme is that it
decouples the low-rank and structural constraints onto separate factors. We
formulate the optimization problem on the Riemannian spectrahedron manifold,
where the Riemannian framework allows us to develop computationally efficient
conjugate gradient and trust-region algorithms. Experiments on problems such as
standard/robust/non-negative matrix completion, Hankel matrix learning and
multi-task learning demonstrate the efficacy of our approach. A shorter version
of this work has been published in ICML'18.
| 0 | 0 | 0 | 1 | 0 | 0 |
Greed Works - Online Algorithms For Unrelated Machine Stochastic Scheduling | This paper establishes the first performance guarantees for a combinatorial
online algorithm that schedules stochastic, nonpreemptive jobs on unrelated
machines to minimize the expected total weighted completion time. Prior work on
unrelated machine scheduling with stochastic jobs was restricted to the offline
case, and required sophisticated linear or convex programming relaxations for
the assignment of jobs to machines. The algorithm introduced in this paper is
based on a purely combinatorial assignment of jobs to machines, hence it also
works online. The performance bounds are of the same order of magnitude as
those of earlier work, and depend linearly on an upper bound $\Delta$ on the
squared coefficient of variation of the jobs' processing times. They are
$4+2\Delta$ when there are no release dates, and $12+6\Delta$ when jobs are
released over time. For the special case of deterministic processing times,
without and with release times, this paper shows that the same combinatorial
greedy algorithm has a competitive ratio of 4 and 6, respectively. As to the
technical contribution, the paper shows for the first time how dual fitting
techniques can be used for stochastic and nonpreemptive scheduling problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
On a combinatorial curvature for surfaces with inversive distance circle packing metrics | In this paper, we introduce a new combinatorial curvature on triangulated
surfaces with inversive distance circle packing metrics. Then we prove that
this combinatorial curvature has global rigidity. To study the Yamabe problem
of the new curvature, we introduce a combinatorial Ricci flow, along which the
curvature evolves almost in the same way as that of scalar curvature along the
surface Ricci flow obtained by Hamilton \cite{Ham1}. Then we study the long
time behavior of the combinatorial Ricci flow and obtain that the existence of
a constant curvature metric is equivalent to the convergence of the flow on
triangulated surfaces with nonpositive Euler number. We further generalize the
combinatorial curvature to $\alpha$-curvature and prove that it is also
globally rigid, which is in fact a generalized Bowers-Stephenson conjecture
\cite{BS}. We also use the combinatorial Ricci flow to study the corresponding
$\alpha$-Yamabe problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Improving End-to-End Speech Recognition with Policy Learning | Connectionist temporal classification (CTC) is widely used for maximum
likelihood learning in end-to-end speech recognition models. However, there is
usually a disparity between the negative maximum likelihood and the performance
metric used in speech recognition, e.g., word error rate (WER). This results in
a mismatch between the objective function and metric during training. We show
that the above problem can be mitigated by jointly training with maximum
likelihood and policy gradient. In particular, with policy learning we are able
to directly optimize on the (otherwise non-differentiable) performance metric.
We show that joint training improves relative performance by 4% to 13% for our
end-to-end model as compared to the same model learned through maximum
likelihood. The model achieves 5.53% WER on the Wall Street Journal dataset, and
5.42% and 14.70% on the Librispeech test-clean and test-other sets, respectively.
| 1 | 0 | 0 | 1 | 0 | 0 |
Better Protocol for XOR Game using Communication Protocol and Nonlocal Boxes | Buhrman showed that an efficient communication protocol implies a reliable
XOR game protocol. This idea rederives Linial and Shraibman's lower bounds on
communication complexity, originally derived using factorization norms, with a
worse constant factor but in a much more intuitive way. In this work, we improve and
generalize Buhrman's idea, and obtain a class of lower bounds for classical
communication complexity, including Linial and Shraibman's exact lower bound
as a special case. In the proof, we explicitly construct a protocol for the XOR
game from a classical communication protocol using the concept of nonlocal
boxes and Paw{\l}owski et al.'s elegant protocol, which was used for showing
the violation of information causality in superquantum theories.
| 1 | 0 | 1 | 0 | 0 | 0 |
DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD | We propose a novel diminishing learning rate scheme, coined
Decreasing-Trend-Nature (DTN), which allows us to prove fast convergence of the
Stochastic Gradient Descent (SGD) algorithm to a first-order stationary point
for smooth general convex problems and some classes of nonconvex problems,
including neural network applications for classification. We are the first to prove that SGD
with diminishing learning rate achieves a convergence rate of
$\mathcal{O}(1/t)$ for these problems. Our theory applies to neural network
applications for classification problems in a straightforward way.
| 1 | 0 | 0 | 1 | 0 | 0 |
Minimax Distribution Estimation in Wasserstein Distance | The Wasserstein metric is an important measure of distance between
probability distributions, with applications in machine learning, statistics,
probability theory, and data analysis. This paper provides upper and lower
bounds on statistical minimax rates for the problem of estimating a probability
distribution under Wasserstein loss, using only metric properties, such as
covering and packing numbers, of the sample space, and weak moment assumptions
on the probability distributions.
| 0 | 0 | 0 | 1 | 0 | 0 |
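For intuition about the Wasserstein loss in the abstract above: on the real line, the empirical 1-Wasserstein distance between two equal-size samples has a closed form via sorted order statistics (a standard fact; the helper below is an illustrative sketch, not from the paper):

```python
def wasserstein1_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size samples
    on the real line. The optimal coupling matches order statistics, so
    W1 = (1/n) * sum_i |x_(i) - y_(i)| over the sorted samples."""
    assert len(xs) == len(ys), "samples must have equal size"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)
```

This one-dimensional special case makes the metric concrete; the minimax rates in the paper concern the harder problem of how fast such empirical estimates converge to the true distribution under general metric structure of the sample space.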
Uniformization and Steinness | It is shown that the unit ball in ${\mathbb C}^n$ is the only complex
manifold that can universally cover both Stein and non-Stein strictly
pseudoconvex domains.
| 0 | 0 | 1 | 0 | 0 | 0 |
Recent progress on conditional randomness | In this article, recent progress on ML-randomness with respect to conditional
probabilities is reviewed. In particular, a new result on conditional randomness
with respect to mutually singular probabilities is shown, which generalizes
Hanssen's result (2010) for Bernoulli processes.
| 1 | 0 | 1 | 0 | 0 | 0 |
The evolution of red supergiants to supernovae | With red supergiants (RSGs) predicted to end their lives as Type IIP core
collapse supernovae (CCSNe), their behaviour before explosion needs to be fully
understood. Mass-loss rates govern RSG evolution towards SN and have strong
implications for the appearance of the resulting explosion. To study how the
mass-loss rates change with the evolution of the star, we have measured the
amount of circumstellar material around 19 RSGs in a coeval cluster. Our study
has shown that mass-loss rates ramp up throughout the lifetime of an RSG, with
more evolved stars having mass-loss rates a factor of 40 higher than
early-stage RSGs. Interestingly, we have also found evidence for an increase in
circumstellar extinction throughout the RSG lifetime, meaning the most evolved
stars are most severely affected. We find that, were the most evolved RSGs in
NGC2100 to go SN, this extra extinction would cause the progenitor's initial
mass to be underestimated by up to 9M$_\odot$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Conformally invariant elliptic Liouville equation and its symmetry preserving discretization | The symmetry algebra of the real elliptic Liouville equation is an
infinite-dimensional loop algebra with the simple Lie algebra $o(3,1)$ as its
maximal finite-dimensional subalgebra. The entire algebra generates the
conformal group of the Euclidean plane $E_2$. This infinite-dimensional algebra
distinguishes the elliptic Liouville equation from the hyperbolic one with its
symmetry algebra that is the direct sum of two Virasoro algebras. Following a
discretisation procedure developed earlier, we present a difference scheme that
is invariant under the group $O(3,1)$ and has the elliptic Liouville equation
in polar coordinates as its continuous limit. The lattice is a solution of an
equation invariant under $O(3,1)$ and is itself invariant under a subgroup of
$O(3,1)$, namely the $O(2)$ rotations of the Euclidean plane.
| 0 | 1 | 1 | 0 | 0 | 0 |
Discrete diffusion Lyman-alpha radiative transfer | Due to its accuracy and generality, Monte Carlo radiative transfer (MCRT) has
emerged as the prevalent method for Ly$\alpha$ radiative transfer in arbitrary
geometries. The standard MCRT encounters a significant efficiency barrier in
the high optical depth, diffusion regime. Multiple acceleration schemes have
been developed to improve the efficiency of MCRT but the noise from photon
packet discretization remains a challenge. The discrete diffusion Monte Carlo
(DDMC) scheme has been successfully applied in state-of-the-art radiation
hydrodynamics (RHD) simulations. Still, the established framework is not
optimal for resonant line transfer. Inspired by the DDMC paradigm, we present a
novel extension to resonant DDMC (rDDMC) in which diffusion in space and
frequency are treated on equal footing. We explore the robustness of our new
method and demonstrate a level of performance that justifies incorporating the
method into existing Ly$\alpha$ codes. We present computational speedups of
$\sim 10^2$-$10^6$ relative to contemporary MCRT implementations with schemes
that skip scattering in the core of the line profile. This is because the rDDMC
runtime scales with the spatial and frequency resolution rather than the number
of scatterings - the latter is typically $\propto \tau_0$ for static media, or
$\propto (a \tau_0)^{2/3}$ with core-skipping. We anticipate new frontiers in
which on-the-fly Ly$\alpha$ radiative transfer calculations are feasible in 3D
RHD. More generally, rDDMC is transferable to any computationally demanding
problem amenable to a Fokker-Planck approximation of frequency redistribution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Kosterlitz-Thouless transition and vortex-antivortex lattice melting in two-dimensional Fermi gases with $p$- or $d$-wave pairing | We present a theoretical study of the finite-temperature Kosterlitz-Thouless
(KT) and vortex-antivortex lattice (VAL) melting transitions in two-dimensional
Fermi gases with $p$- or $d$-wave pairing. For both pairings, when the
interaction is tuned from weak to strong attractions, we observe a quantum
phase transition from the Bardeen-Cooper-Schrieffer (BCS) superfluidity to the
Bose-Einstein condensation (BEC) of difermions. The KT and VAL transition
temperatures increase during this BCS-BEC transition and approach constant
values in the deep BEC region. The BCS-BEC transition is characterized by the
non-analyticities of the chemical potential, the superfluid order parameter,
and the sound velocities as functions of the interaction strength at both zero
and finite temperatures; however, the temperature effect tends to weaken the
non-analyticities compared to the zero-temperature case. The effect of
mismatched Fermi surfaces on the $d$-wave pairing is also studied.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chiral Optical Tamm States: Temporal Coupled-Mode Theory | The chiral optical Tamm state (COTS) is a special localized state at the
interface of a handedness-preserving mirror and a structurally chiral medium
such as a cholesteric liquid crystal or a chiral sculptured thin film. The
spectral behavior of COTS, observed as reflection resonances, is described by
the temporal coupled-mode theory. Mode coupling is different for two circular
light polarizations because COTS has a helix structure replicating that of the
cholesteric. The mode coupling for co-handed circularly polarized light
exponentially attenuates with the cholesteric layer thickness since the COTS
frequency falls into the stop band. Cross-handed circularly polarized light
freely goes through the cholesteric layer and can excite COTS when reflected
from the handedness-preserving mirror. The coupling in this case is
proportional to the anisotropy of the cholesteric, and theoretically only the
anisotropy of the magnetic permeability can ultimately cancel this coupling.
These two couplings being equal results in a polarization crossover (the
Kopp--Genack effect) for which a linear polarization is optimal to excite COTS.
The corresponding cholesteric thickness and scattering matrix for COTS are
generally described by simple expressions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Functional inequalities for Fox-Wright functions | In this paper, our aim is to show some mean value inequalities for the
Fox-Wright functions, such as Turán--type inequalities, Lazarević and
Wilker--type inequalities. As applications we derive some new type inequalities
for hypergeometric functions and the four--parametric Mittag--Leffler
functions. Furthermore, we prove the monotonicity of ratios of sections of
series of Fox-Wright functions; these results are also closely connected with
Turán--type inequalities. Moreover, some other types of inequalities are also
presented. At the end of the paper, some open problems are stated, which may be of
interest for further research.
| 0 | 0 | 1 | 0 | 0 | 0 |
Collaboration Spheres: a Visual Metaphor to Share and Reuse Research Objects | Research Objects (ROs) are semantically enhanced aggregations of resources
associated to scientific experiments, such as data, provenance of these data,
the scientific workflow used to run the experiment, intermediate results, logs
and the interpretation of the results. As the number of ROs increases, it is
becoming difficult to find ROs to be used, reused or re-purposed. New search
and retrieval techniques are required to find the most appropriate ROs for a
given researcher, paying attention to provide an intuitive user interface. In
this paper we show CollabSpheres, a user interface that provides a new visual
metaphor to find ROs by means of a recommendation system that takes advantage
of the social aspects of ROs. The experimental evaluation of this tool shows
that users perceive high values of usability, user satisfaction, usefulness and
ease of use. From the analysis of these results we argue that users perceive
the simplicity, intuitiveness and cleanness of this tool, and that it increases
collaboration and the reuse of research objects.
| 1 | 0 | 0 | 0 | 0 | 0 |
Intelligent flat-and-textureless object manipulation in Service Robots | This work introduces our approach to the flat and textureless object grasping
problem. In particular, we address the tableware and cutlery manipulation
problem where a service robot has to clean up a table. Our solution integrates
colour and 2D and 3D geometry information to describe objects, and this
information is given to the robot action planner to find the best grasping
trajectory depending on the object class. Furthermore, we use visual feedback
as a verification step to determine if the grasping process has successfully
occurred. We evaluate our approach in both an open and a standard service robot
platform following the RoboCup@Home international tournament regulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Constraints on the Intergalactic Magnetic Field from Bow Ties in the Gamma-ray Sky | Pair creation on the cosmic infrared background and subsequent
inverse-Compton scattering on the CMB potentially reprocesses the TeV emission
of blazars into faint GeV halos with structures sensitive to intergalactic
magnetic fields (IGMF). We attempt to detect such halos exploiting their highly
anisotropic shape. Their persistent nondetection excludes at greater than
$3.9\sigma$ an IGMF with correlation lengths >100 Mpc and current-day strengths
in the range $10^{-16}$ to $10^{-15}$ G, and at $2\sigma$ from $10^{-17}$ to
$10^{-14}$ G, covering the range implied by gamma-ray spectra of nearby TeV
emitters. Alternatively, plasma processes could pre-empt the inverse-Compton
cascade.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Deep Representations with Probabilistic Knowledge Transfer | Knowledge Transfer (KT) techniques tackle the problem of transferring the
knowledge from a large and complex neural network into a smaller and faster
one. However, existing KT methods are tailored towards classification tasks and
they cannot be used efficiently for other representation learning tasks. In
this paper we propose a novel knowledge transfer technique that is capable of
training a student model which maintains the same amount of mutual information
between the learned representation and a set of (possibly unknown) labels as
the teacher model. Apart from outperforming existing KT techniques, the
proposed method allows for overcoming several limitations of existing methods,
providing new insights into KT as well as novel KT applications, ranging from
knowledge transfer from handcrafted feature extractors to {cross-modal} KT from
the textual modality into the representation extracted from the visual modality
of the data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Pluricanonical Periods over Compact Riemann Surfaces of Genus at least 2 | This article is an attempt to generalize Riemann's bilinear relations on
compact Riemann surface of genus at least 2, which may lead to new structures
in the theory of hyperbolic Riemann surfaces. No significant result is
obtained, the article serves to bring the readers' attention to the observation
made by [Bol-1949], and some easy consequences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Stochastic Gradient Descent: Going As Fast As Possible But Not Faster | When applied to training deep neural networks, stochastic gradient descent
(SGD) often incurs steady progression phases, interrupted by catastrophic
episodes in which loss and gradient norm explode. A possible mitigation of such
events is to slow down the learning process. This paper presents a novel
approach to controlling the SGD learning rate that uses two statistical tests. The
first one, aimed at fast learning, compares the momentum of the normalized
gradient vectors to that of random unit vectors and accordingly gracefully
increases or decreases the learning rate. The second one is a change point
detection test, aimed at the detection of catastrophic learning episodes; upon
its triggering, the learning rate is instantly halved. The combined abilities of
speeding up and slowing down the learning rate allow the proposed approach,
called SALeRA, to learn as fast as possible but not faster. Experiments on
standard benchmarks show that SALeRA performs well in practice, and compares
favorably to the state of the art.
| 0 | 0 | 0 | 1 | 0 | 0 |
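The two statistical tests in the SALeRA abstract above can be illustrated with a minimal sketch. This is a hedged simplification, not the paper's implementation: the agreement test compares the norm of an exponential moving average (EMA) of normalized gradients against the steady-state norm that an EMA of independent random unit vectors would have, sqrt((1-beta)/(1+beta)), and the catastrophe test is written here as a Page-Hinkley change-point detector; all names and constants (`salera_like_lr`, `delta`, `threshold`) are our assumptions.

```python
import numpy as np

def salera_like_lr(grads, lr0=0.1, beta=0.9, k=1.05):
    """Agreement-based learning-rate control (illustrative simplification).

    The EMA of normalized gradients is compared against the steady-state
    norm an EMA of *random* unit vectors would have; the learning rate is
    nudged up when gradients agree more than chance, down otherwise."""
    baseline = np.sqrt((1 - beta) / (1 + beta))  # expected norm under randomness
    m = np.zeros_like(grads[0], dtype=float)
    lr = lr0
    for g in grads:
        g = g / (np.linalg.norm(g) + 1e-12)      # normalize the gradient
        m = beta * m + (1 - beta) * g            # momentum of unit gradients
        lr = lr * k if np.linalg.norm(m) > baseline else lr / k
    return lr

class PageHinkley:
    """Change-point detector on the loss; on a trigger, the caller would
    halve the learning rate (hypothetical stand-in for SALeRA's second test)."""
    def __init__(self, delta=0.005, threshold=1.0):
        self.delta, self.threshold = delta, threshold
        self.n, self.mean, self.cum, self.min_cum = 0, 0.0, 0.0, 0.0
    def step(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n        # running mean of the loss
        self.cum += x - self.mean - self.delta       # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.threshold  # True => catastrophe
```

With consistently aligned gradients the learning rate grows; with gradients that flip direction every step it shrinks, matching the "as fast as possible but not faster" behaviour described above.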
Stacked Convolutional and Recurrent Neural Networks for Bird Audio Detection | This paper studies the detection of bird calls in audio segments using
stacked convolutional and recurrent neural networks. Data augmentation by
blocks mixing and domain adaptation using a novel method of test mixing are
proposed and evaluated in regard to making the method robust to unseen data.
The contributions of two kinds of acoustic features (dominant frequency and log
mel-band energy) and their combinations are studied in the context of bird
audio detection. Our best AUC, measured over five cross-validations of the
development data, is 95.5%, and 88.1% on the unseen evaluation data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gibbs posterior convergence and the thermodynamic formalism | In this paper we consider a Bayesian framework for making inferences about
dynamical systems from ergodic observations. The proposed Bayesian procedure is
based on the Gibbs posterior, a decision theoretic generalization of standard
Bayesian inference. We place a prior over a model class consisting of a
parametrized family of Gibbs measures on a mixing shift of finite type. This
model class generalizes (hidden) Markov chain models by allowing for long range
dependencies, including Markov chains of arbitrarily large orders. We
characterize the asymptotic behavior of the Gibbs posterior distribution on the
parameter space as the number of observations tends to infinity. In particular,
we define a limiting variational problem over the space of joinings of the
model system with the observed system, and we show that the Gibbs posterior
distributions concentrate around the solution set of this variational problem.
In the case of properly specified models our convergence results may be used to
establish posterior consistency. This work establishes tight connections
between Gibbs posterior inference and the thermodynamic formalism, which may
inspire new proof techniques in the study of Bayesian posterior consistency for
dependent processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
Retrieval Analysis of the Emission Spectrum of WASP-12b: Sensitivity of Outcomes to Prior Assumptions and Implications for Formation History | We analyze the emission spectrum of the hot Jupiter WASP-12b using our
HELIOS-R retrieval code and HELIOS-K opacity calculator. When interpreting
Hubble and Spitzer data, the retrieval outcomes are found to be
prior-dominated. When the prior distributions of the molecular abundances are
assumed to be log-uniform, the volume mixing ratio of HCN is found to be
implausibly high. A VULCAN chemical kinetics model of WASP-12b suggests that
chemical equilibrium is a reasonable assumption even when atmospheric mixing is
implausibly vigorous. Guided by (exo)planet formation theory, we set Gaussian
priors on the elemental abundances of carbon, oxygen and nitrogen with the
Gaussian peaks being centered on the measured C/H, O/H and N/H values of the
star. By enforcing chemical equilibrium, we find substellar O/H and stellar to
slightly superstellar C/H for the dayside atmosphere of WASP-12b. The
superstellar carbon-to-oxygen ratio is just above unity, regardless of whether
clouds are included in the retrieval analysis, consistent with Madhusudhan et
al. (2011). Furthermore, whether a temperature inversion exists in the
atmosphere depends on one's assumption for the Gaussian width of the priors.
Our retrieved posterior distributions are consistent with the formation of
WASP-12b in a solar-composition protoplanetary disk, beyond the water iceline,
via gravitational instability or pebble accretion (without core erosion) and
migration inwards to its present orbital location via a disk-free mechanism,
and are inconsistent with both in-situ formation and core accretion with disk
migration, as predicted by Madhusudhan et al. (2017). We predict that the
interpretation of James Webb Space Telescope WASP-12b data will not be
prior-dominated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Model Learning for Look-ahead Exploration in Continuous Control | We propose an exploration method that incorporates look-ahead search over
basic learnt skills and their dynamics, and use it for reinforcement learning
(RL) of manipulation policies. Our skills are multi-goal policies learned in
isolation in simpler environments using existing multi-goal RL formulations,
analogous to options or macro-actions. Coarse skill dynamics, i.e., the state
transition caused by a (complete) skill execution, are learnt and are unrolled
forward during look-ahead search. Policy search benefits from temporal
abstraction during exploration, though it itself operates over low-level
primitive actions, and thus the resulting policies do not suffer from
suboptimality and
inflexibility caused by coarse skill chaining. We show that the proposed
exploration strategy results in effective learning of complex manipulation
policies faster than current state-of-the-art RL methods, and converges to
better policies than methods that use options or parametrized skills as
building blocks of the policy itself, as opposed to guiding exploration.
| 1 | 0 | 0 | 0 | 0 | 0 |
Efficient and Scalable View Generation from a Single Image using Fully Convolutional Networks | Single-image-based view generation (SIVG) is important for producing 3D
stereoscopic content. Here, handling different spatial resolutions as input and
optimizing both reconstruction accuracy and processing speed is desirable.
The latest approaches are based on convolutional neural networks (CNNs), and
they generate promising results. However, their use of fully connected layers
as well as of pre-trained VGG forces a compromise between reconstruction
accuracy and processing speed. In addition, these approaches are limited to a
specific spatial resolution. To remedy these problems, we propose exploiting
fully convolutional networks (FCN) for SIVG. We present two FCN architectures
for SIVG. The first one is based on combination of an FCN and a view-rendering
network called DeepView$_{ren}$. The second one consists of decoupled networks
for luminance and chrominance signals, denoted by DeepView$_{dec}$. To train
our solutions we present a large dataset of 2M stereoscopic images. Results
show that both of our architectures improve accuracy and speed over the state
of the art. DeepView$_{ren}$ achieves accuracy competitive with the state of
the art while having the fastest processing speed of all: 5x faster and 24x
lower memory consumption than the state of the art. DeepView$_{dec}$ has much
higher accuracy, still with 2.5x faster speed and 12x lower memory
consumption. We evaluated our approach with both
objective and subjective studies.
| 1 | 0 | 0 | 0 | 0 | 0 |
An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling Salesman Problem | The traveling salesman problem (TSP)
is a classical computer science optimization problem with applications to
industrial engineering, theoretical computer science, bioinformatics, and
several other disciplines. In recent years, there has been a plethora of novel
approaches for approximate solutions ranging from simplistic greedy to
cooperative distributed algorithms derived from artificial intelligence. In
this paper, we perform an evaluation and analysis of cornerstone algorithms for
the Euclidean TSP. We evaluate greedy, 2-opt, and genetic algorithms. We use
several datasets as input for the algorithms including a small dataset, a
medium-sized dataset representing cities in the United States, and a synthetic
dataset consisting of 200 cities to test algorithm scalability. We discover
that the greedy and 2-opt algorithms efficiently calculate solutions for
smaller datasets. The genetic algorithm achieves the best optimality for
medium to large datasets, but generally has a longer runtime. Our
implementations are publicly available.
| 1 | 0 | 0 | 0 | 0 | 0 |
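Two of the cornerstone algorithms evaluated in the TSP abstract above, nearest-neighbour greedy construction and 2-opt local improvement, can be sketched as follows. This is a generic textbook sketch under our own naming, not the authors' released implementation.

```python
import itertools
import math

def tour_length(pts, tour):
    """Total Euclidean length of a closed tour over point indices."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def greedy_tour(pts):
    """Nearest-neighbour construction: repeatedly visit the closest
    unvisited city."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """2-opt: reverse a segment whenever doing so shortens the tour,
    until no improving move remains (a local optimum)."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i <= 1:
                continue
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(pts, candidate) < tour_length(pts, tour) - 1e-12:
                tour, improved = candidate, True
    return tour
```

On points in convex position (e.g., on a circle), a 2-opt local optimum has no crossing edges and therefore recovers the optimal angular-order tour, which makes a convenient sanity check.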
Dynamical compensation and structural identifiability: analysis, implications, and reconciliation | The concept of dynamical compensation has been recently introduced to
describe the ability of a biological system to keep its output dynamics
unchanged in the face of varying parameters. Here we show that, according to
its original definition, dynamical compensation is equivalent to lack of
structural identifiability. This is relevant if model parameters need to be
estimated, which is often the case in biological modelling. This realization
prompts us to warn that care should be taken when using an unidentifiable model
to extract biological insight: the estimated values of structurally
unidentifiable parameters are meaningless, and model predictions about
unmeasured state variables can be wrong. Taking this into account, we explore
alternative definitions of dynamical compensation that do not necessarily imply
structural unidentifiability. Accordingly, we show different ways in which a
model can be made identifiable while exhibiting dynamical compensation. Our
analyses enable the use of the new concept of dynamical compensation in the
context of parameter identification, and reconcile it with the desirable
property of structural identifiability.
| 1 | 0 | 0 | 0 | 0 | 0 |
Characterization of the beam from the RFQ of the PIP-II Injector Test | A 2.1 MeV, 10 mA CW RFQ has been installed and commissioned at Fermilab's
test accelerator known as PIP-II Injector Test. This report describes the
measurements of the beam properties after acceleration in the RFQ, including
the energy and emittance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Money on the Table: Statistical information ignored by Softmax can improve classifier accuracy | Softmax is a standard final layer used in Neural Nets (NNs) to summarize
information encoded in the trained NN and return a prediction. However, Softmax
leverages only a subset of the class-specific structure encoded in the trained
model and ignores potentially valuable information: During training, models
encode an array $D$ of class response distributions, where $D_{ij}$ is the
distribution of the $j^{th}$ pre-Softmax readout neuron's responses to the
$i^{th}$ class. Given a test sample, Softmax implicitly uses only the row of
this array $D$ that corresponds to the readout neurons' responses to the
sample's true class. Leveraging more of this array $D$ can improve classifier
accuracy, because the likelihoods of two competing classes can be encoded in
other rows of $D$.
To explore this potential resource, we develop a hybrid classifier
(Softmax-Pooling Hybrid, $SPH$) that uses Softmax on high-scoring samples, but
on low-scoring samples uses a log-likelihood method that pools the information
from the full array $D$. We apply $SPH$ to models trained to varying levels of
accuracy on a vectorized MNIST dataset. $SPH$ replaces only the final Softmax
layer in the trained NN, at test time only. All training is the same as for
Softmax. Because the pooling classifier performs better than Softmax on
low-scoring samples, $SPH$ reduces test set error by 6% to 23%, using the exact
same trained model, whatever the baseline Softmax accuracy. This reduction in
error reflects hidden capacity of the trained NN that is left unused by
Softmax.
| 1 | 0 | 0 | 1 | 0 | 0 |
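The pooling idea in the $SPH$ abstract above can be illustrated with a toy sketch. The Gaussian model for the class response distributions $D$, the confidence threshold, and all function names are our assumptions, not the paper's exact method: per-class readout statistics are fitted at training time, and a low-confidence test sample is scored by pooled log-likelihoods over the full array instead of by Softmax alone.

```python
import numpy as np

def fit_response_stats(readouts, labels, n_classes):
    """Estimate the class response array D (assumed Gaussian here):
    mu[i, j], sd[i, j] = mean/std of readout neuron j on class-i samples."""
    mu = np.zeros((n_classes, n_classes))
    sd = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        r = readouts[labels == i]
        mu[i] = r.mean(axis=0)
        sd[i] = r.std(axis=0) + 1e-6   # avoid zero variance
    return mu, sd

def sph_predict(r, mu, sd, threshold=0.9):
    """Softmax-Pooling Hybrid sketch: trust Softmax on confident samples;
    otherwise pool log-likelihoods across all rows of D."""
    p = np.exp(r - r.max())
    p /= p.sum()
    if p.max() >= threshold:           # high-scoring sample: plain Softmax
        return int(p.argmax())
    # low-scoring sample: Gaussian log-likelihood pooled over all readouts
    loglik = -0.5 * (((r - mu) / sd) ** 2 + 2 * np.log(sd)).sum(axis=1)
    return int(loglik.argmax())
```

On synthetic well-separated readouts the hybrid matches the labels almost perfectly; the interesting regime, per the abstract, is the low-confidence samples where the pooled rows of $D$ carry extra signal.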
Generalized Stieltjes constants and integrals involving the log-log function: Kummer's Theorem in action | In this note, we recall Kummer's Fourier series expansion of the 1-periodic
function that coincides with the logarithm of the Gamma function on the unit
interval $(0,1)$, and we use it to find closed forms for some numerical series
related to the generalized Stieltjes constants, and some integrals involving
the function $x\mapsto \ln \ln(1/x)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Combinatorial Lower Bound for the Extension Complexity of the Spanning Tree Polytope | In the study of extensions of polytopes of combinatorial optimization
problems, a notorious open question concerns the size of the smallest
extended formulation of the Minimum Spanning Tree problem on a complete graph
with $n$ nodes. The best known lower bound is the trivial (dimension) bound,
$\Omega(n^2)$; the best known upper bound is the extended formulation by Wong
(1980) of size $O(n^3)$ (also Martin, 1991).
In this note we give a nondeterministic communication protocol with cost
$\log_2(n^2\log n)+O(1)$ for the support of the spanning tree slack matrix.
This means that the combinatorial lower bounds can improve the trivial lower
bound only by a factor of (at most) $O(\log n)$.
| 1 | 0 | 1 | 0 | 0 | 0 |
An Introduction to Classic DEVS | DEVS is a popular formalism for modelling complex dynamic systems using a
discrete-event abstraction. At this abstraction level, a timed sequence of
pertinent "events" input to a system (or internal, in the case of timeouts)
causes instantaneous changes to the state of the system. Between events, the
state does not change, resulting in a piecewise constant state trajectory. The
main advantages of DEVS are its rigorous formal definition and its support for
modular composition.
This chapter introduces the Classic DEVS formalism in a bottom-up fashion,
using a simple traffic light example. The syntax and operational semantics of
Atomic (i.e., non-hierarchical) models are introduced first. The semantics of
Coupled (hierarchical) models is then given by translation into Atomic DEVS
models. As this formal "flattening" is not efficient, a modular abstract
simulator which operates directly on the coupled model is also presented. This
is the common basis for subsequent efficient implementations. We then turn to
actual applications of DEVS modelling and simulation, as seen in performance
analysis for queueing systems. Finally, we present some of the shortcomings in
the Classic DEVS formalism, and show solutions to them in the form of variants
of the original formalism.
| 1 | 0 | 0 | 0 | 0 | 0 |
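The timed, piecewise-constant semantics described in the DEVS abstract above can be sketched with an autonomous (input-free) traffic light as the atomic model. The state names, timings, and output labels are illustrative assumptions, not the chapter's example verbatim; the loop below is the essence of an abstract simulator that jumps from internal event to internal event.

```python
# Atomic DEVS skeleton for an autonomous traffic light (illustrative values):
# ta   : time advance per state (how long the state persists)
# d_int: internal transition function
# lam  : output function, applied to the state just before the transition
TA = {"green": 57, "yellow": 3, "red": 60}
DELTA_INT = {"green": "yellow", "yellow": "red", "red": "green"}
LAMBDA = {"green": "turn_yellow", "yellow": "turn_red", "red": "turn_green"}

def simulate(initial, until):
    """Event-to-event simulation loop for an atomic model without inputs:
    the state is constant between events, so time jumps directly to the
    next internal event rather than advancing in fixed steps."""
    t, state, trace = 0.0, initial, []
    while t + TA[state] <= until:
        t += TA[state]                     # advance to the next internal event
        trace.append((t, LAMBDA[state]))   # emit output before transitioning
        state = DELTA_INT[state]           # apply the internal transition
    return trace
```

Running `simulate("green", 125)` yields outputs at t = 57, 60, and 120, illustrating the piecewise constant state trajectory: nothing happens between those instants.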
Nanoscale Solid State Batteries Enabled By Thermal Atomic Layer Deposition of a Lithium Polyphosphazene Solid State Electrolyte | Several active areas of research in novel energy storage technologies,
including three-dimensional solid state batteries and passivation coatings for
reactive battery electrode components, require conformal solid state
electrolytes. We describe an atomic layer deposition (ALD) process for a member
of the lithium phosphorus oxynitride (LiPON) family, which is employed as a
thin film lithium-conducting solid electrolyte. The reaction between lithium
tert-butoxide (LiO$^t$Bu) and diethyl phosphoramidate (DEPA) produces
conformal, ionically conductive thin films with a stoichiometry close to
Li$_2$PO$_2$N between 250 and 300$^\circ$C. The P/N ratio of the films is
always 1, indicative of a particular polymorph of LiPON which closely resembles
a polyphosphazene. Films grown at 300$^\circ$C have an ionic conductivity of
$6.51\:(\pm0.36)\times10^{-7}$ S/cm at 35$^\circ$C, and are functionally
electrochemically stable in the window from 0 to 5.3V vs. Li/Li$^+$. We
demonstrate the viability of the ALD-grown electrolyte by integrating it into
full solid state batteries, including thin film devices using LiCoO$_2$ as the
cathode and Si as the anode operating at up to 1 mA/cm$^2$. The high quality of
the ALD growth process allows pinhole-free deposition even on rough crystalline
surfaces, and we demonstrate the fabrication and operation of thin film
batteries with the thinnest (<100nm) solid state electrolytes yet reported.
Finally, we show an additional application of the moderate-temperature ALD
process by demonstrating a flexible solid state battery fabricated on a polymer
substrate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Classification of crystallization outcomes using deep convolutional neural networks | The Machine Recognition of Crystallization Outcomes (MARCO) initiative has
assembled roughly half a million annotated images of macromolecular
crystallization experiments from various sources and setups. Here,
state-of-the-art machine learning algorithms are trained and tested on
different parts of this data set. We find that more than 94% of the test images
can be correctly labeled, irrespective of their experimental origin. Because
crystal recognition is key to high-density screening and the systematic
analysis of crystallization experiments, this approach opens the door to both
industrial and fundamental research applications.
| 0 | 0 | 0 | 1 | 1 | 0 |
Private Information, Credit Risk and Graph Structure in P2P Lending Networks | This research investigated the potential for improving Peer-to-Peer (P2P)
credit scoring by using "private information" about communications and travels
of borrowers. We found that P2P borrowers' ego networks exhibit scale-free
behavior driven by underlying preferential attachment mechanisms that connect
borrowers in a fashion that can be used to predict loan profitability. The
projection of these private networks onto networks of mobile phone
communication and geographical locations from mobile phone GPS potentially gives
loan providers access to private information through graph and location metrics
which we used to predict loan profitability. Graph topology was found to be an
important predictor of loan profitability, explaining over 5.5% of variability.
Networks of borrower location information explain an additional 19% of the
profitability. Machine learning algorithms were applied to the data set
previously analyzed to develop the predictive model and resulted in a 4%
reduction in mean squared error.
| 1 | 0 | 0 | 0 | 0 | 1 |
Nonmonotonous classical magneto-conductivity of a two-dimensional electron gas in a disordered array of obstacles | Magnetotransport measurements in combination with molecular dynamics (MD)
simulations on two-dimensional disordered Lorentz gases in the classical regime
are reported. In quantitative agreement between experiment and simulation, the
magnetoconductivity displays a pronounced peak as a function of perpendicular
magnetic field $B$ which cannot be explained in the framework of existing
kinetic theories. We show that this peak is linked to the onset of a directed
motion of the electrons along the contour of the disordered obstacle matrix
when the cyclotron radius becomes smaller than the size of the obstacles. This
directed motion leads to transient superdiffusive motion and strong scaling
corrections in the vicinity of the insulator-to-conductor transitions of the
Lorentz gas.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pushing STEM-education through a social-media-based contest format - experiences and lessons-learned from the H2020-project SciChallenge | Science education is a crucial issue with long-term impacts for Europe as the
low enrolment rates in the STEM-fields, including (natural) science,
technology, engineering and mathematics, will lead to a workforce problem in
research and development. In order to address this challenge, the EU-funded
research project SciChallenge (project.scichallenge.eu) aims to find a new way
of getting young people more interested in STEM. For this purpose, the project
developed and implemented a social-media-based STEM-contest for young people,
which aims at increasing the attractiveness of science education and careers
among young people. In the first two parts, the paper reflects on the problem,
introduces the project and highlights the main steps of the preparation of the
contest. The third section of the paper presents the idea, design and
implementation of the digital contest platform (www.scichallenge.eu), which
serves as the core of the challenge. The fourth part of the paper will provide
a status update on the contest pilot. It will provide a summary of the
experiences that the consortium made with this novel approach as well as the
main obstacles that the consortium was facing. The paper will conclude with a
preliminary reflection on the question of whether such an approach can help to increase
the interest of young people in STEM-education and careers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hidden area and mechanical nonlinearities in freestanding graphene | We investigated the effect of out-of-plane crumpling on the mechanical
response of graphene membranes. In our experiments, stress was applied to
graphene membranes using pressurized gas while the strain state was monitored
through two complementary techniques: interferometric profilometry and Raman
spectroscopy. By comparing the data obtained through these two techniques, we
determined the geometric hidden area, which quantifies the crumpling strength.
While the devices with hidden area $\sim0~\%$ obeyed linear mechanics with
biaxial stiffness $428\pm10$ N/m, specimens with hidden area in the range
$0.5-1.0~\%$ were found to obey an anomalous Hooke's law with an exponent
$\sim0.1$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solving Partial Differential Equations on Manifolds From Incomplete Inter-Point Distance | Solutions of partial differential equations (PDEs) on manifolds have provided
important applications in different fields in science and engineering. Existing
methods are mainly based on discretizations of manifolds as implicit functions,
triangle meshes, or point clouds, where the manifold structure is approximated
by either zero level set of an implicit function or a set of points. In many
applications, manifolds might be only provided as an inter-point distance
matrix with possible missing values. This paper discusses a framework to
discretize PDEs on manifolds represented as incomplete inter-point distance
information. Without conducting a time-consuming global coordinates
reconstruction, we propose a more efficient strategy by discretizing
differential operators only based on point-wisely local reconstruction. Our
local reconstruction model is based on the recent advances of low-rank matrix
completion theory, where only a very small random portion of distance
information is required. This method enables us to conduct analyses of
incomplete distance data using solutions of special designed PDEs such as the
Laplace-Beltrami (LB) eigen-system. As an application, we demonstrate a new way
of manifold reconstruction from an incomplete distance by stitching patches
using the spectrum of the LB operator. Intensive numerical experiments
demonstrate the effectiveness of the proposed methods.
| 1 | 0 | 1 | 0 | 0 | 0 |
Zonal Flow Magnetic Field Interaction in the Semi-Conducting Region of Giant Planets | All four giant planets in the Solar System feature zonal flows on the order
of 100 m/s in the cloud deck, and large-scale intrinsic magnetic fields on the
order of 1 Gauss near the surface. The vertical structure of the zonal flows
remains obscure. The end-member scenarios are shallow flows confined in the
radiative atmosphere and deep flows throughout the entire planet. The
electrical conductivity increases rapidly yet smoothly as a function of depth
inside Jupiter and Saturn. Deep zonal flows will inevitably interact with the
magnetic field, at depth with even modest electrical conductivity. Here we
investigate the interaction between zonal flows and magnetic fields in the
semi-conducting region of giant planets. Employing mean-field electrodynamics,
we show that the interaction will generate detectable poloidal magnetic field
perturbations spatially correlated with the deep zonal flows. Assuming the peak
amplitude of the dynamo alpha-effect to be 0.1 mm/s, deep zonal flows on the
order of 0.1 - 1 m/s in the semi-conducting region of Jupiter and Saturn would
generate poloidal magnetic perturbations on the order of 0.01% - 1% of the
background dipole field. These poloidal perturbations should be detectable with
the in-situ magnetic field measurements from the Juno mission and the Cassini
Grand Finale. This implies that magnetic field measurements can be employed to
constrain the properties of deep zonal flows in the semi-conducting region of
giant planets.
| 0 | 1 | 0 | 0 | 0 | 0 |
Superpixel-based Semantic Segmentation Trained by Statistical Process Control | Semantic segmentation, like other fields of computer vision, has seen a
remarkable performance advance through the use of deep convolutional neural
networks. However, considering that neighboring pixels are heavily dependent
on each other, both learning and testing of these methods involve many
redundant operations. To resolve this problem, the proposed network is trained
and tested with only 0.37% of the total pixels via superpixel-based sampling,
which largely reduces the complexity of the upsampling calculation. The
hypercolumn feature maps are constructed by a pyramid module in combination
with the convolution layers of
the base network. Since the proposed method uses a very small number of sampled
pixels, the end-to-end learning of the entire network is difficult with a
common learning rate for all the layers. In order to resolve this problem, the
learning rate after sampling is controlled by statistical process control (SPC)
of gradients in each layer. The proposed method performs better than or on
par with conventional methods that use many more samples on the Pascal Context
and SUN-RGBD datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Photometric Stereo by Hemispherical Metric Embedding | Photometric Stereo methods seek to reconstruct the 3D shape of an object from
motionless images obtained with varying illumination. Most existing methods
solve a restricted problem where the physical reflectance model, such as
Lambertian reflectance, is known in advance. In contrast, we do not restrict
ourselves to a specific reflectance model. Instead, we offer a method that
works on a wide variety of reflectances. Our approach uses a simple yet
uncommonly used property of the problem: the sought-after normals are points
on a unit hemisphere. We present a novel embedding method that maps pixels to
normals on the unit hemisphere. Our experiments demonstrate that this approach
outperforms existing manifold learning methods for the task of hemisphere
embedding. We further show successful reconstructions of objects from a wide
variety of reflectances including smooth, rough, diffuse and specular surfaces,
even in the presence of significant attached shadows. Finally, we empirically
prove that under these challenging settings we obtain more accurate shape
reconstructions than existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time Reversal, SU(N) Yang-Mills and Cobordisms: Interacting Topological Superconductors/Insulators and Quantum Spin Liquids in 3+1D | We introduce a web of strongly correlated interacting 3+1D topological
superconductors/insulators of 10 particular global symmetry groups of Cartan
classes, realizable in electronic condensed matter systems, and their new SU(N)
generalizations. The symmetries include SU(N), SU(2), U(1), fermion parity,
time reversal and relate to each other through symmetry embeddings. We overview
the lattice Hamiltonian formalism. We complete the list of field theories of
bulk symmetry-protected topological invariants (SPT invariants/partition
functions that exhibit boundary 't Hooft anomalies) via cobordism calculations,
matching their full classification. We also present explicit 4-manifolds that
detect these SPTs. On the other hand, once we dynamically gauge part of their
global symmetries, we arrive in various new phases of SU(N) Yang-Mills (YM)
gauge theories, analogous to quantum spin liquids with emergent gauge fields.
We discuss how coupling YM theories to time reversal-SPTs affects the strongly
coupled theories at low energy. For example, we point out a possibility of
having two deconfined gapless time-reversal symmetric SU(2) YM theories at
$\theta=\pi$ as two distinct conformal field theories, which, although
secretly indistinguishable by correlators of local operators on orientable
spacetimes or by gapped SPT states, can be distinguished on non-orientable
spacetimes or potentially by correlators of extended operators.
| 0 | 1 | 1 | 0 | 0 | 0 |
Forest-based methods and ensemble model output statistics for rainfall ensemble forecasting | Rainfall ensemble forecasts have to be skillful for both low precipitation
and extreme events. We present statistical post-processing methods based on
Quantile Regression Forests (QRF) and Gradient Forests (GF) with a parametric
extension for heavy-tailed distributions. Our goal is to improve ensemble
quality for all types of precipitation events, heavy-tailed included, subject
to a good overall performance. Our proposed hybrid methods are applied to daily
51-h forecasts of 6-h accumulated precipitation from 2012 to 2015 over France
using the Météo-France ensemble prediction system called PEARP. They
provide calibrated predictive distributions and compete favourably with
state-of-the-art methods like the analogs method or Ensemble Model Output
Statistics. In particular, hybrid forest-based procedures appear to bring an
added value to the forecast of heavy rainfall.
| 0 | 0 | 1 | 1 | 0 | 0 |
Non interactive simulation of correlated distributions is decidable | A basic problem in information theory is the following: Let $\mathbf{P} =
(\mathbf{X}, \mathbf{Y})$ be an arbitrary distribution where the marginals
$\mathbf{X}$ and $\mathbf{Y}$ are (potentially) correlated. Let Alice and Bob
be two players where Alice gets samples $\{x_i\}_{i \ge 1}$ and Bob gets
samples $\{y_i\}_{i \ge 1}$ and for all $i$, $(x_i, y_i) \sim \mathbf{P}$. What
joint distributions $\mathbf{Q}$ can be simulated by Alice and Bob without any
interaction?
Classical works in information theory by Gács-Körner and Wyner answer
this question when at least one of $\mathbf{P}$ or $\mathbf{Q}$ is the
distribution on $\{0,1\} \times \{0,1\}$ where each marginal is unbiased and
identical. However, other than this special case, the answer to this question
is understood in very few cases. Recently, Ghazi, Kamath and Sudan showed that
this problem is decidable for $\mathbf{Q}$ supported on $\{0,1\} \times
\{0,1\}$. We extend their result to $\mathbf{Q}$ supported on any finite
alphabet.
We rely on recent results in Gaussian geometry (by the authors) as well as a
new \emph{smoothing argument} inspired by the method of \emph{boosting} from
learning theory and potential function arguments from complexity theory and
additive combinatorics.
| 1 | 0 | 1 | 0 | 0 | 0 |
Effective Completeness for S4.3.1-Theories with Respect to Discrete Linear Models | The computable model theory of modal logic was initiated by Suman Ganguli and
Anil Nerode in [4]. They use an effective Henkin-type construction to
effectivize various completeness theorems from classical modal logic. This
construction has the feature of only producing models whose frames can be
obtained by adding edges to a tree digraph. Consequently, this construction
cannot prove an effective version of a well-known completeness theorem which
states that every $\mathsf{S4.3.1}$-theory has a model whose accessibility
relation is a linear order of order type $\omega$. We prove an effectivization
of that theorem by means of a new construction adapted from that of Ganguli and
Nerode.
| 0 | 0 | 1 | 0 | 0 | 0 |
DeepTingle | DeepTingle is a text prediction and classification system trained on the
collected works of the renowned fantastic gay erotica author Chuck Tingle.
Whereas the writing assistance tools you use everyday (in the form of
predictive text, translation, grammar checking and so on) are trained on
generic, purportedly "neutral" datasets, DeepTingle is trained on a very
specific, internally consistent but externally arguably eccentric dataset. This
allows us to foreground and confront the norms embedded in data-driven
creativity and productivity assistance tools. As such tools effectively
function as extensions of our cognition into technology, it is important to
identify the norms they embed within themselves and, by extension, us.
DeepTingle is realized as a web application based on LSTM networks and the
GloVe word embedding, implemented in JavaScript with Keras-JS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Degrees of Freedom in Cached MIMO Relay Networks With Multiple Base Stations | The ability of physical layer relay caching to increase the degrees of
freedom (DoF) of a single cell was recently illustrated. In this paper, we
extend this result to the case of multiple cells in which a caching relay is
shared among multiple non-cooperative base stations (BSs). In particular, we
show that a large DoF gain can be achieved by exploiting the benefits of having
a shared relay that cooperates with the BSs. We first propose a cache-assisted
relaying protocol that improves the cooperation opportunity between the BSs and
the relay. Next, we consider the cache content placement problem that aims to
design the cache content at the relay such that the DoF gain is maximized. We
propose an optimal algorithm and a near-optimal low-complexity algorithm for
the cache content placement problem. Simulation results show significant
improvement in the DoF gain using the proposed relay-caching protocol.
| 1 | 0 | 0 | 0 | 0 | 0 |
A perturbation theory for water with an associating reference fluid | The theoretical description of the thermodynamics of water is challenged by
the structural transition towards tetrahedral symmetry at ambient conditions.
As perturbation theories typically assume a spherically symmetric reference
fluid, they are incapable of accurately describing the liquid properties of
water at ambient conditions. In this paper we solve this problem by
introducing the concept of an associated reference perturbation theory (APT).
In APT we treat the reference fluid as an associating hard sphere fluid which
transitions to tetrahedral symmetry in the fully hydrogen bonded limit. We
calculate this transition in a theoretically self-consistent manner without
appealing to molecular simulations. This associated reference provides the
reference fluid for a second order Barker-Hendersen perturbative treatment of
the long-range attractions. We demonstrate that this new approach gives a
significantly improved description of water as compared to standard
perturbation theories.
| 0 | 1 | 0 | 0 | 0 | 0 |
Finite Sample Complexity of Sequential Monte Carlo Estimators | We present bounds for the finite sample error of sequential Monte Carlo
samplers on static spaces. Our approach explicitly relates the performance of
the algorithm to properties of the chosen sequence of distributions and mixing
properties of the associated Markov kernels. This allows us to give the first
finite sample comparison to other Monte Carlo schemes. We obtain bounds for the
complexity of sequential Monte Carlo approximations for a variety of target
distributions including finite spaces, product measures, and log-concave
distributions including Bayesian logistic regression. The bounds obtained are
within a logarithmic factor of similar bounds obtainable for Markov chain Monte
Carlo.
| 0 | 0 | 0 | 1 | 0 | 0 |
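The kind of sampler the above bounds apply to can be sketched in a few lines. The following is a minimal numpy illustration of an SMC sampler on a static space, tempering from a standard-normal prior to a target through a geometric path, with multinomial resampling and one random-walk Metropolis move per stage; the schedule, kernel, and target are illustrative choices, not the paper's construction.

```python
import numpy as np

def smc_sampler(log_target, n_particles, betas, step=0.5, rng=None):
    """SMC sampler on a static (1-D) space: temper from N(0,1) to the target
    through log-densities beta*log_target + (1-beta)*log N(0,1), with
    multinomial resampling and one random-walk MH move per stage."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(n_particles)
    log_prior = lambda z: -0.5 * z**2
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # reweight by the incremental density ratio between stages
        logw = (b - b_prev) * (log_target(x) - log_prior(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
        # one MH move targeting the current tempered distribution
        lp = lambda z: b * log_target(z) + (1 - b) * log_prior(z)
        prop = x + step * rng.standard_normal(n_particles)
        accept = np.log(rng.random(n_particles)) < lp(prop) - lp(x)
        x = np.where(accept, prop, x)
    return x

# example target: N(3, 0.5^2)
log_t = lambda z: -0.5 * ((z - 3.0) / 0.5) ** 2
samples = smc_sampler(log_t, 5000, np.linspace(0.0, 1.0, 21))
print(samples.mean())   # ≈ 3
```

The number of tempering stages plays the role of the "sequence of distributions" in the bounds above, and the MH kernel's mixing is what the Markov-kernel assumptions control.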
Reduced chemistry for butanol isomers at engine-relevant conditions | Butanol has received significant research attention as a second-generation
biofuel in the past few years. In the present study, skeletal mechanisms for
four butanol isomers were generated from two widely accepted, well-validated
detailed chemical kinetic models for the butanol isomers. The detailed models
were reduced using a two-stage approach consisting of the directed relation
graph with error propagation and sensitivity analysis. During the reduction
process, issues were encountered with pressure-dependent reactions formulated
using the logarithmic pressure interpolation approach; these issues are
discussed and recommendations made to avoid ambiguity in its future
implementation in mechanism development. The performance of the skeletal
mechanisms generated here was compared with that of detailed mechanisms in
simulations of autoignition delay times, laminar flame speeds, and perfectly
stirred reactor temperature response curves and extinction residence times,
over a wide range of pressures, temperatures, and equivalence ratios. The
detailed and skeletal mechanisms agreed well, demonstrating the adequacy of the
resulting reduced chemistry for all the butanol isomers in predicting global
combustion phenomena. In addition, the skeletal mechanisms closely predicted
the time-histories of fuel mass fractions in homogeneous compression-ignition
engine simulations. The performance of each butanol isomer was additionally
compared with that of a gasoline surrogate with an antiknock index of 87 in a
homogeneous compression-ignition engine simulation. The gasoline surrogate was
consumed faster than any of the butanol isomers, with tert-butanol exhibiting
the slowest fuel consumption rate. While n-butanol and isobutanol displayed the
most similar consumption profiles relative to the gasoline surrogate, the two
literature chemical kinetic models predicted different orderings.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Measure of Dependence Between Discrete and Continuous Variables | Mutual Information (MI) is a useful tool for detecting mutual
dependence between data sets. Different methods for the estimation of MI have
been developed when both data sets are discrete or when both data sets are
continuous. The estimation of MI between a discrete data set and a continuous data
set has not received as much attention. We present here a method for the
estimation of MI in this last case, based on a kernel density approximation.
The calculation may be of interest in diverse contexts. Since MI is closely
related to the Jensen-Shannon divergence, the method developed here is of
particular interest for the problem of sequence segmentation.
| 0 | 0 | 0 | 1 | 0 | 0 |
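The kernel-density route to a discrete/continuous MI estimate can be sketched as follows; this is a minimal numpy illustration (Gaussian kernel, fixed bandwidth, plug-in log-ratio form), not the paper's exact estimator, and all names are illustrative.

```python
import numpy as np

def gaussian_kde_at(points, centers, h):
    """Gaussian kernel density estimate evaluated at `points`,
    built from `centers` with bandwidth h."""
    d = (points[:, None] - centers[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(centers) * h * np.sqrt(2.0 * np.pi))

def mi_discrete_continuous(x, y, h=0.3):
    """Estimate I(X;Y) in nats for discrete x and 1-D continuous y via
    I ~ (1/N) sum_i [log f(y_i | x_i) - log f(y_i)], with KDE plug-ins."""
    f_y = gaussian_kde_at(y, y, h)                     # marginal density f(y)
    total = 0.0
    for label in np.unique(x):
        mask = x == label
        f_cond = gaussian_kde_at(y[mask], y[mask], h)  # conditional f(y | x = label)
        total += np.sum(np.log(f_cond / f_y[mask]))
    return total / len(y)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 2000)
y_dep = x + 0.5 * rng.standard_normal(2000)   # y shifted by the class label
y_ind = rng.standard_normal(2000)             # y unrelated to the label
mi_dep = mi_discrete_continuous(x, y_dep)     # clearly positive
mi_ind = mi_discrete_continuous(x, y_ind)     # close to zero
```

A dependent pair yields a clearly positive estimate, while an independent pair stays near zero, which is the behavior a segmentation criterion built on this quantity relies on.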
Stein Variational Online Changepoint Detection with Applications to Hawkes Processes and Neural Networks | Bayesian online changepoint detection (BOCPD) (Adams & MacKay, 2007) offers a
rigorous and viable way to identify changepoints in complex systems. In this
work, we introduce a Stein variational online changepoint detection (SVOCD)
method to provide a computationally tractable generalization of BOCPD beyond
the exponential family of probability distributions. We integrate the recently
developed Stein variational Newton (SVN) method (Detommaso et al., 2018) and
BOCPD to offer a full online Bayesian treatment for a large number of
situations with significant importance in practice. We apply the resulting
method to two challenging and novel applications: Hawkes processes and long
short-term memory (LSTM) neural networks. In both cases, we successfully
demonstrate the efficacy of our method on real data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quantum Teleportation and Super-dense Coding in Operator Algebras | Let $\mathcal{B}_d$ be the unital $C^*$-algebra generated by the elements
$u_{j,k}, \, 0 \le j, k \le d-1$, satisfying the relations that $[u_{j,k}]$ is a
unitary operator, and let $C^*(\mathbb{F}_{d^2})$ be the full group
$C^*$-algebra of the free group on $d^2$ generators. Based on the idea of
teleportation and super-dense coding in quantum information theory, we exhibit
the two $*$-isomorphisms $M_d(C^*(\mathbb{F}_{d^2}))\cong \mathcal{B}_d\rtimes
\mathbb{Z}_d\rtimes \mathbb{Z}_d$ and $M_d(\mathcal{B}_d)\cong
C^*(\mathbb{F}_{d^2})\rtimes \mathbb{Z}_d\rtimes \mathbb{Z}_d$, for certain
actions of $\mathbb{Z}_d$. As an application, we show that for any $d,m\ge 2$
with $(d,m)\neq (2,2)$, the matrix-valued generalization of the (tensor
product) quantum correlation set of $d$ inputs and $m$ outputs is not closed.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Practical Randomized CP Tensor Decomposition | The CANDECOMP/PARAFAC (CP) decomposition is a leading method for the analysis
of multiway data. The standard alternating least squares algorithm for the CP
decomposition (CP-ALS) involves a series of highly overdetermined linear least
squares problems. We extend randomized least squares methods to tensors and
show the workload of CP-ALS can be drastically reduced without a sacrifice in
quality. We introduce techniques for efficiently preprocessing, sampling, and
computing randomized least squares on a dense tensor of arbitrary order, as
well as an efficient sampling-based technique for checking the stopping
condition. We also show more generally that the Khatri-Rao product (used within
the CP-ALS iteration) produces conditions favorable for direct sampling. In
numerical results, we see improvements in speed, reductions in memory
requirements, and robustness with respect to initialization.
| 1 | 0 | 0 | 0 | 0 | 0 |
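The core idea above — replacing the highly overdetermined Khatri-Rao least-squares solve inside CP-ALS with a solve over sampled rows — can be illustrated in a few lines. Plain uniform row sampling here stands in for the paper's more careful sampling scheme, and the problem sizes are toy values.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: (I*J) x R, column r = kron(A[:, r], B[:, r])."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def sampled_ls(K, rhs, n_samples, rng):
    """Solve min ||K x - rhs|| using only uniformly sampled rows — a toy
    stand-in for the randomized least-squares step inside CP-ALS."""
    idx = rng.choice(K.shape[0], size=n_samples, replace=False)
    return np.linalg.lstsq(K[idx], rhs[idx], rcond=None)[0]

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 3))
B = rng.standard_normal((50, 3))
K = khatri_rao(A, B)            # 2000 x 3: highly overdetermined, as in CP-ALS
x_true = rng.standard_normal(3)
rhs = K @ x_true                # consistent right-hand side
x_hat = sampled_ls(K, rhs, 200, rng)   # 10x fewer rows, same solution
```

Because the sampled subsystem retains full column rank, the sketched solve recovers the least-squares solution while touching only a fraction of the rows, which is where the workload reduction comes from.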
Two dimensional potential flow around a rectangular pole solved by a multiple linear regression | A potential flow around a circular cylinder is a commonly examined problem in
an introductory physics class. We pose a similar problem but with different
boundary conditions where a rectangular pole replaces a circular cylinder. We
demonstrate how to solve the problem by deriving a general solution for the flow in
the form of an infinite series and determining the coefficients in the series
using a multiple linear regression. When the size of a pole is specified, our
solution provides a quantitative estimate of the characteristic length scale of
the potential flow. Our analysis implies that the potential flow around a
rectangular pole with diagonal length 1 is equivalent, to a distant observer, to the
potential flow around a circle of diameter 0.78.
| 0 | 1 | 0 | 1 | 0 | 0 |
Fixed Price Approximability of the Optimal Gain From Trade | Bilateral trade is a fundamental economic scenario comprising a strategically
acting buyer and seller, each holding valuations for the item, drawn from
publicly known distributions. A mechanism is supposed to facilitate trade
between these agents, if such trade is beneficial. It was recently shown that
the only mechanisms that are simultaneously DSIC, SBB, and ex-post IR, are
fixed price mechanisms, i.e., mechanisms that are parametrised by a price p,
and trade occurs if and only if the valuation of the buyer is at least p and
the valuation of the seller is at most p. The gain from trade is the increase
in welfare that results from applying a mechanism; here we study the gain from
trade achievable by fixed price mechanisms. We explore this question for both
the bilateral trade setting, and a double auction setting where there are
multiple buyers and sellers. We first identify a fixed price mechanism that
achieves a gain from trade of at least 2/r times the optimum, where r is the
probability that the seller's valuation does not exceed the buyer's valuation.
This extends a previous result by McAfee. Subsequently, we improve this
approximation factor in an asymptotic sense, by showing that a more
sophisticated rule for setting the fixed price results in an expected gain from
trade within a factor O(log(1/r)) of the optimal gain from trade. This is
asymptotically the best approximation factor possible. Lastly, we extend our
study of fixed price mechanisms to the double auction setting defined by a set
of multiple i.i.d. unit demand buyers, and i.i.d. unit supply sellers. We
present a fixed price mechanism that, for all epsilon > 0, achieves a gain
from trade of at least (1-epsilon) times the
expected optimal gain from trade with probability 1 - 2/e^{#T epsilon^2 /2},
where #T is the expected number of trades resulting from the double auction.
| 1 | 0 | 0 | 0 | 0 | 0 |
$^{139}$La and $^{63}$Cu NMR investigation of charge order in La$_{2}$CuO$_{4+y}$ ($T_{c}=42$K) | We report $^{139}$La and $^{63}$Cu NMR investigation of the successive charge
order, spin order, and superconducting transitions in super-oxygenated
La$_2$CuO$_{4+y}$ single crystal with stage-4 excess oxygen order at
$T_{stage}\simeq 290$ K. We show that the stage-4 order induces tilting of
CuO$_6$ octahedra below $T_{stage}$, which in turn causes $^{139}$La NMR line
broadening. The structural distortion continues to develop far below
$T_{stage}$, and completes at $T_{charge}\simeq 60$ K, where charge order sets
in. This sequence is reminiscent of the charge order transition in Nd
co-doped La$_{1.88}$Sr$_{0.12}$CuO$_4$ that sets in once the low temperature
tetragonal (LTT) phase is established. We also show that the paramagnetic
$^{63}$Cu NMR signals are progressively wiped out below $T_{charge}$ due to
enhanced low frequency spin fluctuations, but the residual $^{63}$Cu NMR
signals continue to exhibit the characteristics expected for optimally doped
superconducting CuO$_2$ planes. This indicates that charge order in
La$_2$CuO$_{4+y}$ does not take place uniformly in space. Low frequency Cu spin
fluctuations as probed by $^{139}$La nuclear spin-lattice relaxation rate are
mildly glassy, and do not exhibit critical divergence at $T_{spin}$($\simeq
T_{c}$)=42 K. These findings, including the spatially inhomogeneous nature of
the charge ordered state, are qualitatively similar to the case of
La$_{1.885}$Sr$_{0.115}$CuO$_4$ [T. Imai et al., Phys. Rev. B 96 (2017) 224508,
and A. Arsenault et al., Phys. Rev. B 97 (2018) 064511], but both charge and
spin order take place more sharply in the present case.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Exploratory Study of Field Failures | Field failures, that is, failures caused by faults that escape the testing
phase and lead to failures in the field, are unavoidable. Improving verification
and validation activities before deployment can identify and promptly remove many,
but not all, faults, and users may still experience a number of annoying
problems while using their software systems. This paper investigates the nature
of field failures, to understand to what extent further improving in-house
verification and validation activities can reduce the number of failures in the
field, and frames the need of new approaches that operate in the field. We
report the results of the analysis of the bug reports of five applications
belonging to three different ecosystems, propose a taxonomy of field failures,
and discuss the reasons why failures belonging to the identified classes cannot
be detected at design time but shall be addressed at runtime. We observe that
many faults (70%) are intrinsically hard to detect at design-time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Crowdsourcing Predictors of Residential Electric Energy Usage | Crowdsourcing has been successfully applied in many domains including
astronomy, cryptography and biology. In order to test its potential for useful
application in a Smart Grid context, this paper investigates the extent to
which a crowd can contribute predictive hypotheses to a model of residential
electric energy consumption. In this experiment, the crowd generated hypotheses
about factors that make one home different from another in terms of monthly
energy usage. To implement this concept, we deployed a web-based system within
which 627 residential electricity customers posed 632 questions that they
thought predictive of energy usage. While this occurred, the same group
provided 110,573 answers to these questions as they accumulated. Thus users
both suggested the hypotheses that drive a predictive model and provided the
data upon which the model is built. We used the resulting question and answer
data to build a predictive model of monthly electric energy consumption, using
random forest regression. Because of the sparse nature of the answer data,
careful statistical work was needed to ensure that these models are valid. The
results indicate that the crowd can generate useful hypotheses, despite the
sparse nature of the dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
One-dimensional fluids with positive potentials | We study a class of one-dimensional classical fluids with penetrable
particles interacting through positive, purely repulsive, pair-potentials.
Starting from some lower bounds to the total potential energy, we draw results
on the thermodynamic limit of the given model.
| 0 | 1 | 1 | 0 | 0 | 0 |
Recovery of Sparse and Low Rank Components of Matrices Using Iterative Method with Adaptive Thresholding | In this letter, we propose an algorithm for recovery of sparse and low rank
components of matrices using an iterative method with adaptive thresholding. In
each iteration, the low rank and sparse components are obtained using a
thresholding operator. This algorithm is fast and can be implemented easily. We
compare it with one of the most common fast methods in which the rank and
sparsity are approximated by $\ell_1$ norm. We also apply it to some real
applications where the noise is not so sparse. The simulation results show that
it has a suitable performance with low run-time.
| 1 | 0 | 0 | 1 | 0 | 0 |
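The alternating-thresholding scheme described above can be sketched as follows: a singular-value threshold produces the low-rank iterate and an entry-wise soft threshold produces the sparse iterate. This is a generic sketch with fixed thresholds, not the paper's adaptive thresholding rule; the parameter values are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: soft-threshold the spectrum of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entry-wise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_low_rank(D, tau_l, tau_s, n_iter=50):
    """Alternate a low-rank update (SVT on D - S) with a sparse update
    (soft thresholding of D - L)."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, tau_l)
        S = soft(D - L, tau_s)
    return L, S

# toy data: rank-1 component plus a few large sparse corruptions
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(30))
S0 = np.zeros((30, 30))
S0.flat[rng.choice(900, 20, replace=False)] = 10.0
D = L0 + S0

tau_l, tau_s = 2.0, 0.5
L, S = sparse_low_rank(D, tau_l, tau_s)
```

After the final sparse update, every entry of the residual D - L - S is bounded by the sparse threshold, which is the sense in which the two thresholded components jointly account for the data.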
Reduced Order Modelling for the Simulation of Quenches in Superconducting Magnets | This contributions discusses the simulation of magnetothermal effects in
superconducting magnets as used in particle accelerators. An iterative coupling
scheme using reduced order models between a magnetothermal partial differential
model and an electrical lumped-element circuit is demonstrated. The
multiphysics, multirate and multiscale problem requires a consistent
formulation and framework to tackle the challenging transient effects occurring
at both system and device level.
| 1 | 1 | 0 | 0 | 0 | 0 |
Life in the "Matrix": Human Mobility Patterns in the Cyber Space | With the wide adoption of the multi-community setting in many popular social
media platforms, the increasing user engagements across multiple online
communities warrant research attention. In this paper, we introduce a novel
analogy between the movements in the cyber space and the physical space. This
analogy implies a new way of studying human online activities by modelling the
activities across online communities in a similar fashion as the movements
among locations. First, we quantitatively validate the analogy by comparing
several important properties of human online activities and physical movements.
Our experiments reveal striking similarities between the cyber space and the
physical space. Next, inspired by the established methodology on human mobility
in the physical space, we propose a framework to study human "mobility" across
online platforms. We discover three interesting patterns of user engagements in
online communities. Furthermore, our experiments indicate that people with
different mobility patterns also exhibit divergent preferences to online
communities. This work not only attempts to achieve a better understanding of
human online activities, but also intends to open a promising research
direction with rich implications and applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Novel Subclass of Univalent Functions Involving Operators of Fractional Calculus | In this paper, we introduce and investigate a novel class of analytic and
univalent functions with negative coefficients in the open unit disk. For this
function class, we obtain characterization and distortion theorems as well as
the radii of close-to-convexity, starlikeness and convexity by using fractional
calculus techniques.
| 0 | 0 | 1 | 0 | 0 | 0 |
Local bandwidth selection for kernel density estimation in bifurcating Markov chain model | We propose an adaptive estimator for the stationary distribution of a
bifurcating Markov Chain on $\mathbb R^d$. Bifurcating Markov chains (BMC for
short) are a class of stochastic processes indexed by regular binary trees. A
kernel estimator is proposed whose bandwidth is selected by a method inspired
by the works of Goldenshluger and Lepski [18]. Drawing inspiration from
dimension jump methods for model selection, we also provide an algorithm to
select the best constant in the penalty.
| 0 | 0 | 1 | 1 | 0 | 0 |
Sparse geometries handling in lattice-Boltzmann method implementation for graphic processors | We describe a high-performance implementation of the lattice-Boltzmann method
(LBM) for sparse geometries on graphic processors. In our implementation we
cover the whole geometry with a uniform mesh of small tiles and carry out
calculations for each tile independently with a proper data synchronization at
tile edges. For this method we provide both the theoretical analysis of
complexity and the results for real implementations for 2D and 3D geometries.
Based on the theoretical model, we show that tiles offer significantly smaller
bandwidth overhead than solutions based on indirect addressing. For
2-dimensional lattice arrangements a reduction of memory usage is also
possible, though at the cost of diminished performance. We reached the
performance of 682 MLUPS on GTX Titan (72\% of peak theoretical memory
bandwidth) for D3Q19 lattice arrangement and double precision data.
| 1 | 0 | 0 | 0 | 0 | 0 |
From Multimodal to Unimodal Webpages for Developing Countries | The multimodal web elements such as text and images are associated with
inherent memory costs to store and transfer over the Internet. With the limited
network connectivity in developing countries, webpage rendering gets delayed in
the presence of high-memory demanding elements such as images (relative to
text). To overcome this limitation, we propose a Canonical Correlation Analysis
(CCA) based computational approach to replace high-cost modality with an
equivalent low-cost modality. Our model learns a common subspace for low-cost
and high-cost modalities that maximizes the correlation between their visual
features. The obtained common subspace is used for determining the low-cost
(text) element of a given high-cost (image) element for the replacement. We
analyze the cost-saving performance of the proposed approach through an
eye-tracking experiment conducted on real-world webpages. Our approach reduces
the memory-cost by at least 83.35% by replacing images with text.
| 1 | 0 | 0 | 1 | 0 | 0 |
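The CCA step at the heart of the approach — learning a shared subspace that maximizes the correlation between the two modalities' features — can be sketched with a standard whitening-plus-SVD construction. This is a minimal numpy illustration on synthetic "image" and "text" features; the feature dimensions, noise level, and regularizer are all assumptions, not the paper's setup.

```python
import numpy as np

def cca(X, Y, k, reg=1e-3):
    """Fit CCA: return Wx, Wy projecting X (n x p) and Y (n x q) into a
    shared k-dimensional subspace of maximal correlation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten each view, then SVD of the whitened cross-covariance
    Sx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Sy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Sx @ Cxy @ Sy.T)
    return Sx.T @ U[:, :k], Sy.T @ Vt[:k].T

# toy high-cost ("image") and low-cost ("text") features driven by a shared latent factor
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))
X = z @ rng.standard_normal((1, 6)) + 0.1 * rng.standard_normal((500, 6))
Y = z @ rng.standard_normal((1, 4)) + 0.1 * rng.standard_normal((500, 4))
Wx, Wy = cca(X, Y, k=1)
corr = np.corrcoef((X @ Wx)[:, 0], (Y @ Wy)[:, 0])[0, 1]   # near 1
```

In the common subspace, a high-cost element can be matched to its nearest low-cost counterpart (e.g. by cosine distance between projections), which is the replacement step the abstract describes.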
Light curves of hydrogen-poor Superluminous Supernovae from the Palomar Transient Factory | We investigate the light-curve properties of a sample of 26 spectroscopically
confirmed hydrogen-poor superluminous supernovae (SLSNe-I) in the Palomar
Transient Factory (PTF) survey. These events are brighter than SNe Ib/c and SNe
Ic-BL, on average, by about 4 and 2~mag, respectively. The peak absolute
magnitudes of SLSNe-I in rest-frame $g$ band span $-22\lesssim M_g
\lesssim-20$~mag, and these peaks are not powered by radioactive $^{56}$Ni,
unless strong asymmetries are at play. The rise timescales are longer for SLSNe
than for normal SNe Ib/c, by roughly 10 days, for events with similar decay
times. Thus, SLSNe-I can be considered as a separate population based on
photometric properties. After peak, SLSNe-I decay with a wide range of slopes,
with no obvious gap between rapidly declining and slowly declining events. The
latter events show more irregularities (bumps) in the light curves at all
times. At late times, the SLSN-I light curves slow down and cluster around the
$^{56}$Co radioactive decay rate. Powering the late-time light curves with
radioactive decay would require $^{56}$Ni masses between 1 and 10 ${\rm M}_\odot$.
Alternatively, a simple magnetar model can reasonably fit the majority of
SLSNe-I light curves, with four exceptions, and can mimic the radioactive decay
of $^{56}$Co, up to $\sim400$ days from explosion. The resulting spin values do
not correlate with the host-galaxy metallicities. Finally, the analysis of our
sample cannot strengthen the case for using SLSNe-I for cosmology.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sharing Means Renting?: An Entire-marketplace Analysis of Airbnb | Airbnb, an online marketplace for accommodations, has experienced a
staggering growth accompanied by intense debates and scattered regulations
around the world. Current discourses, however, are largely focused on opinions
rather than empirical evidences. Here, we aim to bridge this gap by presenting
the first large-scale measurement study on Airbnb, using a crawled data set
containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews.
We measure several key characteristics at the heart of the ongoing debate and
the sharing economy. Among others, we find that Airbnb has reached a global yet
heterogeneous coverage. The majority of its listings across many countries are
entire homes, suggesting that Airbnb is actually more like a rental marketplace
rather than a spare-room sharing platform. Analysis on star-ratings reveals
that there is a bias toward positive ratings, amplified by a bias toward using
positive words in reviews. The extent of such bias is greater than Yelp
reviews, which were already shown to exhibit a positive bias. We investigate a
key issue---commercial hosts who own multiple listings on Airbnb---repeatedly
discussed in the current debate. We find that their existence is prevalent,
they are early-movers towards joining Airbnb, and their listings are
disproportionately entire homes and located in the US. Our work advances the
current understanding of how Airbnb is being used and may serve as an
independent and empirical reference to inform the debate.
| 1 | 1 | 0 | 0 | 0 | 0 |
What Happens - After the First Race? Enhancing the Predictive Power of Happens - Before Based Dynamic Race Detection | Dynamic race detection is the problem of determining if an observed program
execution reveals the presence of a data race in a program. The classical
approach to solving this problem is to detect if there is a pair of conflicting
memory accesses that are unordered by Lamport's happens-before (HB) relation.
HB based race detection is known to not report false positives, i.e., it is
sound. However, the soundness guarantee of HB only promises that the first pair
of unordered, conflicting events is a schedulable data race. That is, there can
be pairs of HB-unordered conflicting data accesses that are not schedulable
races because there is no reordering of the events of the execution, where the
events in race can be executed immediately after each other. We introduce a new
partial order, called schedulable happens-before (SHB) that exactly
characterizes the pairs of schedulable data races --- every pair of conflicting
data accesses that are identified by SHB can be scheduled, and every HB-race
that can be scheduled is identified by SHB. Thus, the SHB partial order is
truly sound. We present a linear time, vector clock algorithm to detect
schedulable races using SHB. Our experiments demonstrate the value of our
algorithm for dynamic race detection --- SHB incurs only little performance
overhead and can scale to executions from real-world software applications
without compromising soundness.
| 1 | 0 | 0 | 0 | 0 | 0 |
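The vector-clock machinery the abstract builds on can be illustrated with a minimal happens-before detector. The sketch below implements plain HB over a trace of lock and memory events (SHB would additionally order each read after the write it observes, so that every reported pair, not just the first, is schedulable); the trace encoding and event names are illustrative.

```python
from collections import defaultdict

def join(a, b):
    """Pointwise max of two vector clocks (dicts thread -> counter)."""
    return {t: max(a.get(t, 0), b.get(t, 0)) for t in set(a) | set(b)}

def leq(a, b):
    """True iff clock a is happens-before-or-equal to clock b."""
    return all(c <= b.get(t, 0) for t, c in a.items())

def hb_races(trace, n_threads):
    """Report HB-unordered conflicting accesses in a trace of
    (thread, op, target) events, op in {'acq', 'rel', 'r', 'w'}."""
    C = {t: {t: 1} for t in range(n_threads)}   # per-thread vector clocks
    L = defaultdict(dict)                        # clock stored at each lock release
    W = defaultdict(dict)                        # joined clocks of prior writes per variable
    R = defaultdict(dict)                        # joined clocks of prior reads per variable
    races = []
    for i, (t, op, x) in enumerate(trace):
        if op == 'acq':
            C[t] = join(C[t], L[x])              # acquire: learn the releaser's clock
        elif op == 'rel':
            L[x] = dict(C[t])
            C[t][t] += 1                         # advance the local epoch
        elif op == 'w':
            if not (leq(W[x], C[t]) and leq(R[x], C[t])):
                races.append((i, x))             # some prior access is unordered
            W[x] = join(W[x], {t: C[t][t]})
        elif op == 'r':
            if not leq(W[x], C[t]):
                races.append((i, x))
            R[x] = join(R[x], {t: C[t][t]})
    return races

# unsynchronized writes race; lock-protected writes do not
trace_racy = [(0, 'w', 'x'), (1, 'w', 'x')]
trace_sync = [(0, 'acq', 'm'), (0, 'w', 'x'), (0, 'rel', 'm'),
              (1, 'acq', 'm'), (1, 'w', 'x'), (1, 'rel', 'm')]
print(hb_races(trace_racy, 2))   # [(1, 'x')]
print(hb_races(trace_sync, 2))   # []
```

A production detector would use the epoch optimizations of FastTrack-style algorithms; the dict-based clocks here favor readability over the linear-time guarantees claimed in the abstract.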
Proceedings of the Workshop on Data Mining for Oil and Gas | The process of exploring and exploiting Oil and Gas (O&G) generates a lot of
data that can bring more efficiency to the industry. The opportunities for
using data mining techniques in the "digital oil-field" remain largely
unexplored or uncharted. With the high rate of data expansion, companies are
scrambling to develop near-real-time predictive analytics, data
mining and machine learning capabilities, and are expanding their data storage
infrastructure and resources. With these new goals, come the challenges of
managing data growth, integrating intelligence tools, and analyzing the data to
glean useful insights. Oil and Gas companies need data solutions to
economically extract value from very large volumes of a wide variety of data
generated from exploration, well drilling and production devices and sensors.
Data mining for oil and gas industry throughout the lifecycle of the
reservoir includes the following roles: locating hydrocarbons, managing
geological data, drilling and formation evaluation, well construction, well
completion, and optimizing production through the life of the oil field. For
each of these phases during the lifecycle of the oil field, data mining plays a
significant role. Whichever phase is in question, knowledge creation through
scientific models, data analytics, and machine learning makes effective,
productive, and on-demand data insight possible, which is critical for decision
making within the organization.
The significant challenges posed by this complex and economically vital field
justify a meeting of data scientists that are willing to share their experience
and knowledge. Thus, the Workshop on Data Mining for Oil and Gas (DM4OG) aims
to provide a quality forum for researchers that work on the significant
challenges arising from the synergy between data science, machine learning, and
the modeling and optimization problems in the O&G industry.
| 1 | 0 | 0 | 1 | 0 | 0 |
Chiral magnetic effect of light | We study a photonic analog of the chiral magnetic (vortical) effect. We
argue that the vector component of the magnetoelectric tensor plays the role of
a "vector potential," and that its rotation can be understood as a "magnetic
field" acting on light. Using the geometrical-optics approximation, we show
that such "magnetic fields" cause an anomalous shift of a wave packet of light
through an interplay with the Berry curvature of photons. The mechanism is the
same as that of the chiral magnetic (vortical) effect for a chiral fermion, so
we term the anomalous shift the "chiral magnetic effect of light." We further
study the chiral magnetic effect of light beyond geometric optics by directly
solving the transmission problem for a wave packet at the surface of a
magnetoelectric material. We show that the experimental signature of the chiral
magnetic effect of light is a nonvanishing transverse displacement of a beam
normally incident on a magnetoelectric material.
| 0 | 1 | 0 | 0 | 0 | 0 |
New Pressure-Induced Polymorphic Transitions of Anhydrous Magnesium Sulfate | The effects of pressure on the crystal structure of the three known
polymorphs of magnesium sulfate have been theoretically studied by means of DFT
calculations up to 45 GPa. We determined that at ambient conditions gamma MgSO4
is an unstable polymorph, which decomposes into MgO and SO3, and that the
response of the other two polymorphs to hydrostatic pressure is not isotropic.
Additionally, we found that at all pressures beta MgSO4 has a larger enthalpy
than alpha MgSO4. This indicates that beta MgSO4 is thermodynamically unstable
with respect to alpha MgSO4 and predicts the occurrence of a beta-to-alpha
phase transition under moderate compression. Our calculations also predict the
existence under pressure of additional phase transitions to two new polymorphs
of MgSO4, which we name delta MgSO4 and epsilon MgSO4. The alpha-to-delta
transition is predicted to occur at 17.5 GPa, and the delta-to-epsilon
transition at 35 GPa, pressures that nowadays can easily be achieved
experimentally. All the predicted structural transformations are characterized
as first-order transitions. This suggests that they may be irreversible, and
therefore the new polymorphs could be recovered as metastable polymorphs at
ambient conditions. The crystal structures of the two new polymorphs are
reported. In them, the coordination number of sulfur is four, as in the
previously known polymorphs, but the coordination number of magnesium is eight
instead of six. We also report the axial and bond compressibilities for the
four polymorphs of MgSO4. The pressure-volume equation of state of each phase
is given as well. The values obtained for the bulk modulus are 62 GPa, 57 GPa,
102 GPa, and 119 GPa for alpha MgSO4, beta MgSO4, delta MgSO4, and epsilon
MgSO4, respectively. Finally, the electronic band structures of these four
polymorphs of MgSO4 have been calculated for the first time.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal transport and integer partitions | We link the theory of optimal transportation to the theory of integer
partitions. Let $\mathscr P(n)$ denote the set of integer partitions of $n \in
\mathbb N$ and write partitions $\pi \in \mathscr P(n)$ as $(n_1, \dots,
n_{k(\pi)})$. Using terminology from optimal transport, we characterize certain
classes of partitions like symmetric partitions and those in Euler's identity
$|\{ \pi \in \mathscr P(n) |$ all $ n_i $ distinct $ \} | = | \{ \pi \in
\mathscr P(n) | $ all $ n_i $ odd $ \}|$.
Then we sketch how optimal transport might help to understand higher
dimensional partitions.
| 0 | 0 | 1 | 0 | 0 | 0 |
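As a quick illustration (not from the paper), Euler's identity quoted in the abstract above can be checked numerically by enumerating partitions for small $n$:

```python
def partitions(n, max_part=None):
    """Generate all integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def euler_identity_holds(n):
    """Count partitions with all parts distinct vs. all parts odd."""
    ps = list(partitions(n))
    distinct = sum(1 for p in ps if len(set(p)) == len(p))
    odd = sum(1 for p in ps if all(x % 2 == 1 for x in p))
    return distinct == odd

print(all(euler_identity_holds(n) for n in range(1, 20)))  # True
```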
An Automated Auto-encoder Correlation-based Health-Monitoring and Prognostic Method for Machine Bearings | This paper studies an intelligent technique for the health monitoring
and prognostics of common rotary machine components, particularly bearings.
During a run-to-failure experiment, rich unsupervised features are extracted
from vibration sensory data by a trained sparse auto-encoder. Then, the
correlation of the extracted attributes of the initial samples (presumably
healthy at the beginning of the test) with the succeeding samples is calculated
and passed through a moving-average filter. The normalized output, named the
auto-encoder correlation-based (AEC) rate, is an informative attribute of the
system that depicts its health status and precisely identifies the degradation
starting point. We show that the AEC technique generalizes well across several
run-to-failure tests. AEC collects rich unsupervised features from the
vibration data fully autonomously. We demonstrate the superiority of AEC over
many other state-of-the-art approaches for the health monitoring and
prognostics of machine bearings.
| 1 | 0 | 0 | 1 | 0 | 0 |
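A minimal sketch of the correlation-then-smoothing step described in this abstract, assuming feature vectors have already been extracted by the auto-encoder (function and parameter names are ours, not the paper's):

```python
import numpy as np

def aec_rate(features, n_healthy=10, window=5):
    """Sketch of an AEC-style health indicator: correlate each sample's
    feature vector with the mean feature vector of the initial
    (presumed healthy) samples, smooth with a moving-average filter,
    and normalize to [0, 1]."""
    baseline = features[:n_healthy].mean(axis=0)
    # Pearson correlation of every sample against the healthy baseline
    corr = np.array([np.corrcoef(f, baseline)[0, 1] for f in features])
    # moving-average filter
    kernel = np.ones(window) / window
    smooth = np.convolve(corr, kernel, mode="same")
    # normalize so the indicator spans [0, 1]
    return (smooth - smooth.min()) / (smooth.max() - smooth.min())
```

A drop of this rate away from its initial plateau would flag the degradation starting point.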
Temporal Type Theory: A topos-theoretic approach to systems and behavior | This book introduces a temporal type theory, the first of its kind as far as
we know. It is based on a standard core, and as such it can be formalized in a
proof assistant such as Coq or Lean by adding a number of axioms. Well-known
temporal logics---such as Linear and Metric Temporal Logic (LTL and
MTL)---embed within the logic of temporal type theory.
The types in this theory represent "behavior types". The language is rich
enough to allow one to define arbitrary hybrid dynamical systems, which are
mixtures of continuous dynamics---e.g. as described by a differential
equation---and discrete jumps. In particular, the derivative of a continuous
real-valued function is internally defined.
We construct a semantics for the temporal type theory in the topos of sheaves
on a translation-invariant quotient of the standard interval domain. In fact,
domain theory plays a recurring role in both the semantics and the type theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Poletsky theory of discs in compact manifolds | We provide a direct construction of Poletsky discs via local arc
approximation and a Runge-type theorem by A. Gournay.
| 0 | 0 | 1 | 0 | 0 | 0 |
Continuous DR-submodular Maximization: Structure and Algorithms | DR-submodular continuous functions are important objectives with wide
real-world applications spanning MAP inference in determinantal point processes
(DPPs), and mean-field inference for probabilistic submodular models, amongst
others. DR-submodularity captures a subclass of non-convex functions that
enables both exact minimization and approximate maximization in polynomial
time.
In this work we study the problem of maximizing non-monotone DR-submodular
continuous functions under general down-closed convex constraints. We start by
investigating geometric properties that underlie such objectives; for example,
we prove a strong relation between (approximately) stationary points and the
global optimum. These properties are then used to devise two optimization algorithms
with provable guarantees. Concretely, we first devise a "two-phase" algorithm
with $1/4$ approximation guarantee. This algorithm allows the use of existing
methods for finding (approximately) stationary points as a subroutine, thus,
harnessing recent progress in non-convex optimization. Then we present a
non-monotone Frank-Wolfe variant with $1/e$ approximation guarantee and
sublinear convergence rate. Finally, we extend our approach to a broader class
of generalized DR-submodular continuous functions, which captures a wider
spectrum of applications. Our theoretical findings are validated on synthetic
and real-world problem instances.
| 1 | 0 | 0 | 1 | 0 | 0 |
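For intuition, here is a bare-bones Frank-Wolfe ascent loop of the kind such algorithms build on (the generic variant, not the paper's non-monotone algorithm):

```python
import numpy as np

def frank_wolfe_max(grad, lmo, x0, n_steps=100):
    """Generic Frank-Wolfe ascent: at each step, move toward the
    feasible point best aligned with the current gradient.
    lmo = linear maximization oracle over the constraint set."""
    x = x0.astype(float).copy()
    for t in range(n_steps):
        v = lmo(grad(x))           # vertex maximizing <grad(x), v>
        gamma = 2.0 / (t + 2.0)    # standard diminishing step size
        x = x + gamma * (v - x)    # convex combination stays feasible
    return x

# Example: maximize the concave f(x) = -||x - 0.3||^2 over the box [0, 1]^2.
grad = lambda x: -2.0 * (x - 0.3)
lmo = lambda g: (g > 0).astype(float)  # box LMO: pick 1 where the gradient is positive
x_star = frank_wolfe_max(grad, lmo, np.zeros(2), n_steps=200)
```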
A Structural Characterization for Certifying Robinsonian Matrices | A symmetric matrix is Robinsonian if its rows and columns can be
simultaneously reordered in such a way that entries are monotone nondecreasing
in rows and columns when moving toward the diagonal. The adjacency matrix of a
graph is Robinsonian precisely when the graph is a unit interval graph, so that
Robinsonian matrices form a matrix analogue of the class of unit interval
graphs. Here we provide a structural characterization for Robinsonian matrices
in terms of forbidden substructures, extending the notion of asteroidal triples
to weighted graphs. This implies the known characterization of unit interval
graphs and leads to an efficient algorithm for certifying that a matrix is not
Robinsonian.
| 1 | 0 | 1 | 0 | 0 | 0 |
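For a fixed row/column ordering, the Robinson property itself is straightforward to test directly (a sketch; certifying that *no* reordering works is the harder problem this paper addresses):

```python
def is_robinson(A):
    """Check whether a symmetric matrix, in its given ordering, has
    entries monotone nondecreasing toward the diagonal, i.e.
    A[i][k] <= min(A[i][j], A[j][k]) for all i < j < k."""
    n = len(A)
    return all(A[i][k] <= min(A[i][j], A[j][k])
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

# A similarity matrix that decreases away from the diagonal: Robinson form.
print(is_robinson([[3, 2, 1], [2, 3, 2], [1, 2, 3]]))  # True
```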
A new scenario for gravity detection in plants: the position sensor hypothesis | The detection of gravity plays a fundamental role during the growth and
evolution of plants. Although progress has been made in our understanding of
the molecular, cellular, and physical mechanisms involved in gravity detection,
a coherent scenario consistent with all the observations is still lacking. In
this perspective paper we discuss recent experiments showing that the response
of shoots to inclination is independent of the gravity intensity, meaning that
the gravity sensor detects an inclination and not a force. This result
questions some of the commonly accepted hypotheses and leads us to propose a
new "position sensor hypothesis". The implications of this new scenario are
discussed in the light of different observations available in the literature.
| 0 | 1 | 0 | 0 | 0 | 0 |
Techniques for proving Asynchronous Convergence results for Markov Chain Monte Carlo methods | Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling are finding
widespread use in applied statistics and machine learning. These often lead to
difficult computational problems, which are increasingly being solved on
parallel and distributed systems such as compute clusters. Recent work has
proposed running iterative algorithms such as gradient descent and MCMC in
parallel asynchronously for increased performance, with good empirical results
in certain problems. Unfortunately, for MCMC this parallelization technique
requires new convergence theory, as it has been explicitly demonstrated to lead
to divergence on some examples. Recent theory on Asynchronous Gibbs sampling
describes why these algorithms can fail, and provides a way to alter them to
make them converge. In this article, we describe how to apply this theory in a
generic setting, to understand the asynchronous behavior of any MCMC algorithm,
including those implemented using parameter servers, and those not based on
Gibbs sampling.
| 1 | 0 | 0 | 1 | 0 | 0 |
Predicting Hurricane Trajectories using a Recurrent Neural Network | Hurricanes are cyclones, originating over tropical and subtropical
waters, that circulate about a defined center with sustained wind speeds
exceeding 75 mph. At landfall, hurricanes can result in severe disasters. The
accuracy of predicting their trajectory paths is critical to reduce economic
loss and save human lives. Given the complexity and nonlinearity of weather
data, a recurrent neural network (RNN) could be beneficial in modeling
hurricane behavior. We propose the application of a fully connected RNN to
predict the trajectory of hurricanes. We employed the RNN over a fine grid to
reduce typical truncation errors. We utilized the latitude, longitude, wind
speed, and pressure data publicly provided by the National Hurricane Center
(NHC) to predict the trajectory of a hurricane at 6-hour intervals. Results
show that this proposed technique is competitive with methods currently
employed by the NHC and can predict up to approximately 120 hours of hurricane
path.
| 0 | 0 | 0 | 1 | 0 | 0 |
HJB equations in infinite dimension and optimal control of stochastic evolution equations via generalized Fukushima decomposition | A stochastic optimal control problem driven by an abstract evolution equation
in a separable Hilbert space is considered. Thanks to the identification of the
mild solution of the state equation as a $\nu$-weak Dirichlet process, the
value process is proved to be a real weak Dirichlet process. The uniqueness of
the corresponding decomposition is used to prove a verification theorem. With
this technique, several of the required assumptions are milder than those
employed in previous contributions on non-regular solutions of
Hamilton-Jacobi-Bellman equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Developing a Method to Determine Electrical Conductivity in Meteoritic Materials with Applications to Induction Heating Theory (2008 Student Thesis) | Magnetic induction was first proposed as a planetary heating mechanism by
Sonett and Colburn in 1968; in recent years, this theory has lost favor as a
plausible source of heating in the early solar system. However, new models of
proto-planetary disk evolution suggest that magnetic fields play an important
role in solar system formation. In particular, the magneto-hydrodynamic
behavior of proto-planetary disks is believed to be responsible for the net
outward flow of angular momentum in the solar system. It is important to
re-evaluate the plausibility of magnetic induction based on the intense
magnetic field environments described by the most recent models of
proto-planetary disk evolution.
In order to re-evaluate electromagnetic induction theory the electrical
conductivity of meteorites must be determined. To develop a technique capable
of making these measurements, a time-varying magnetic field was generated to
inductively heat metallic control samples. The thermal response of each sample,
which depends on electrical conductivity, was monitored until a thermal steady
state was achieved. The relationship between conductivity and thermal response
can be exploited to estimate the electrical conductivity of unknown samples.
After applying the technique to various metals it was recognized that this
method is not capable of making precise electrical conductivity measurements.
However, this method can constrain the product of the electrical conductivity
and the square of the magnetic permeability, or $\sigma\mu^2$, for meteoritic
and metallic samples alike. The results also illustrate that, along with the
electrical conductivity $\sigma$, the magnetic permeability $\mu$ of a
substance has an important effect on induction heating phenomena for
paramagnetic ($\mu/\mu_0 > 1$) and especially ferromagnetic
($\mu/\mu_0 \gg 1$) materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
A KiDS weak lensing analysis of assembly bias in GAMA galaxy groups | We investigate possible signatures of halo assembly bias for
spectroscopically selected galaxy groups from the GAMA survey using weak
lensing measurements from the spatially overlapping regions of the deeper,
high-imaging-quality photometric KiDS survey. We use GAMA groups with an
apparent richness larger than 4 to identify samples with comparable mean host
halo masses but with a different radial distribution of satellite galaxies,
which is a proxy for the formation time of the haloes. We measure the weak
lensing signal for groups with a steeper than average and with a shallower than
average satellite distribution and find no sign of halo assembly bias, with the
bias ratio of $0.85^{+0.37}_{-0.25}$, which is consistent with the $\Lambda$CDM
prediction. Our galaxy groups have typical masses of $10^{13} M_{\odot}/h$,
naturally complementing previous studies of halo assembly bias on galaxy
cluster scales.
| 0 | 1 | 0 | 0 | 0 | 0 |