Columns: title (string, 7–239 chars), abstract (string, 7–2.76k chars), and the binary (int64, 0/1) subject labels cs, phy, math, stat, quantitative biology, quantitative finance.

title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Algebraic laminations for free products and arational trees | This work is the first step towards a description of the Gromov boundary of
the free factor graph of a free product, with applications to subgroup
classification for outer automorphisms. We extend the theory of algebraic
laminations dual to trees, as developed by Coulbois, Hilion, Lustig and
Reynolds, to the context of free products; this also gives us an opportunity to
give a unified account of this theory. We first show that any $\mathbb{R}$-tree
with dense orbits in the boundary of the corresponding outer space can be
reconstructed as a quotient of the boundary of the group by its dual
lamination. We then describe the dual lamination in terms of a band complex on
compact $\mathbb{R}$-trees (generalizing Coulbois-Hilion-Lustig's compact
heart), and we analyze this band complex using versions of the Rips machine and
of the Rauzy-Veech induction. An important output of the theory is that the
above map from the boundary of the group to the $\mathbb{R}$-tree is 2-to-1
almost everywhere.
A key point for our intended application is a unique duality result for
arational trees. It says that if two trees have a leaf in common in their dual
laminations, and if one of the trees is arational and relatively free, then
they are equivariantly homeomorphic.
This statement is an analogue of a result in the free group saying that if
two trees are dual to a common current and one of the trees is free arational,
then the two trees are equivariantly homeomorphic. However, we notice that in
the setting of free products, the continuity of the pairing between trees and
currents fails. For this reason, in all this paper, we work with laminations
rather than with currents.
| 0 | 0 | 1 | 0 | 0 | 0 |
Real-Time Recovery Efficiencies and Performance of the Palomar Transient Factory's Transient Discovery Pipeline | We present the transient source detection efficiencies of the Palomar
Transient Factory (PTF), parameterizing the number of transients that PTF
found, versus the number of similar transients that occurred over the same
period in the survey search area but that were missed. PTF was an optical sky
survey carried out with the Palomar 48-inch telescope over 2009-2012, observing
more than 8000 square degrees of sky with cadences of between 1 and 5 days,
locating around 50,000 non-moving transient sources, and spectroscopically
confirming around 1900 supernovae. We assess the effectiveness with which PTF
detected transient sources, by inserting ~7 million artificial point sources
into real PTF data. We then study the efficiency with which the PTF real-time
pipeline recovered these sources as a function of the source magnitude, host
galaxy surface brightness, and various observing conditions (using proxies for
seeing, sky brightness, and transparency). The product of this study is a
multi-dimensional recovery efficiency grid appropriate for the range of
observing conditions that PTF experienced, and that can then be used for
studies of the rates, environments, and luminosity functions of different
transient types using detailed Monte Carlo simulations. We illustrate the
technique using the observationally well-understood class of type Ia
supernovae.
| 0 | 1 | 0 | 0 | 0 | 0 |
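The injection-and-recovery procedure summarized above reduces, at its core, to binning the injected fakes and the recovered fakes over the same parameter grid and taking the ratio. A minimal sketch (the magnitude-dependent detection probability used to generate the toy data is an assumption for illustration, not PTF's measured efficiency):

```python
import numpy as np

def efficiency_grid(injected, recovered_mask, bins):
    """Bin injected fake sources and compute the fraction recovered per bin.

    injected:       (N, D) array of per-source parameters (e.g. magnitude,
                    host surface brightness, a seeing proxy).
    recovered_mask: (N,) boolean, True if the pipeline re-detected the fake.
    bins:           list of D bin-edge arrays, one per parameter.
    """
    total, _ = np.histogramdd(injected, bins=bins)
    found, _ = np.histogramdd(injected[recovered_mask], bins=bins)
    with np.errstate(invalid="ignore", divide="ignore"):
        eff = np.where(total > 0, found / total, np.nan)  # NaN where no fakes landed
    return eff

# Toy example: efficiency versus magnitude only.
rng = np.random.default_rng(0)
mags = rng.uniform(16.0, 22.0, size=(10000, 1))
detect_prob = np.clip(1.0 - 0.15 * (mags[:, 0] - 16.0), 0.0, 1.0)  # assumed falloff
recovered = rng.random(10000) < detect_prob
eff = efficiency_grid(mags, recovered, [np.linspace(16.0, 22.0, 7)])
```

In the full multi-dimensional case the same two `histogramdd` calls apply unchanged; only `bins` grows one edge array per observing-condition axis.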
Spatial Risk Measure for Max-Stable and Max-Mixture Processes | In this paper, we consider isotropic and stationary max-stable, inverse
max-stable and max-mixture processes $X=(X(s))_{s\in\mathbb{R}^2}$ and the damage
function $\mathcal{D}_X^{\nu} = |X|^\nu$ with $0<\nu<1/2$. We study the quantitative
behavior of a risk measure defined as the variance of the average of
$\mathcal{D}_X^{\nu}$ over a region $\mathcal{A}\subset \mathbb{R}^2$. This kind of risk
measure has already been introduced and studied for some max-stable
processes by Koch (2015). In this study, we generalize this risk measure to apply
to several models: asymptotic dependence represented by max-stable processes,
asymptotic independence represented by inverse max-stable processes, and
mixtures of the two. We evaluate the proposed risk measure through a simulation
study.
| 0 | 0 | 1 | 1 | 0 | 0 |
Scale invariant transfer matrices and Hamiltonians | Given a direct system of Hilbert spaces $s\mapsto \mathcal H_s$ (with
isometric inclusion maps $\iota_s^t:\mathcal H_s\rightarrow \mathcal H_t$ for
$s\leq t$) corresponding to quantum systems on scales $s$, we define notions of
scale invariant and weakly scale invariant operators. In some cases of quantum
spin chains we find conditions for transfer matrices and nearest neighbour
Hamiltonians to be scale invariant or weakly so. Scale invariance forces
spatial inhomogeneity of the spectral parameter. But weakly scale invariant
transfer matrices may be spatially homogeneous in which case the change of
spectral parameter from one scale to another is governed by a classical
dynamical system exhibiting fractal behaviour.
| 0 | 0 | 1 | 0 | 0 | 0 |
Visual-Based Analysis of Classification Measures with Applications to Imbalanced Data | With a plethora of available classification performance measures, choosing
the right metric for the right task requires careful thought. To make this
decision in an informed manner, one should study and compare general properties
of candidate measures. However, analysing measures with respect to complete
ranges of their domain values is a difficult and challenging task. In this
study, we attempt to support such analyses with a specialized visualization
technique, which operates in a barycentric coordinate system using a 3D
tetrahedron. Additionally, we adapt this technique to the context of imbalanced
data and put forward a set of properties which should be taken into account
when selecting a classification performance measure. As a result, we compare 22
popular measures and show important differences in their behaviour. Moreover,
for parametric measures such as the F$_{\beta}$ and IBA$_\alpha$(G-mean), we
analytically derive parameter thresholds that change measure properties.
Finally, we provide an online visualization tool that can aid the analysis of
complete domain ranges of performance measures.
| 1 | 0 | 0 | 0 | 0 | 0 |
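The barycentric tetrahedron visualization described above rests on a simple idea: a normalized 2x2 confusion matrix is a point in a 3-simplex, which embeds in a 3D tetrahedron as a convex combination of its four vertices. A minimal sketch (the vertex placement and the TP/FN/FP/TN ordering are assumptions of this sketch, not necessarily the paper's exact convention):

```python
import numpy as np

# Vertices of a regular tetrahedron; one vertex per confusion-matrix entry
# (order TP, FN, FP, TN assumed for illustration).
VERTS = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)

def confusion_to_point(tp, fn, fp, tn):
    """Barycentric embedding: a normalized confusion matrix maps to a point
    inside the tetrahedron; a measure can then be rendered as a scalar field
    over this solid."""
    w = np.array([tp, fn, fp, tn], dtype=float)
    w = w / w.sum()          # normalize counts to barycentric weights
    return w @ VERTS         # convex combination of the vertices

perfect = confusion_to_point(50, 0, 0, 50)   # all mass on TP and TN
center = confusion_to_point(25, 25, 25, 25)  # uniform confusion matrix
```

With this mapping, every classifier evaluated on a fixed test set is a single point, and sweeping a measure over the whole tetrahedron exposes the "complete domain ranges" the abstract refers to.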
Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation | Modern large displacement optical flow algorithms usually use an
initialization by either sparse descriptor matching techniques or dense
approximate nearest neighbor fields. While the latter have the advantage of
being dense, they have the major disadvantage of being very outlier-prone as
they are not designed to find the optical flow, but the visually most similar
correspondence. In this article we present a dense correspondence field
approach that is much less outlier-prone and thus much better suited for
optical flow estimation than approximate nearest neighbor fields. Our approach
does not require explicit regularization, smoothing (like median filtering) or
a new data term. Instead we solely rely on patch matching techniques and a
novel multi-scale matching strategy. We also present enhancements for outlier
filtering. We show that our approach is better suited for large displacement
optical flow estimation than modern descriptor matching techniques. We do so by
initializing EpicFlow with our approach instead of their originally used
state-of-the-art descriptor matching technique. We significantly outperform the
original EpicFlow on MPI-Sintel, KITTI 2012, KITTI 2015 and Middlebury. In this
extended article of our former conference publication we further improve our
approach in matching accuracy as well as runtime and present more experiments
and insights.
| 1 | 0 | 0 | 0 | 0 | 0 |
Critical current density and vortex pinning mechanism of Lix(NH3)yFe2Te1.2Se0.8 single crystals | We grew Lix(NH3)yFe2Te1.2Se0.8 single crystals successfully using the
low-temperature ammonothermal method and the onset superconducting transition
temperature Tc,onset is increased to 21 K when compared to 14 K in the parent
compound FeTe0.6Se0.4. The derived critical current density Jc increases
remarkably to 2.6*10^5 A/cm^2 at 2 K. Further analysis indicates that the
dominant pinning mechanism in Lix(NH3)yFe2Te1.2Se0.8 single crystal is the
interaction between vortex and surface-like defects with normal core, possibly
originating from the stacking faults along the c axis, by variations in the
charge-carrier mean free path l near the defects (delta l pinning). Moreover,
the flux creep is important to the vortex dynamics of this material.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pseudopotential for Many-Electron Atoms and Ions | Electron-electron correlation forms the basis of difficulties encountered in
many-body problems. Accurate treatment of the correlation problem is likely to
unravel some nice physical properties of matter embedded in this correlation.
In an effort to tackle this many-body problem, two complementary parameter-free
pseudopotentials for $n$-electron atoms and ions are suggested in this study.
Using one of the pseudopotentials, near-exact values of the ground-state
ionization potentials of helium, lithium, and beryllium atoms have been
calculated. The other pseudopotential also proves to be capable of yielding
reasonable and reliable quantum physical observables within the
non-relativistic quantum mechanics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Secular Orbit Evolution in Systems with a Strong External Perturber - A Simple and Accurate Model | We present a semi-analytical correction to the seminal solution for the
secular motion of a planet's orbit under gravitational influence of an external
perturber derived by Heppenheimer (1978). A comparison between analytical
predictions and numerical simulations allows us to determine corrective factors
for the secular frequency and forced eccentricity in the co-planar restricted
three-body problem. The correction is given in the form of a polynomial
function of the system's parameters that can be applied to first-order forced
eccentricity and secular frequency estimates. The resulting secular equations
are simple, straightforward to use and improve the fidelity of Heppenheimer's
solution well beyond higher-order models. The quality and convergence of the
corrected secular equations are tested for a wide range of parameters and
limits of its applicability are given.
| 0 | 1 | 0 | 0 | 0 | 0 |
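The corrective scheme above multiplies a first-order estimate by a fitted factor. A minimal sketch using the commonly quoted first-order forced eccentricity for the coplanar restricted problem, $e_f \approx \frac{5}{4}\,\frac{a_p}{a_b}\,\frac{e_b}{1-e_b^2}$ (attributed to Heppenheimer 1978); the paper's polynomial corrective factor is specific to its fit, so it is left as a free parameter here:

```python
def forced_eccentricity(a_p, a_b, e_b, correction=1.0):
    """First-order forced eccentricity of a planet perturbed by an external
    companion, times an empirical corrective factor.

    a_p : planet semi-major axis
    a_b : perturber (binary) semi-major axis
    e_b : perturber eccentricity
    correction : multiplicative factor standing in for the paper's
                 polynomial fit in the system parameters (assumed = 1 here).
    """
    e_f = 1.25 * (a_p / a_b) * e_b / (1.0 - e_b**2)
    return correction * e_f

e_raw = forced_eccentricity(1.0, 10.0, 0.3)                 # uncorrected estimate
e_fix = forced_eccentricity(1.0, 10.0, 0.3, correction=0.9) # with a fitted factor
```

The point of the paper's approach is that `correction` is itself a smooth polynomial in $(a_p/a_b, e_b, \mu)$ calibrated against N-body runs, so applying it costs nothing beyond the analytic estimate.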
Reliable counting of weakly labeled concepts by a single spiking neuron model | Making an informed, correct and quick decision can be life-saving. It's
crucial for animals during an escape behaviour or for autonomous cars during
driving. The decision can be complex and may involve an assessment of the
amount of threats present and the nature of each threat. Thus, we should expect
early sensory processing to supply classification information fast and
accurately, even before relaying the information to higher brain areas or more
complex system components downstream. Today, advanced convolutional artificial
neural networks can successfully solve visual detection and classification
tasks and are commonly used to build complex decision making systems. However,
in order to perform well on these tasks they require increasingly complex,
"very deep" model structures, which are costly in inference run-time, energy
consumption and number of training samples, and are often only trainable on
cloud-computing clusters. A single spiking neuron has been shown to be able to solve
recognition tasks for homogeneous Poisson input statistics, a commonly used
model for spiking activity in the neocortex. When modeled as a leaky
integrate-and-fire neuron with a gradient-descent learning algorithm, it was
shown to possess a variety of complex computational capabilities. Here we improve its
implementation. We also account for more natural stimulus generated inputs that
deviate from this homogeneous Poisson spiking. The improved gradient-based
local learning rule allows for significantly better and stable generalization.
We also show that with its improved capabilities it can count weakly labeled
concepts by applying our model to a problem of multiple instance learning (MIL)
with counting where labels are only available for collections of concepts. In
this counting MNIST task the neuron exploits the improved implementation and
outperforms conventional ConvNet architecture under similar conditions.
| 0 | 0 | 0 | 0 | 1 | 0 |
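The leaky integrate-and-fire model named above is compact enough to sketch directly: the membrane potential leaks toward zero, jumps on each input spike, and emits an output spike (with reset) on crossing threshold. All parameters below are illustrative defaults, not the paper's fitted values:

```python
import numpy as np

def lif_count(spike_times, tau=20.0, v_th=1.0, w=0.12, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire readout: each input spike adds weight w to the
    membrane potential v, which leaks with time constant tau (ms); an output
    spike fires (and v resets) when v crosses v_th. Returns the output count."""
    grid = np.zeros(int(t_max / dt))
    for t in spike_times:                 # bin input spikes onto the time grid
        idx = int(round(t / dt))
        if idx < len(grid):
            grid[idx] += 1
    v, out = 0.0, 0
    for s in grid:
        v += -v / tau * dt + w * s        # leak + synaptic input
        if v >= v_th:                     # threshold crossing -> output spike
            out += 1
            v = 0.0                       # reset
    return out

dense = lif_count(np.arange(0.0, 200.0, 0.5))    # high-rate input train
sparse = lif_count(np.arange(0.0, 200.0, 10.0))  # low-rate input train
```

With these defaults the sparse train never reaches threshold while the dense one fires repeatedly, which is the crude sense in which a single such neuron can "count" how much input evidence is present.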
On the stability and applications of distance-based flexible formations | This paper investigates the stability of distance-based \textit{flexible}
undirected formations in the plane. Without rigidity, there exists a set of
connected shapes for given distance constraints, which is called the ambit. We
show that a flexible formation can lose its flexibility, or equivalently may
reduce the degrees of freedom of its ambit, if a small disturbance is
introduced in the range sensor of the agents. The stability of the disturbed
equilibrium can be characterized by analyzing the eigenvalues of the linearized
augmented error system. Unlike infinitesimally rigid formations, the disturbed
desired equilibrium can become unstable regardless of how small the
disturbance is. We finally present two examples of how to exploit these
disturbances as design parameters. The first example shows how to combine rigid
and flexible formations such that some of the agents can move freely in the
desired and locally stable ambit. The second example shows how to achieve a
specific shape with fewer edges than necessary for the standard controller
in rigid formations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform | The general space-time evolution of the scattering of an incident acoustic
plane wave pulse by an arbitrary configuration of targets is treated by
employing a recently developed non-singular boundary integral method to solve
the Helmholtz equation in the frequency domain from which the fast Fourier
transform is used to obtain the full space-time solution of the wave equation.
The non-singular boundary integral solution can enforce the radiation boundary
condition at infinity exactly and can account for multiple scattering effects
at all spacings between scatterers without adverse effects on the numerical
precision. More generally, the absence of singular kernels in the non-singular
integral equation confers high numerical stability and precision for smaller
numbers of degrees of freedom. The use of fast Fourier transform to obtain the
time dependence is not constrained to discrete time steps and is particularly
efficient for studying the response to different incident pulses by the same
configuration of scatterers. The precision that can be attained using a smaller
number of Fourier components is also quantified.
| 0 | 1 | 0 | 0 | 0 | 0 |
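The frequency-to-time pipeline in the abstract above (solve the Helmholtz problem per frequency, multiply by the incident pulse spectrum, inverse-FFT) can be sketched as follows; the delay-and-attenuate transfer function is a stand-in assumption for the actual boundary-integral solution, which is problem-specific:

```python
import numpy as np

# Sample the incident pulse on a uniform time grid.
n, dt = 1024, 1e-3
t = np.arange(n) * dt
pulse = np.exp(-((t - 0.1) / 0.01) ** 2)        # incident Gaussian pulse

# Frequency-domain "solution": here a toy transfer function applying a
# 0.2 s propagation delay and a factor-0.5 attenuation per frequency.
freqs = np.fft.rfftfreq(n, dt)
H = 0.5 * np.exp(-2j * np.pi * freqs * 0.2)

# Multiply spectra and inverse-FFT back to the time domain.
scattered = np.fft.irfft(np.fft.rfft(pulse) * H, n)
```

Note the efficiency the abstract mentions: for a new incident pulse on the same scatterer configuration, only `pulse` changes; the expensive per-frequency solve (`H`) is reused.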
Constrained Least Squares for Extended Complex Factor Analysis | For subspace estimation with an unknown colored noise, Factor Analysis (FA)
is a good candidate for replacing the popular eigenvalue decomposition (EVD).
Finding the unknowns in factor analysis can be done by solving a non-linear
least square problem. For this type of optimization problems, the Gauss-Newton
(GN) algorithm is a powerful and simple method. The most expensive part of the
GN algorithm is finding the direction of descent by solving a system of
equations at each iteration. In this paper we show that for FA, the matrices
involved in solving these systems of equations can be diagonalized in a closed
form fashion and the solution can be found in a computationally efficient way.
We show how the unknown parameters can be updated without actually constructing
these matrices. The convergence performance of the algorithm is studied via
numerical simulations.
| 0 | 0 | 0 | 1 | 0 | 0 |
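For context, a generic Gauss-Newton iteration for nonlinear least squares looks as follows; the `np.linalg.solve` call on the normal equations is exactly the per-iteration cost that the paper removes by diagonalizing the involved matrices in closed form. The exponential-fit toy problem is purely illustrative:

```python
import numpy as np

def gauss_newton(r, J, theta, iters=20):
    """Generic Gauss-Newton for min ||r(theta)||^2: at each step solve the
    normal equations J^T J d = -J^T r for the descent direction d."""
    for _ in range(iters):
        Jk, rk = J(theta), r(theta)
        d = np.linalg.solve(Jk.T @ Jk, -Jk.T @ rk)  # the expensive step
        theta = theta + d
    return theta

# Toy problem: fit y = exp(a*x) to data generated with a = 0.5.
x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.5 * x)
a_hat = gauss_newton(
    lambda th: np.exp(th[0] * x) - y,            # residual vector
    lambda th: (x * np.exp(th[0] * x))[:, None],  # Jacobian (50 x 1)
    np.array([0.0]),
)
```

In the factor-analysis setting of the paper, the analogue of `Jk.T @ Jk` has structure that admits a closed-form eigendecomposition, so the direction `d` can be formed without ever building the matrix.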
Generalized $k$-core pruning process on directed networks | The resilience of a complex interconnected system concerns the size of the
macroscopic functioning node clusters after external perturbations based on a
random or designed scheme. For a representation of the interconnected systems
with directional or asymmetrical interactions among constituents, the directed
network is a convenient choice. Yet how the interaction directions affect the
network resilience still lacks thorough exploration. Here, we study the
resilience of directed networks with a generalized $k$-core pruning process as
a simple failure procedure based on both the in- and out-degrees of nodes, in
which any node with an in-degree $< k_{in}$ or an out-degree $< k_{ou}$ is
removed iteratively. With an explicitly analytical framework, we can predict
the relative sizes of residual node clusters on uncorrelated directed random
graphs. We show that discontinuous transitions arise for cases with $k_{in}
\geq 2$ or $k_{ou} \geq 2$, and that the unidirectional interactions among nodes
make the networks more vulnerable against perturbations based on in- and
out-degrees separately.
| 0 | 1 | 0 | 0 | 0 | 0 |
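The pruning rule stated above is simple enough to sketch directly (a naive, non-optimized version that recomputes degrees on every sweep until no node violates the thresholds):

```python
def prune(nodes, edges, k_in, k_out):
    """Generalized k-core pruning on a directed graph: iteratively remove any
    node with in-degree < k_in or out-degree < k_out, counting only edges
    between surviving nodes. Returns the surviving node set."""
    alive = set(nodes)
    changed = True
    while changed:
        changed = False
        indeg = {n: 0 for n in alive}
        outdeg = {n: 0 for n in alive}
        for u, v in edges:                    # degrees within the survivors
            if u in alive and v in alive:
                outdeg[u] += 1
                indeg[v] += 1
        for n in list(alive):
            if indeg[n] < k_in or outdeg[n] < k_out:
                alive.remove(n)               # violates a threshold -> prune
                changed = True
    return alive

# Directed 3-cycle plus a dangling sink: the sink dies, the cycle survives.
core = prune([1, 2, 3, 4], [(1, 2), (2, 3), (3, 1), (1, 4)], 1, 1)
```

The paper's contribution is the analytical (not simulated) prediction of the residual cluster sizes this procedure leaves on uncorrelated directed random graphs; this sketch only makes the failure procedure itself concrete.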
Benchmarks for single-phase flow in fractured porous media | This paper presents several test cases intended to be benchmarks for
numerical schemes for single-phase fluid flow in fractured porous media. A
number of solution strategies are compared, including a vertex and a
cell-centered finite volume method, a non-conforming embedded discrete fracture
model, a primal and a dual extended finite element formulation, and a mortar
discrete fracture model. The proposed benchmarks test the schemes by increasing
the difficulties in terms of network geometry, e.g. intersecting fractures, and
physical parameters, e.g. low and high fracture-matrix permeability ratio as
well as heterogeneous fracture permeabilities. For each problem, the results
presented by the participants are the number of unknowns, the approximation
errors in the porous matrix and in the fractures with respect to a reference
solution, and the sparsity and condition number of the discretized linear
system. All data and meshes used in this study are publicly available for
further comparisons.
| 1 | 0 | 1 | 0 | 0 | 0 |
On the "Calligraphy" of Books | Authorship attribution is a natural language processing task that has been
widely studied, often by considering small order statistics. In this paper, we
explore a complex network approach to assign the authorship of texts based on
their mesoscopic representation, in an attempt to capture the flow of the
narrative. Indeed, as reported in this work, such an approach allowed the
identification of the dominant narrative structure of the studied authors. This
has been achieved due to the ability of the mesoscopic approach to take into
account relationships between different, not necessarily adjacent, parts of the
text, which is able to capture the story flow. The potential of the proposed
approach has been illustrated through principal component analysis, a
comparison with the chance baseline method, and network visualization. Such
visualizations reveal individual characteristics of the authors, which can be
understood as a kind of calligraphy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Voice Conversion from Unaligned Corpora using Variational Autoencoding Wasserstein Generative Adversarial Networks | Building a voice conversion (VC) system from non-parallel speech corpora is
challenging but highly valuable in real application scenarios. In most
situations, the source and the target speakers do not repeat the same texts or
they may even speak different languages. In this case, one possible, although
indirect, solution is to build a generative model for speech. Generative models
focus on explaining the observations with latent variables instead of learning
a pairwise transformation function, thereby bypassing the requirement of speech
frame alignment. In this paper, we propose a non-parallel VC framework with a
variational autoencoding Wasserstein generative adversarial network (VAW-GAN)
that explicitly considers a VC objective when building the speech model.
Experimental results corroborate the capability of our framework for building a
VC system from unaligned data, and demonstrate improved conversion quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quality in digital repositories in Argentina: a comparative and qualitative study | Numerous institutions and organizations need not only to preserve the
material and publications they produce, but also have as their task (although
it would be desirable for it to be an obligation) to publish, disseminate and make
publicly available all the results of the research and any other
scientific/academic material. The Open Archives Initiative (OAI) and the
introduction of Open Archives Initiative Protocol for Metadata Harvesting
(OAI-PMH), make this task much easier. The main objective of this work is to
make a comparative and qualitative study of the data -metadata specifically-
contained in the whole set of Argentine repositories listed in the ROAR portal,
focusing on the functional perspective of the quality of this metadata. Another
objective is to offer an overview of the status of these repositories, in an
attempt to detect common failures and errors institutions incur when storing
the metadata of the resources contained in these repositories, and thus be able
to suggest measures to be able to improve the load and further retrieval
processes. It was found that the eight most used Dublin Core fields are:
identifier, type, title, date, subject, creator, language and description. Not
all repositories fill all the fields, and the lack of normalization, or the
excessive use of fields like language, type, format and subject is somewhat
striking, and in some cases even alarming.
| 1 | 0 | 0 | 0 | 0 | 0 |
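The field-usage tally behind findings like the one above can be sketched with the standard library alone; the sample record and the rule of counting only non-empty elements are assumptions of this sketch, not the paper's exact methodology:

```python
import xml.etree.ElementTree as ET
from collections import Counter

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core element namespace

def count_dc_fields(xml_record):
    """Count which Dublin Core elements a harvested record actually fills
    (non-empty text only) -- the kind of tally behind a field-usage ranking."""
    counts = Counter()
    for el in ET.fromstring(xml_record).iter():
        if el.tag.startswith(DC) and el.text and el.text.strip():
            counts[el.tag[len(DC):]] += 1
    return counts

# Hypothetical OAI-PMH-style record; note the empty dc:language element.
sample = """<oai_dc xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Ejemplo</dc:title>
  <dc:identifier>http://example.org/1</dc:identifier>
  <dc:language></dc:language>
</oai_dc>"""
fields = count_dc_fields(sample)
```

Summing such counters over every record harvested via OAI-PMH yields exactly the per-field usage statistics (identifier, type, title, date, ...) the abstract reports.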
Time-triggering versus event-triggering control over communication channels | Time-triggered and event-triggered control strategies for stabilization of an
unstable plant over a rate-limited communication channel subject to unknown,
bounded delay are studied and compared. Event triggering carries implicit
information, revealing the state of the plant. However, the delay in the
communication channel causes information loss, as it makes the state
information out of date. There is a critical delay value, when the loss of
information due to the communication delay perfectly compensates the implicit
information carried by the triggering events. This occurs when the maximum
delay equals the inverse of the entropy rate of the plant. In this context,
extensions of our previous results for event triggering strategies are
presented for vector systems and are compared with the data-rate theorem for
time-triggered control, that is extended here to a setting with unknown delay.
| 1 | 0 | 1 | 0 | 0 | 0 |
Lower spectral radius and spectral mapping theorem for suprema preserving mappings | We study Lipschitz, positively homogeneous and finite suprema preserving
mappings defined on a max-cone of positive elements in a normed vector lattice.
We prove that the lower spectral radius of such a mapping is always a minimum
value of its approximate point spectrum. We apply this result to show that the
spectral mapping theorem holds for the approximate point spectrum of such a
mapping. By applying this spectral mapping theorem we obtain new inequalities
for the Bonsall cone spectral radius of max type kernel operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
SHINE: Signed Heterogeneous Information Network Embedding for Sentiment Link Prediction | In online social networks people often express attitudes towards others,
which forms massive sentiment links among users. Predicting the sign of
sentiment links is a fundamental task in many areas such as personal
advertising and public opinion analysis. Previous works mainly focus on textual
sentiment classification, however, text information can only disclose the "tip
of the iceberg" about users' true opinions, of which the most are unobserved
but implied by other sources of information such as social relation and users'
profile. To address this problem, in this paper we investigate how to predict
possibly existing sentiment links in the presence of heterogeneous information.
First, due to the lack of explicit sentiment links in mainstream social
networks, we establish a labeled heterogeneous sentiment dataset which consists
of users' sentiment relation, social relation and profile knowledge by
entity-level sentiment extraction method. Then we propose a novel and flexible
end-to-end Signed Heterogeneous Information Network Embedding (SHINE) framework
to extract users' latent representations from heterogeneous networks and
predict the sign of unobserved sentiment links. SHINE utilizes multiple deep
autoencoders to map each user into a low-dimension feature space while
preserving the network structure. We demonstrate the superiority of SHINE over
state-of-the-art baselines on link prediction and node recommendation in two
real-world datasets. The experimental results also prove the efficacy of SHINE
in cold start scenario.
| 1 | 0 | 0 | 1 | 0 | 0 |
Eliminating Field Quantifiers in Strongly Dependent Henselian Fields | We prove elimination of field quantifiers for strongly dependent henselian
fields in the Denef-Pas language. This is achieved by proving the result for a
class of fields generalizing algebraically maximal Kaplansky fields. We deduce
that if $(K,v)$ is strongly dependent then so is its henselization.
| 0 | 0 | 1 | 0 | 0 | 0 |
A study of existing Ontologies in the IoT-domain | Several domains have adopted the increasing use of IoT-based devices to
collect sensor data for generating abstractions and perceptions of the real
world. This sensor data is multi-modal and heterogeneous in nature. This
heterogeneity induces interoperability issues while developing cross-domain
applications, thereby restricting the possibility of reusing sensor data to
develop new applications. As a solution to this, semantic approaches have been
proposed in the literature to tackle problems related to interoperability of
sensor data. Several ontologies have been proposed to handle different aspects
of IoT-based sensor data collection, ranging from discovering the IoT sensors
for data collection to applying reasoning on the collected sensor data for
drawing inferences. In this paper, we survey these existing semantic ontologies
to provide an overview of the recent developments in this field. We highlight
the fundamental ontological concepts (e.g., sensor-capabilities and
context-awareness) required for an IoT-based application, and survey the
existing ontologies which include these concepts. Based on our study, we also
identify the shortcomings of currently available ontologies, which serves as a
stepping stone to state the need for a common unified ontology for the IoT
domain.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamic Advisor-Based Ensemble (dynABE): Case Study in Stock Trend Prediction of Critical Metal Companies | The demand for metals by modern technology has been shifting from common base
metals to a variety of minor metals, such as cobalt or indium. The industrial
importance and limited geological availability of some minor metals have led to
them being considered more "critical," and there is a growing investment
interest in such critical metals and their producing companies. In this
research, we create a novel framework, Dynamic Advisor-Based Ensemble (dynABE),
for stock prediction and use critical metal companies as a case study. dynABE
uses domain knowledge to diversify the feature set by dividing them into
different "advisors," creates high-level ensembles with complex base models for
each advisor, and combines the advisors together dynamically during validation
with a novel and effective online update strategy. We test dynABE on three
cobalt-related companies, and it achieves the best-case misclassification error
of 31.12% and excess return of 477% compared to the stock itself in a year and
a half. In addition to presenting an effective stock prediction model with
decent profitability, this research further analyzes dynABE to visualize how
it works in practice, which also yields discoveries of its interesting
behaviors when processing time-series data.
| 0 | 0 | 0 | 0 | 0 | 1 |
High-Speed Demodulation of weak FBGs Based on Microwave Photonics and Chromatic Dispersion | A high speed quasi-distributed demodulation method based on the microwave
photonics and the chromatic dispersion effect is designed and implemented for
weak fiber Bragg gratings (FBGs). Due to the effect of dispersion compensation
fiber (DCF), FBG wavelength shift leads to the change of the difference
frequency signal at the mixer. By crossing microwave sweep cycles, all
wavelengths of the cascaded FBGs can be obtained at high speed by measuring the
frequency changes. Moreover, by introducing the Chirp-Z transform and a Hanning
window algorithm, the analysis of difference frequency signal is achieved very
well. By adopting the single-peak filter as a reference, the length disturbance
of the DCF caused by temperature can also be eliminated. Therefore, the accuracy
of this novel method is greatly improved, and high-speed demodulation of FBGs can
easily be realized. The feasibility and performance are experimentally demonstrated
using 105 FBGs with 0.1% reflectivity, 1 m spatial interval. Results show that
each grating can be distinguished well, and the demodulation rate is as high as
40 kHz, the accuracy is about 8 pm.
| 0 | 1 | 0 | 0 | 0 | 0 |
Destructive Impact of Molecular Noise on Nanoscale Electrochemical Oscillators | We study the loss of coherence of electrochemical oscillations on meso- and
nanosized electrodes with numeric simulations of the electrochemical master
equation for a prototypical electrochemical oscillator, the hydrogen peroxide
reduction on Pt electrodes in the presence of halides. On nanoelectrodes, the
electrode potential changes whenever a stochastic electron-transfer event takes
place. Electrochemical reaction rate coefficients depend exponentially on the
electrode potential and become thus fluctuating quantities as well. Therefore,
also the transition rates between system states become time-dependent which
constitutes a fundamental difference to purely chemical nanoscale oscillators.
Three implications are demonstrated: (a) oscillations and steady states shift
in phase space with decreasing system size, thereby also decreasing
considerably the oscillating parameter regions; (b) the minimal number of
molecules necessary to support correlated oscillations is more than 10 times as
large as for nanoscale chemical oscillators; (c) the relation between
correlation time and variance of the period of the oscillations predicted for
chemical oscillators in the weak noise limit is only fulfilled in a very
restricted parameter range for the electrochemical nano-oscillator.
| 0 | 1 | 0 | 0 | 0 | 0 |
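The fundamental difference the abstract points to (transition rates that themselves fluctuate because each electron transfer shifts the electrode potential they depend on exponentially) can be illustrated with a minimal Gillespie-style loop; all constants and the Butler-Volmer-like rate form are illustrative assumptions, not the paper's model:

```python
import math
import random

def simulate(n_steps=2000, k0=1.0, alpha=0.5, dphi=0.01, phi0=0.0, seed=1):
    """Minimal stochastic sketch of rate-potential feedback: every
    electron-transfer event shifts the electrode potential phi by dphi, and
    the reaction rate k depends exponentially on phi, so the transition rate
    changes after each event (unlike a purely chemical nano-oscillator)."""
    random.seed(seed)
    phi, t, trace = phi0, 0.0, []
    for _ in range(n_steps):
        k = k0 * math.exp(-alpha * phi)   # potential-dependent rate
        t += random.expovariate(k)        # Gillespie waiting time at current rate
        phi += dphi                       # each event charges the electrode
        trace.append(phi)
    return trace

trace = simulate()
```

In the paper's full electrochemical master equation this feedback couples many reaction channels; the sketch only isolates the mechanism that makes the rates time-dependent.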
A new method for recognising Suzuki groups | We present a new algorithm for constructive recognition of the Suzuki groups
in their natural representations. The algorithm runs in Las Vegas polynomial
time given a discrete logarithm oracle. An implementation is available in the
Magma computer algebra system.
| 1 | 0 | 1 | 0 | 0 | 0 |
Pure $Σ_2$-Elementarity beyond the Core | We display the entire structure ${\cal R}_2$ coding $\Sigma_1$- and
$\Sigma_2$-elementarity on the ordinals. This leads to the first steps for
analyzing pure $\Sigma_3$-elementary substructures.
| 0 | 0 | 1 | 0 | 0 | 0 |
Parsimonious Inference on Convolutional Neural Networks: Learning and applying on-line kernel activation rules | A new, radical CNN design approach is presented in this paper, considering
the reduction of the total computational load during inference. This is
achieved by a new holistic intervention on both the CNN architecture and the
training procedure, which targets parsimonious inference by learning to
exploit or remove the redundant capacity of a CNN architecture. This is
accomplished by the introduction of a new structural element that can be
inserted as an add-on to any contemporary CNN architecture, whilst preserving
or even improving its recognition accuracy. Our approach formulates a
systematic and data-driven method for developing CNNs that are trained to
eventually change size and form in real-time during inference, targeting to the
smaller possible computational footprint. Results are provided for the optimal
implementation on a few modern, high-end mobile computing platforms indicating
a significant speed-up of up to x3 times.
| 1 | 0 | 0 | 0 | 0 | 0 |
Betweenness and Diversity in Journal Citation Networks as Measures of Interdisciplinarity -- A Tribute to Eugene Garfield -- | Journals were central to Eugene Garfield's research interests. Among other
things, journals are considered as units of analysis for bibliographic
databases such as the Web of Science (WoS) and Scopus. In addition to
disciplinary classifications of journals, journal citation patterns span
networks across boundaries to variable extents. Using betweenness centrality
(BC) and diversity, we elaborate on the question of how to distinguish and rank
journals in terms of interdisciplinarity. Interdisciplinarity, however, is
difficult to operationalize in the absence of an operational definition of
disciplines, and the diversity of a unit of analysis is sample-dependent. BC can be
considered as a measure of multi-disciplinarity. Diversity of co-citation in a
citing document has been considered as an indicator of knowledge integration,
but an author can also generate trans-disciplinary--that is,
non-disciplined--variation by citing sources from other disciplines. Diversity
in the bibliographic coupling among citing documents can analogously be
considered as diffusion of knowledge across disciplines. Because the citation
networks in the cited direction reflect both structure and variation, diversity
in this direction is perhaps the best available measure of interdisciplinarity
at the journal level. Furthermore, diversity is based on a summation and can
therefore be decomposed; differences among (sub)sets can be tested for
statistical significance. In an appendix, a general-purpose routine for
measuring diversity in networks is provided.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-Markovian Control with Gated End-to-End Memory Policy Networks | Partially observable environments present an important open challenge in the
domain of sequential control learning with delayed rewards. Despite numerous
attempts during the last two decades, the majority of reinforcement learning
algorithms and associated approximate models, applied to this context, still
assume Markovian state transitions. In this paper, we explore the use of a
recently proposed attention-based model, the Gated End-to-End Memory Network,
for sequential control. We call the resulting model the Gated End-to-End Memory
Policy Network. More precisely, we use a model-free value-based algorithm to
learn policies for partially observed domains using this memory-enhanced neural
network. This model is end-to-end learnable and it features unbounded memory.
Indeed, because of its attention mechanism and associated non-parametric
memory, the proposed model, unlike recurrent models, allows us to define an
attention mechanism over the observation stream. We show encouraging results that
illustrate the capability of our attention-based model in the context of the
continuous-state non-stationary control problem of stock trading. We also
present an OpenAI Gym environment for simulated stock exchange and explain its
relevance as a benchmark for the field of non-Markovian decision process
learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
A likely detection of a local interplanetary dust cloud passing near the Earth in the AKARI mid-infrared all-sky map | Context. We are creating the AKARI mid-infrared all-sky diffuse maps. Through
a foreground removal of the zodiacal emission, we serendipitously detected a
bright residual component whose angular size is about 50 x 20 deg. at a
wavelength of 9 micron. Aims. We investigate the origin and the physical
properties of the residual component. Methods. We measured the surface
brightness of the residual component in the AKARI mid-infrared all-sky maps.
Results. The residual component was significantly detected only in 2007
January, even though the same region was observed in 2006 July and 2007 July,
which shows that it is not due to Galactic emission. We suggest that this
may be a small cloud passing near the Earth. By comparing the observed
intensity ratio of I_9um/I_18um with the expected intensity ratio assuming
thermal equilibrium of dust grains at 1 AU for various dust compositions and
sizes, we find that dust grains in the moving cloud are likely to be much
smaller than typical grains that produce the bulk of the zodiacal light.
Conclusions. Considering the observed date and position, it is likely that it
originates in the solar coronal mass ejection (CME) which took place on 2007
January 25.
| 0 | 1 | 0 | 0 | 0 | 0 |
Kepler red-clump stars in the field and in open clusters: constraints on core mixing | Convective mixing in Helium-core-burning (HeCB) stars is one of the
outstanding issues in stellar modelling. Precise asteroseismic measurements
of the gravity-mode period spacing ($\Delta\Pi_1$) have opened the door to detailed
studies of the near-core structure of such stars, which had not been possible
before. Here we provide stringent tests of various core-mixing scenarios
against the largely unbiased population of red-clump stars belonging to the old
open clusters monitored by Kepler, and by coupling the updated precise
inference on $\Delta\Pi_1$ in thousands of field stars with spectroscopic
constraints. We find that models with moderate overshooting successfully
reproduce the observed range of $\Delta\Pi_1$ in clusters. In particular we
show that there is no evidence for the need to extend the size of the
adiabatically stratified core, at least at the beginning of the HeCB phase.
This conclusion is based primarily on ensemble studies of $\Delta\Pi_1$ as a
function of mass and metallicity. While $\Delta\Pi_1$ shows no appreciable
dependence on mass, we have found a clear dependence of $\Delta\Pi_1$ on
metallicity, which is also supported by predictions from models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Balanced Excitation and Inhibition are Required for High-Capacity, Noise-Robust Neuronal Selectivity | Neurons and networks in the cerebral cortex must operate reliably despite
multiple sources of noise. To evaluate the impact of both input and output
noise, we determine the robustness of single-neuron stimulus selective
responses, as well as the robustness of attractor states of networks of neurons
performing memory tasks. We find that robustness to output noise requires
synaptic connections to be in a balanced regime in which excitation and
inhibition are strong and largely cancel each other. We evaluate the conditions
required for this regime to exist and determine the properties of networks
operating within it. A plausible synaptic plasticity rule for learning that
balances weight configurations is presented. Our theory predicts an optimal
ratio of the number of excitatory and inhibitory synapses for maximizing the
encoding capacity of balanced networks for given statistics of afferent
activations. Previous work has shown that balanced networks amplify
spatio-temporal variability and account for observed asynchronous irregular
states. Here we present a novel type of balanced network that amplifies small
changes in the impinging signals, and emerges automatically from learning to
perform neuronal and network functions robustly.
| 1 | 1 | 0 | 1 | 0 | 0 |
On the coherent emission of radio frequency radiation from high energy particle showers | Extended Air Showers produced by cosmic rays impinging on the Earth's
atmosphere emit radio frequency radiation through different mechanisms.
Upon certain conditions, the emission has a coherent nature, with the
consequence that the emitted power is not proportional to the energy of the
primary cosmic rays, but to the energy squared. The effect was predicted in
1962 by Askaryan and it is nowadays experimentally well established and
exploited for the detection of ultra high energy cosmic rays.
In this paper we discuss in detail the conditions for coherence, which in the
literature have too often been taken for granted, and calculate them
analytically, finding a formulation which encompasses both the coherent and the
incoherent emissions. We apply the result to the Cherenkov effect, obtaining
the same conclusions derived by Askaryan, and to the geosynchrotron radiation.
| 0 | 1 | 0 | 0 | 0 | 0 |
La falacia del empate técnico electoral | It is argued that the concept of "technical tie" in electoral polls and quick
counts has no probabilistic basis, and that instead the uncertainty associated
with these statistical exercises should be expressed in terms of a probability
of victory of the leading candidate.
| 0 | 0 | 0 | 1 | 0 | 0 |
Unexpected 3+ valence of iron in FeO$_2$, a geologically important material lying "in between" oxides and peroxides | Recent discovery of pyrite FeO$_2$, which can be an important ingredient of
the Earth's lower mantle and which in particular may serve as an extra source
of water in the Earth's interior, opens new perspectives for geophysics and
geochemistry, but this is also an extremely interesting material from a
physical point of view. We found that, in contrast to naive expectations, Fe is
nearly 3+ in this material, which strongly affects its magnetic properties and
makes it qualitatively different from its well-known sulfide analogue,
FeS$_2$. Doping, which is most likely to occur in the Earth's mantle, makes
FeO$_2$ much more magnetic. In addition we show that its unique electronic
structure places FeO$_2$ "in between" the usual dioxides and peroxides, making
this system interesting both for physics and for solid state chemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Smart Proof Search for Isabelle | Despite the recent progress in automatic theorem provers, proof engineers are
still suffering from the lack of powerful proof automation. In this position
paper we first report our proof strategy language based on a meta-tool
approach. Then, we propose an AI-based approach to drastically improve proof
automation for Isabelle, while identifying three major challenges we plan to
address for this objective.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Nearly Instance Optimal Algorithm for Top-k Ranking under the Multinomial Logit Model | We study the active learning problem of top-$k$ ranking from multi-wise
comparisons under the popular multinomial logit model. Our goal is to identify
the top-$k$ items with high probability by adaptively querying sets for
comparisons and observing the noisy output of the most preferred item from each
comparison. To achieve this goal, we design a new active ranking algorithm
without using any information about the underlying items' preference scores. We
also establish a matching lower bound on the sample complexity even when the
set of preference scores is given to the algorithm. These two results together
show that the proposed algorithm is nearly instance optimal (similar to
instance optimal [FLN03], but up to polylog factors). Our work extends the
existing literature on rank aggregation in three directions. First, instead of
studying a static problem with fixed data, we investigate the top-$k$ ranking
problem in an active learning setting. Second, we show our algorithm is nearly
instance optimal, which is a much stronger theoretical guarantee. Finally, we
extend the pairwise comparison to the multi-wise comparison, which has not been
fully explored in ranking literature.
| 1 | 0 | 0 | 1 | 0 | 0 |
Inverse problem on conservation laws | The first concise formulation of the inverse problem on conservation laws is
presented. In this problem one aims to derive the general form of systems of
differential equations that admit a prescribed set of conservation laws. The
particular cases of the inverse problem on first integrals of ordinary
differential equations and on conservation laws for evolution equations are
considered. We also solve the inverse problem on conservation laws for
differential equations admitting an infinite dimensional space of zero-order
characteristics. This particular case is further studied in the context of
conservative parameterization schemes for the two-dimensional incompressible
Euler equations. We exhaustively classify conservative parameterization schemes
for the eddy-vorticity flux that lead to a class of closed, averaged Euler
equations possessing generalized circulation, generalized momentum and energy
conservation.
| 0 | 1 | 1 | 0 | 0 | 0 |
Slow to fast infinitely extended reservoirs for the symmetric exclusion process with long jumps | We consider an exclusion process with long jumps in the box $\Lambda_N=\{1,
\ldots,N-1\}$, for $N \ge 2$, in contact with infinitely extended reservoirs on
its left and on its right. The jump rate is described by a transition
probability $p(\cdot)$ which is symmetric, with infinite support but with
finite variance. The reservoirs add or remove particles with rate proportional
to $\kappa N^{-\theta}$, where $\kappa>0$ and $\theta \in\mathbb R$. If
$\theta>0$ (resp. $\theta<0$) the reservoirs add and quickly remove (resp.
slowly remove) particles in the bulk. According to the value of $\theta$ we
prove that the time evolution of the spatial density of particles is described
by some reaction-diffusion equations with various boundary conditions.
| 0 | 1 | 1 | 0 | 0 | 0 |
Multi-Dimensional Conservation Laws and Integrable Systems | In this paper we introduce a new property of two-dimensional integrable
systems -- existence of infinitely many local three-dimensional conservation
laws for pairs of integrable two-dimensional commuting flows. Infinitely many
three-dimensional local conservation laws for the Korteweg de Vries pair of
commuting flows and for the Benney commuting hydrodynamic chains are
constructed. As a by-product, we establish a new method for the computation of
local conservation laws for three-dimensional integrable systems. The Mikhalev
equation and the dispersionless limit of the Kadomtsev--Petviashvili equation
are investigated. All known local and infinitely many new quasi-local
three-dimensional conservation laws are presented. Four-dimensional
conservation laws are also considered for pairs of three-dimensional integrable
quasilinear systems and for triples of corresponding hydrodynamic chains.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fibers in the NGC1333 proto-cluster | Are the initial conditions for clustered star formation the same as for
non-clustered star formation? To investigate the initial gas properties in
young proto-clusters we carried out a comprehensive and high-sensitivity study
of the internal structure, density, temperature, and kinematics of the dense
gas content of the NGC1333 region in Perseus, one of the nearest and best
studied embedded clusters. The analysis of the gas velocities in the
Position-Position-Velocity space reveals an intricate underlying gas
organization both in space and velocity. We identified a total of 14
velocity-coherent, (trans-)sonic structures within NGC1333, with physical and
kinematic properties similar to those of the quiescent, star-forming (aka
fertile) fibers previously identified in low-mass star-forming clouds. These
fibers are arranged in a complex spatial network, build up the observed total
column density, and contain the dense cores and protostars in this cloud. Our
results demonstrate that the presence of fibers is not restricted to low-mass
clouds but can be extended to regions of increasing mass and complexity. We
propose that the observational dichotomy between clustered and non-clustered
star-forming regions might be naturally explained by the distinct spatial
density of fertile fibers in these environments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exploiting Friction in Torque Controlled Humanoid Robots | A common architecture for torque controlled humanoid robots consists of two
nested loops. The outer loop generates desired joint/motor torques, and the
inner loop stabilises these desired values. In doing so, the inner loop usually
compensates for joint friction phenomena, thus removing their inherent
stabilising property that may be also beneficial for high level control
objectives. This paper shows how to exploit friction for joint and task space
control of humanoid robots. Experiments are carried out using the humanoid
robot iCub.
| 1 | 0 | 0 | 0 | 0 | 0 |
What caused what? A quantitative account of actual causation using dynamical causal networks | Actual causation is concerned with the question "what caused what?" Consider
a transition between two states within a system of interacting elements, such
as an artificial neural network, or a biological brain circuit. Which
combination of synapses caused the neuron to fire? Which image features caused
the classifier to misinterpret the picture? Even detailed knowledge of the
system's causal network, its elements, their states, connectivity, and dynamics
does not automatically provide a straightforward answer to the "what caused
what?" question. Counterfactual accounts of actual causation based on graphical
models, paired with system interventions, have demonstrated initial success in
addressing specific problem cases in line with intuitive causal judgments.
Here, we start from a set of basic requirements for causation (realization,
composition, information, integration, and exclusion) and develop a rigorous,
quantitative account of actual causation that is generally applicable to
discrete dynamical systems. We present a formal framework to evaluate these
causal requirements that is based on system interventions and partitions, and
considers all counterfactuals of a state transition. This framework is used to
provide a complete causal account of the transition by identifying and
quantifying the strength of all actual causes and effects linking the two
consecutive system states. Finally, we examine several exemplary cases and
paradoxes of causation and show that they can be illuminated by the proposed
framework for quantifying actual causation.
| 1 | 0 | 1 | 1 | 0 | 0 |
Gaussian One-Armed Bandit and Optimization of Batch Data Processing | We consider the minimax setup for Gaussian one-armed bandit problem, i.e. the
two-armed bandit problem with Gaussian distributions of incomes and known
distribution corresponding to the first arm. This setup naturally arises when
the optimization of batch data processing is considered and there are two
alternative processing methods available with a priori known efficiency of the
first method. One should estimate the efficiency of the second method and
provide predominant usage of the more efficient of the two. According to the
main theorem of the theory of games, the minimax strategy and minimax risk are
sought as Bayesian ones corresponding to the worst-case prior
distribution. As a result, we obtain the recursive integro-difference equation
and the second order partial differential equation in the limiting case as the
number of batches goes to infinity. This makes it possible to determine minimax
risk and minimax strategy by numerical methods. We show that if the number of
batches is large enough, batch data processing has almost no influence on the
control performance, i.e. the value of the minimax risk. Moreover, in the case of
Bernoulli incomes and a large number of batches, batch data processing provides
almost the same minimax risk as the optimal one-by-one data processing.
| 0 | 0 | 1 | 1 | 0 | 0 |
Machine Learning with World Knowledge: The Position and Survey | Machine learning has become pervasive in multiple domains, impacting a wide
variety of applications, such as knowledge discovery and data mining, natural
language processing, information retrieval, computer vision, social and health
informatics, ubiquitous computing, etc. Two essential problems of machine
learning are how to generate features and how to acquire labels for machines to
learn. In particular, labeling large amounts of data for each domain-specific
problem can be very time-consuming and costly. This has become a key obstacle to
making learning protocols realistic in applications. In this paper, we will
discuss how to use the existing general-purpose world knowledge to enhance
machine learning processes, by enriching the features or reducing the labeling
work. We start from the comparison of world knowledge with domain-specific
knowledge, and then introduce three key problems in using world knowledge in
learning processes, i.e., explicit and implicit feature representation,
inference for knowledge linking and disambiguation, and learning with direct or
indirect supervision. Finally we discuss the future directions of this research
topic.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Case for Learned Index Structures | Indexes are models: a B-Tree-Index can be seen as a model to map a key to the
position of a record within a sorted array, a Hash-Index as a model to map a
key to a position of a record within an unsorted array, and a BitMap-Index as a
model to indicate if a data record exists or not. In this exploratory research
paper, we start from this premise and posit that all existing index structures
can be replaced with other types of models, including deep-learning models,
which we term learned indexes. The key idea is that a model can learn the sort
order or structure of lookup keys and use this signal to effectively predict
the position or existence of records. We theoretically analyze under which
conditions learned indexes outperform traditional index structures and describe
the main challenges in designing learned index structures. Our initial results
show that by using neural nets we are able to outperform cache-optimized
B-Trees by up to 70% in speed while saving an order of magnitude in memory over
several real-world data sets. More importantly though, we believe that the idea
of replacing core components of a data management system through learned models
has far-reaching implications for future system designs and that this work
just provides a glimpse of what might be possible.
| 1 | 0 | 0 | 0 | 0 | 0 |
Radiation reaction for spinning bodies in effective field theory II: Spin-spin effects | We compute the leading Post-Newtonian (PN) contributions at quadratic order
in the spins to the radiation-reaction acceleration and spin evolution for
binary systems, entering at four-and-a-half PN order. Our calculation includes
the back-reaction from finite-size spin effects, which is presented for the
first time. The computation is carried out, from first principles, using the
effective field theory framework for spinning extended objects. At this order,
nonconservative effects in the spin-spin sector are independent of the spin
supplementary conditions. A non-trivial consistency check is performed by
showing that the energy loss induced by the resulting radiation-reaction force
is equivalent to the total emitted power in the far zone. We find that, in
contrast to the spin-orbit contributions (reported in a companion paper), the
radiation reaction affects the evolution of the spin vectors once spin-spin
effects are incorporated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Incremental control and guidance of hybrid aircraft applied to the Cyclone tailsitter UAV | Hybrid unmanned aircraft, that combine hover capability with a wing for fast
and efficient forward flight, have attracted a lot of attention in recent
years. Many different designs have been proposed, but one of the most promising is
the tailsitter concept. However, tailsitters are difficult to control across
the entire flight envelope, which often includes stalled flight. Additionally,
their wing surface makes them susceptible to wind gusts. In this paper, we
propose incremental nonlinear dynamic inversion control for attitude and
position control. The result is a single, continuous controller that is able
to track the acceleration of the vehicle across the flight envelope. The
proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor
experiments are performed, showing that unmodeled forces and moments are
effectively compensated by the incremental control structure, and that
accelerations can be tracked across the flight envelope. Finally, we provide a
comprehensive procedure for the implementation of the controller on other types
of hybrid UAVs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Micromechanics based framework with second-order damage tensors | The harmonic product of tensors---leading to the concept of harmonic
factorization---has been defined in a previous work (Olive et al, 2017). In the
practical case of 3D crack density measurements on thin or thick walled
structures, this mathematical tool allows us to factorize the harmonic
(irreducible) part of the fourth-order damage tensor as a harmonic square: an
exact harmonic square in 2D, and a harmonic square over the set of so-called
mechanically accessible directions for measurements in the 3D case. The
corresponding micro-mechanics framework based on second---instead of
fourth---order damage tensors is derived. An illustrating example is provided
showing how the proposed framework allows for the modeling of the so-called
hydrostatic sensitivity up to high damage levels.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards formal models and languages for verifiable Multi-Robot Systems | Incorrect operations of a Multi-Robot System (MRS) may not only lead to
unsatisfactory results, but can also cause economic losses and threats to
safety. These threats may not always be apparent, since they may arise as
unforeseen consequences of the interactions between elements of the system.
This calls for tools and techniques that can help in providing guarantees about
MRSs behaviour. We think that, whenever possible, these guarantees should be
backed up by formal proofs to complement traditional approaches based on
testing and simulation.
We believe that tailored linguistic support to specify MRSs is a major step
towards this goal. In particular, reducing the gap between typical features of
an MRS and the level of abstraction of the linguistic primitives would simplify
both the specification of these systems and the verification of their
properties. In this work, we review different agent-oriented languages and
their features; we then consider a selection of case studies of interest and
implement them using the surveyed languages. We also evaluate and compare the
effectiveness of the proposed solutions, considering, in particular, the ease of
expressing non-trivial behaviour.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Complexity of Simple and Optimal Deterministic Mechanisms for an Additive Buyer | We show that the Revenue-Optimal Deterministic Mechanism Design problem for a
single additive buyer is #P-hard, even when the distributions have support size
2 for each item and, more importantly, even when the optimal solution is
guaranteed to be of a very simple kind: the seller picks a price for each
individual item and a price for the grand bundle of all the items; the buyer
can purchase either the grand bundle at its given price or any subset of items
at their total individual prices. The following problems are also #P-hard, as
immediate corollaries of the proof:
1. determining if individual item pricing is optimal for a given instance,
2. determining if grand bundle pricing is optimal, and
3. computing the optimal (deterministic) revenue.
On the positive side, we show that when the distributions are i.i.d. with
support size 2, the optimal revenue obtainable by any mechanism, even a
randomized one, can be achieved by a simple solution of the above kind
(individual item pricing with a discounted price for the grand bundle) and
furthermore, it can be computed in polynomial time. The problem can be solved
in polynomial time too when the number of items is constant.
| 1 | 0 | 0 | 0 | 0 | 0 |
Further constraints on variations in the IMF from LMXB populations | We present constraints on variations in the initial mass function (IMF) of
nine local early-type galaxies based on their low mass X-ray binary (LMXB)
populations. Comprised of accreting black holes and neutron stars, these LMXBs
can be used to constrain the important high mass end of the IMF. We consider
the LMXB populations beyond the cores of the galaxies ($>0.2R_{e}$; covering
$75-90\%$ of their stellar light) and find no evidence for systematic
variations of the IMF with velocity dispersion ($\sigma$). We reject IMFs which
become increasingly bottom heavy with $\sigma$, up to steep power-laws
(exponent, $\alpha>2.8$) in massive galaxies ($\sigma>300$km/s), for
galactocentric radii $>1/4\ R_{e}$. Previously proposed IMFs that become
increasingly bottom heavy with $\sigma$ are consistent with these data if only
the number of low mass stars $(<0.5M_{\odot}$) varies. We note that our results
are consistent with some recent work which proposes that extreme IMFs are only
present in the central regions of these galaxies. We also consider IMFs that
become increasingly top-heavy with $\sigma$, resulting in significantly more
LMXBs. Such a model is consistent with these observations, but additional data
are required to significantly distinguish between this and an invariant IMF.
For six of these galaxies, we directly compare with published IMF mismatch
parameters from the Atlas3D survey, $\alpha_{dyn}$. We find good agreement with
the LMXB population if galaxies with higher $\alpha_{dyn}$ have more top-heavy
IMFs -- although we caution that our sample is quite small. Future LMXB
observations can provide further insights into the origin of $\alpha_{dyn}$
variations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Black holes in vector-tensor theories | We study static and spherically symmetric black hole (BH) solutions in
second-order generalized Proca theories with nonminimal vector field derivative
couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann
tensor. We find concrete Lagrangians which give rise to exact BH solutions by
imposing two conditions of the two identical metric components and the constant
norm of the vector field. These exact solutions are described by either
Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions
with a non-trivial longitudinal mode of the vector field. We then numerically
construct BH solutions without imposing these conditions. For cubic and quartic
Lagrangians with power-law couplings which encompass vector Galileons as the
specific cases, we show the existence of BH solutions with the difference
between two non-trivial metric components. The quintic-order power-law
couplings do not give rise to non-trivial BH solutions regular throughout the
horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead
to BH solutions with a secondary hair. For all the solutions, the vector field
is regular at least at the future or past horizon. The deviation from General
Relativity induced by the Proca hair can be potentially tested by future
measurements of gravitational waves in the nonlinear regime of gravity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Proving the existence of loops in robot trajectories | This paper presents a reliable method to verify the existence of loops along
the uncertain trajectory of a robot, based on proprioceptive measurements only,
within a bounded-error context. The loop closure detection is one of the key
points in SLAM methods, especially in homogeneous environments with difficult
scene recognition. The proposed approach is generic and could be coupled with
conventional SLAM algorithms to reliably reduce their computing burden, thus
improving the localization and mapping processes in the most challenging
environments such as unexplored underwater extents.
To prove that a robot performed a loop whatever the uncertainties in its
evolution, we employ the notion of topological degree that originates in the
field of differential topology. We show that a verification tool based on the
topological degree is an optimal method for proving robot loops. This is
demonstrated both on datasets from real missions involving autonomous
underwater vehicles, and by a mathematical discussion.
| 1 | 0 | 0 | 0 | 0 | 0 |
The clock of chemical evolution | Chemical evolution is essential in understanding the origins of life. We
present a theory for the evolution of molecule masses and show that small
molecules grow by random diffusion and large molecules by a preferential
attachment process leading eventually to life's molecules. It reproduces
correctly the distribution of molecules found via mass spectrometry for the
Murchison meteorite and estimates the start of chemical evolution back to 12.8
billion years following the birth of stars and supernovae. From the frontier
mass between the random and preferential attachment dynamics, the birth time of
molecule families can be estimated. Amino acids emerge about 165 million years
after the start of evolution. Using the scaling of reaction rates with the
distance of the molecules in space we recover correctly the few days emergence
time of amino acids in the Miller-Urey experiment. The distribution of
interstellar and extragalactic molecules are both consistent with the
evolutionary mass distribution, and their ages are estimated at 108 and 65
million years after the start of evolution. From the model, we can determine
the number of different molecule compositions at the time of the creation of
Earth to be 1.6 million and the number of molecule compositions in interstellar
space to a mere 719.
| 0 | 0 | 0 | 0 | 1 | 0 |
Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets | A simple, analytically correct algorithm is developed for calculating pencil
beam coordinates using the signals from an ideal cylindrical particle beam
position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal
widths. The algorithm is then applied to simulations of realistic BPMs with
finite width PUEs. Surprisingly small deviations are found. Simple empirically
determined correction terms reduce the deviations even further. The algorithm
is then used to study the impact of beam-size upon the precision of BPMs in the
non-linear region. As an example of the data acquisition speed advantage, a
FPGA-based BPM readout implementation of the new algorithm has been developed
and characterized. Finally, the algorithm is tested with BPM data from the
Cornell Preinjector.
| 0 | 1 | 0 | 0 | 0 | 0 |
Approximation Algorithms for Independence and Domination on B$_1$-VPG and B$_1$-EPG Graphs | A graph $G$ is called B$_k$-VPG (resp., B$_k$-EPG), for some constant $k\geq
0$, if it has a string representation on a grid such that each vertex is an
orthogonal path with at most $k$ bends and two vertices are adjacent in $G$ if
and only if the corresponding strings intersect (resp., the corresponding
strings share at least one grid edge). If two adjacent strings of a B$_k$-VPG
graph intersect exactly once, then the graph is called a one-string B$_k$-VPG
graph.
In this paper, we study the Maximum Independent Set and Minimum Dominating
Set problems on B$_1$-VPG and B$_1$-EPG graphs. We first give a simple $O(\log
n)$-approximation algorithm for the Maximum Independent Set problem on
B$_1$-VPG graphs, improving the previous $O((\log n)^2)$-approximation
algorithm of Lahiri et al. (COCOA 2015). Then, we consider the Minimum
Dominating Set problem. We give an $O(1)$-approximation algorithm for this
problem on one-string B$_1$-VPG graphs, providing the first constant-factor
approximation algorithm for this problem. Moreover, we show that the Minimum
Dominating Set problem is APX-hard on B$_1$-EPG graphs, ruling out the
possibility of a PTAS unless P=NP. Finally, we give constant-factor
approximation algorithms for this problem on two non-trivial subclasses of
B$_1$-EPG graphs. To our knowledge, these are the first results for the Minimum
Dominating Set problem on B$_1$-EPG graphs, partially answering a question
posed by Epstein et al. (WADS 2013).
| 1 | 0 | 0 | 0 | 0 | 0 |
Phonon-mediated repulsion, sharp transitions and (quasi)self-trapping in the extended Peierls-Hubbard model | We study two identical fermions, or two hard-core bosons, in an infinite
chain and coupled to phonons by interactions that modulate their hopping as
described by the Peierls/Su-Schrieffer-Heeger (SSH) model. We show that
exchange of phonons generates effective nearest-neighbor repulsion between
particles and also gives rise to interactions that move the pair as a whole.
The two-polaron phase diagram exhibits two sharp transitions, leading to light
dimers at strong coupling and the flattening of the dimer dispersion at some
critical values of the parameters. This dimer (quasi)self-trapping occurs at
coupling strengths where single polarons are mobile. This illustrates that,
depending on the strength of the phonon-mediated interactions, the coupling to
phonons may completely suppress or strongly enhance quantum transport of
correlated particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Group Synchronization on Grids | Group synchronization requires to estimate unknown elements
$({\theta}_v)_{v\in V}$ of a compact group ${\mathfrak G}$ associated to the
vertices of a graph $G=(V,E)$, using noisy observations of the group
differences associated to the edges. This model is relevant to a variety of
applications ranging from structure from motion in computer vision to graph
localization and positioning, to certain families of community detection
problems.
We focus on the case in which the graph $G$ is the $d$-dimensional grid.
Since the unknowns ${\boldsymbol \theta}_v$ are only determined up to a global
action of the group, we consider the following weak recovery question. Can we
determine the group difference ${\theta}_u^{-1}{\theta}_v$ between far apart
vertices $u, v$ better than by random guessing? We prove that weak recovery is
possible (provided the noise is small enough) for $d\ge 3$ and, for certain
finite groups, for $d\ge 2$. Vice versa, for some continuous groups, we prove
that weak recovery is impossible for $d=2$. Finally, for strong enough noise,
weak recovery is always impossible.
| 0 | 0 | 1 | 1 | 0 | 0 |
An energy-based analysis of reduced-order models of (networked) synchronous machines | Stability of power networks is an increasingly important topic because of the
high penetration of renewable distributed generation units. This requires the
development of advanced (typically model-based) techniques for the analysis and
controller design of power networks. Although there are widely accepted
reduced-order models to describe the dynamic behavior of power networks, they
are commonly presented without details about the reduction procedure, hampering
the understanding of the physical phenomena behind them. The present paper aims
to provide a modular model derivation of multi-machine power networks. Starting
from first-principle fundamental physics, we present detailed dynamical models
of synchronous machines and clearly state the underlying assumptions which lead
to some of the standard reduced-order multi-machine models, including the
classical second-order swing equations. In addition, the energy functions for
the reduced-order multi-machine models are derived, which allows us to represent
the multi-machine systems as port-Hamiltonian systems. Moreover, the systems
are proven to be passive with respect to their steady states, which permits a
power-preserving interconnection with other passive components, including
passive controllers. As a result, the corresponding energy function or
Hamiltonian can be used to provide a rigorous stability analysis of advanced
models for the power network without having to linearize the system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Binary Evolution and the Progenitor of SN 1987A | Since the majority of massive stars are members of binary systems, an
understanding of the intricacies of binary interactions is essential for
understanding the large variety of supernova types and sub-types. I therefore
briefly review the basic elements of binary evolution theory and discuss how
binary interactions affect the presupernova structure of massive stars and the
resulting supernovae.
SN 1987A was a highly anomalous supernova, almost certainly because of a
previous binary interaction. The most likely scenario at present is that the
progenitor was a member of a massive close binary that experienced dynamical
mass transfer during its second red-supergiant phase and merged completely with
its companion as a consequence. This can naturally explain the three main
anomalies of SN 1987A: the blue color of the progenitor, the chemical anomalies
and the complex triple-ring nebula.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Generative Model for Dynamic Networks with Applications | Networks observed in real world like social networks, collaboration networks
etc., exhibit temporal dynamics, i.e. nodes and edges appear and/or disappear
over time. In this paper, we propose a generative, latent space based,
statistical model for such networks (called dynamic networks). We consider the
case where the number of nodes is fixed, but the presence of edges can vary
over time. Our model allows the number of communities in the network to be
different at different time steps. We use a neural network based methodology to
perform approximate inference in the proposed model and its simplified version.
Experiments done on synthetic and real world networks for the task of community
detection and link prediction demonstrate the utility and effectiveness of our
model as compared to other similar existing approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning | We investigate contextual online learning with nonparametric (Lipschitz)
comparison classes under different assumptions on losses and feedback
information. For full information feedback and Lipschitz losses, we design the
first explicit algorithm achieving the minimax regret rate (up to log factors).
In a partial feedback model motivated by second-price auctions, we obtain
algorithms for Lipschitz and semi-Lipschitz losses with regret bounds improving
on the known bounds for standard bandit feedback. Our analysis combines novel
results for contextual second-price auctions with a novel algorithmic approach
based on chaining. When the context space is Euclidean, our chaining approach
is efficient and delivers an even better regret bound.
| 0 | 0 | 1 | 1 | 0 | 0 |
A hybrid primal heuristic for Robust Multiperiod Network Design | We investigate the Robust Multiperiod Network Design Problem, a
generalization of the classical Capacitated Network Design Problem that
additionally considers multiple design periods and provides solutions protected
against traffic uncertainty. Given the intrinsic difficulty of the problem,
which proves challenging even for state-of-the art commercial solvers, we
propose a hybrid primal heuristic based on the combination of ant colony
optimization and an exact large neighborhood search. Computational experiments
on a set of realistic instances from the SNDlib show that our heuristic can
find solutions of extremely good quality with low optimality gap.
| 1 | 0 | 1 | 0 | 0 | 0 |
Real-world Multi-object, Multi-grasp Detection | A deep learning architecture is proposed to predict graspable locations for
robotic manipulation. It considers situations where no, one, or multiple
object(s) are seen. By defining the learning problem to be classification with
null hypothesis competition instead of regression, the deep neural network with
RGB-D image input predicts multiple grasp candidates for a single object or
multiple objects, in a single shot. The method outperforms state-of-the-art
approaches on the Cornell dataset with 96.0% and 96.1% accuracy on image-wise
and object-wise splits, respectively. Evaluation on a multi-object dataset
illustrates the generalization capability of the architecture. Grasping
experiments achieve 96.0% grasp localization and 88.0% grasping success rates
on a test set of household objects. The real-time process takes less than 0.25 s
from image to plan.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fault Tolerance of Random Graphs with respect to Connectivity: Phase Transition in Logarithmic Average Degree | The fault tolerance of random graphs with unbounded degrees with respect to
connectivity is investigated. It is related to the reliability of wireless
sensor networks with unreliable relay nodes. The model evaluates the network
breakdown probability that a graph is disconnected after stochastic node
removal. To establish a mean-field approximation for the model, the cavity
method for finite systems is proposed. Then an asymptotic analysis is applied.
As a result, the former enables us to obtain an approximation formula for any
number of nodes and an arbitrary degree distribution. In addition, the
latter reveals that the phase transition occurs on random graphs with
logarithmic average degrees. Those results, which are supported by numerical
simulations, coincide with the mathematical results, indicating successful
predictions by the mean-field approximation for unbounded but not dense random
graphs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Belief Propagation, Bethe Approximation and Polynomials | Factor graphs are important models for succinctly representing probability
distributions in machine learning, coding theory, and statistical physics.
Several computational problems, such as computing marginals and partition
functions, arise naturally when working with factor graphs. Belief propagation
is a widely deployed iterative method for solving these problems. However,
despite its significant empirical success, not much is known about the
correctness and efficiency of belief propagation.
Bethe approximation is an optimization-based framework for approximating
partition functions. While it is known that the stationary points of the Bethe
approximation coincide with the fixed points of belief propagation, in general,
the relation between the Bethe approximation and the partition function is not
well understood. It has been observed that for a few classes of factor graphs,
the Bethe approximation always gives a lower bound to the partition function,
which distinguishes them from the general case, where neither a lower bound,
nor an upper bound holds universally. This has been rigorously proved for
permanents and for attractive graphical models.
Here we consider bipartite normal factor graphs and show that if the local
constraints satisfy a certain analytic property, the Bethe approximation is a
lower bound to the partition function. We arrive at this result by viewing
factor graphs through the lens of polynomials. In this process, we reformulate
the Bethe approximation as a polynomial optimization problem. Our sufficient
condition for the lower bound property to hold is inspired by recent
developments in the theory of real stable polynomials. We believe that this way
of viewing factor graphs and its connection to real stability might lead to a
better understanding of belief propagation and factor graphs in general.
| 1 | 0 | 0 | 1 | 0 | 0 |
Combining the Ensemble and Franck-Condon Approaches for Spectral Shapes of Molecules in Solution | The correct treatment of vibronic effects is vital for the modeling of
absorption spectra of solvated dyes, as many prominent spectral features can
often be ascribed to vibronic transitions. Vibronic spectra can be computed
within the Franck-Condon approximation for small dyes in solution using an
implicit solvent model. However, implicit solvent models neglect specific
solute-solvent interactions and provide only an approximate treatment of
solvent polarization effects. Furthermore, temperature-dependent solvent
broadening effects are often only accounted for using a broadening parameter
chosen to match experimental spectra. On the other hand, ensemble approaches
provide a straightforward way of accounting for solute-solvent interactions and
temperature-dependent broadening effects by computing vertical excitation
energies obtained from an ensemble of solute-solvent conformations. However,
ensemble approaches do not explicitly account for vibronic effects and often
produce spectral shapes in poor agreement with experiment. We address these
shortcomings by combining the vibronic fine structure of an excitation obtained
in the Franck-Condon picture at zero temperature with vertical excitations
computed for a room-temperature ensemble of solute-solvent configurations. In
this combined approach, all temperature-dependent broadening is therefore
treated classically through the sampling of configurations, with vibronic
contributions included as a zero-temperature correction to each vertical
transition. We test the proposed method on Nile Red and the green fluorescent
protein chromophore in polar and non-polar solvents. For systems with strong
solute-solvent interaction, the approach yields a significant improvement over
the ensemble approach, whereas for systems with weaker interactions, both the
shape and the width of the spectra are in excellent agreement with experiment.
| 0 | 1 | 0 | 0 | 0 | 0 |
Personalized Dialogue Generation with Diversified Traits | Endowing a dialogue system with particular personality traits is essential to
deliver more human-like conversations. However, due to the challenge of
embodying personality via language expression and the lack of large-scale
persona-labeled dialogue data, this research problem is still far from
well-studied. In this paper, we investigate the problem of incorporating
explicit personality traits in dialogue generation to deliver personalized
dialogues.
To this end, firstly, we construct PersonalDialog, a large-scale multi-turn
dialogue dataset containing various traits from a large number of speakers. The
dataset consists of 20.83M sessions and 56.25M utterances from 8.47M speakers.
Each utterance is associated with a speaker who is marked with traits like Age,
Gender, Location, Interest Tags, etc. Several anonymization schemes are
designed to protect the privacy of each speaker. This large-scale dataset will
facilitate not only the study of personalized dialogue generation, but also
other research in sociolinguistics and social science.
Secondly, to study how personality traits can be captured and addressed in
dialogue generation, we propose persona-aware dialogue generation models within
the sequence to sequence learning framework. Explicit personality traits
(structured by key-value pairs) are embedded using a trait fusion module.
During the decoding process, two techniques, namely persona-aware attention and
persona-aware bias, are devised to capture and address trait-related
information. Experiments demonstrate that our model is able to address proper
traits in different contexts. Case studies also show interesting results for
this challenging research problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hyperelliptic Jacobians and isogenies | Motivated by results of Mestre and Voisin, in this note we mainly consider
abelian varieties isogenous to hyperelliptic Jacobians.
In the first part we prove that a very general hyperelliptic Jacobian of
genus $g\ge 4$ is not isogenous to a non-hyperelliptic Jacobian. As a
consequence we obtain that the Intermediate Jacobian of a very general cubic
threefold is not isogenous to a Jacobian. Another corollary tells that the
Jacobian of a very general $d$-gonal curve of genus $g \ge 4$ is not isogenous
to a different Jacobian.
In the second part we consider a closed subvariety $\mathcal Y \subset
\mathcal A_g$ of the moduli space of principally polarized abelian varieties of
dimension $g\ge 3$. We show that if a very general element of $\mathcal Y$ is
dominated by a hyperelliptic Jacobian, then $\dim \mathcal Y\ge 2g$. In
particular, if the general element in $\mathcal Y$ is simple, its Kummer
variety does not contain rational curves. Finally we show that a closed
subvariety $\mathcal Y\subset \mathcal M_g$ of dimension $2g-1$ such that the
Jacobian of a very general element of $\mathcal Y$ is dominated by a
hyperelliptic Jacobian is contained either in the hyperelliptic or in the
trigonal locus.
| 0 | 0 | 1 | 0 | 0 | 0 |
Signal and Noise Statistics Oblivious Sparse Reconstruction using OMP/OLS | Orthogonal matching pursuit (OMP) and orthogonal least squares (OLS) are
widely used for sparse signal reconstruction in under-determined linear
regression problems. The performance of these compressed sensing (CS)
algorithms depends crucially on the \textit{a priori} knowledge of either the
sparsity of the signal ($k_0$) or noise variance ($\sigma^2$). Both $k_0$ and
$\sigma^2$ are unknown in general and extremely difficult to estimate in
under-determined models. This limits the application of OMP and OLS in many practical
situations. In this article, we develop two computationally efficient
frameworks namely TF-IGP and RRT-IGP for using OMP and OLS even when $k_0$ and
$\sigma^2$ are unavailable. Both TF-IGP and RRT-IGP are analytically shown to
accomplish successful sparse recovery under the same set of restricted isometry
conditions on the design matrix required for OMP/OLS with \textit{a priori}
knowledge of $k_0$ and $\sigma^2$. Numerical simulations also indicate a highly
competitive performance of TF-IGP and RRT-IGP in comparison to OMP/OLS with
\textit{a priori} knowledge of $k_0$ and $\sigma^2$.
| 0 | 0 | 0 | 1 | 0 | 0 |
Experimenting with the p4est library for AMR simulations of two-phase flows | Many physical problems involve spatial and temporal inhomogeneities that
require a very fine discretization in order to be accurately simulated. Using
an adaptive mesh, a high level of resolution is used in the appropriate areas
while keeping a coarse mesh elsewhere. This idea allows saving time and
computations, but represents a challenge for distributed-memory environments.
The MARS project (for Multiphase Adaptative Refinement Solver) intends to
assess the parallel library p4est for adaptive meshing, in the case of a finite
volume scheme applied to two-phase flows. Besides testing the library's
performance, particularly for load balancing, we also demonstrate its ease of
use and implementation. First promising 3D simulations are also presented.
| 1 | 1 | 0 | 0 | 0 | 0 |
Lifting CDCL to Template-based Abstract Domains for Program Verification | The success of Conflict Driven Clause Learning (CDCL) for Boolean
satisfiability has inspired adoption in other domains. We present a novel
lifting of CDCL to program analysis called Abstract Conflict Driven Learning
for Programs (ACDLP). ACDLP alternates between model search, which performs
over-approximate deduction with constraint propagation, and conflict analysis,
which performs under-approximate abduction with heuristic choice. We
instantiate the model search and conflict analysis algorithms to an abstract
domain of template polyhedra, strictly generalizing CDCL from the Boolean
lattice to a richer lattice structure. Our template polyhedra can express
intervals, octagons and restricted polyhedral constraints over program
variables. We have implemented ACDLP for automatic bounded safety
verification of C programs. We evaluate the performance of our analyser by
comparing with CBMC, which uses CDCL, and Astree, a commercial abstract
interpretation tool. We observe two orders of magnitude reduction in the number
of decisions, propagations, and conflicts as well as a 1.5x speedup in runtime
compared to CBMC. Compared to Astree, ACDLP solves twice as many benchmarks and
has much higher precision. This is the first instantiation of CDCL with a
template polyhedra abstract domain.
| 1 | 0 | 0 | 0 | 0 | 0 |
Binary Voting with Delegable Proxy: An Analysis of Liquid Democracy | The paper provides an analysis of the voting method known as delegable proxy
voting, or liquid democracy. The analysis first positions liquid democracy
within the theory of binary aggregation. It then focuses on two issues of the
system: the occurrence of delegation cycles; and the effect of delegations on
individual rationality when voting on logically interdependent propositions. It
finally points to proposals on how the system may be modified in order to
address the above issues.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exact description of coalescing eigenstates in open quantum systems in terms of microscopic Hamiltonian dynamics | At the exceptional point where two eigenstates coalesce in open quantum
systems, the usual diagonalization scheme breaks down and the Hamiltonian can
only be reduced to Jordan block form. Most of the studies on the exceptional
point appearing in the literature introduce a phenomenological effective
Hamiltonian that essentially reduces the problem to that of a finite
non-Hermitian matrix for which it is straightforward to obtain the Jordan form.
In this paper, we demonstrate how the Hamiltonian of an open quantum system
reduces to Jordan block form at an exceptional point in an exact manner that
treats the continuum without any approximation. Our method relies on the
Brillouin-Wigner-Feshbach projection method according to which we can obtain a
finite dimensional effective Hamiltonian that shares the discrete sector of the
spectrum with the original Hamiltonian. While owing to its eigenvalue
dependence this effective Hamiltonian cannot be used to write the Jordan block
directly, we show that by formally extending the problem to include eigenstates
with complex eigenvalues that reside outside the usual Hilbert space, we can
obtain the Jordan block form at the exceptional point without introducing any
approximation. We also introduce an extended Jordan form basis away from the
exceptional point, which provides an alternative way to obtain the Jordan block
at an exceptional point. The extended Jordan block connects continuously to the
Jordan block exactly at the exceptional point, implying that the observable
quantities are continuous at the exceptional point.
| 0 | 1 | 0 | 0 | 0 | 0 |
The maximum number of cycles in a graph with fixed number of edges | The main topic considered is maximizing the number of cycles in a graph with
given number of edges. In 2009, Király conjectured that there is a constant $c$
such that any graph with $m$ edges has at most $c\cdot(1.4)^m$ cycles. In this paper,
it is shown that for sufficiently large $m$, a graph with $m$ edges has at most
$(1.443)^m$ cycles. For sufficiently large $m$, examples of a graph with $m$
edges and $(1.37)^m$ cycles are presented. For a graph with given number of
vertices and edges an upper bound on the maximal number of cycles is given.
Also, exponentially tight bounds are proved for the maximum number of cycles in
a multigraph with given number of edges, as well as in a multigraph with given
number of vertices and edges.
| 0 | 0 | 1 | 0 | 0 | 0 |
Structural Nonrealism and Quantum Information | The article introduces a new concept of structure, defined, echoing J. A.
Wheeler's concept of "law without law," as a "structure without law," and a new
philosophical viewpoint, that of structural nonrealism, and considers how this
concept and this viewpoint work in quantum theory in general and quantum
information theory in particular. It takes as its historical point of departure
W. Heisenberg's discovery of quantum mechanics, which, the article argues,
could, in retrospect, be considered in quantum-informational terms, while,
conversely, quantum information theory could be seen in Heisenbergian terms.
The article takes advantage of the circumstance that any instance of quantum
information is a "structure"--an organization of elements, ultimately bits, of
classical information, manifested in measuring instruments. While, however,
this organization can, along with the observed behavior of measuring
instruments, be described by means of classical physics, it cannot be predicted
by means of classical physics, but only, probabilistically or statistically, by
means of quantum mechanics, or in high-energy physics, by means of quantum
field theory (or possibly some alternative theories within each scope). By
contrast, the emergences of this information and of this structure cannot, in
the present view, be described by either classical or quantum theory, or
possibly by any other means, which leads to the concept of "structure without
law" and the viewpoint of structural nnnrealism. The article also considers,
from this perspective, some recent work in quantum information theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Robotics CTF (RCTF), a playground for robot hacking | Robots state of insecurity is onstage. There is an emerging concern about
major robot vulnerabilities and their adverse consequences. However, there is
still a considerable gap between robotics and cybersecurity domains. For the
purpose of filling that gap, the present technical report presents the Robotics
CTF (RCTF), an online playground to challenge robot security from any browser.
We describe the architecture of the RCTF and provide 9 scenarios where hackers
can challenge the security of different robotic setups. Our work empowers
security researchers to a) reproduce virtual robotic scenarios locally and b)
change the networking setup to mimic real robot targets. We advocate for
hacker-powered security in robotics and contribute by open-sourcing our scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Effects of transmutation elements in tungsten as a plasma facing material | Tungsten (W) is widely considered as the most promising plasma facing
material, which is used in nuclear fusion devices. During the operation of the
nuclear fusion devices, transmutation elements, such as Re, Os, and Ta, are
generated in W due to the transmutation reaction under fusion neutron
irradiation. In this paper, we investigated the effects of the transmutation
elements on the mechanical properties of W and the behavior of hydrogen/helium
(H/He) atoms in W using the first-principles calculation method. The results
show that the generation of the transmutation elements can enhance the ductility
of W without considering dislocations and other defects, a phenomenon
referred to as solution toughening. However, there is no strict linear
relationship between the change of the mechanical properties and the
transmutation element concentration. Compared with the H/He atom in pure W,
the formation energies of H/He in W are decreased by the transmutation
elements, but the transmutation elements do not change the most favorable
sites for H/He in W. An attractive interaction exists between the transmutation
elements and H/He in W, while a repulsive interaction exists between Ta and He
in W. The preferred diffusion path of H/He in W is changed due to the interaction
between the transmutation elements and H/He. All of the above results provide
important information for the application of W as a plasma facing material in
nuclear fusion devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
A playful note on spanning and surplus edges | Consider a (not necessarily near-critical) random graph running in continuous
time. A recent breadth-first-walk construction is extended in order to account
for the surplus edge data in addition to the spanning edge data. Two different
graph representations of the multiplicative coalescent, with different
advantages and drawbacks, are discussed in detail. A canonical multi-graph of
Bhamidi, Budhiraja and Wang (2014) naturally emerges. The presented framework
should facilitate understanding of scaling limits with surplus edges for
near-critical random graphs in the domain of attraction of general (not
necessarily standard) eternal multiplicative coalescent.
| 0 | 0 | 1 | 0 | 0 | 0 |
Logo Synthesis and Manipulation with Clustered Generative Adversarial Networks | Designing a logo for a new brand is a lengthy and tedious back-and-forth
process between a designer and a client. In this paper we explore to what
extent machine learning can solve the creative task of the designer. For this,
we build a dataset -- LLD -- of 600k+ logos crawled from the world wide web.
Training Generative Adversarial Networks (GANs) for logo synthesis on such
multi-modal data is not straightforward and results in mode collapse for some
state-of-the-art methods. We propose the use of synthetic labels obtained
through clustering to disentangle and stabilize GAN training. We are able to
generate a high diversity of plausible logos and we demonstrate latent space
exploration techniques to ease the logo design task in an interactive manner.
Moreover, we validate the proposed clustered GAN training on CIFAR 10,
achieving state-of-the-art Inception scores when using synthetic labels
obtained via clustering the features of an ImageNet classifier. GANs can cope
with multi-modal data by means of synthetic labels achieved through clustering,
and our results show the creative potential of such techniques for logo
synthesis and manipulation. Our dataset and models will be made publicly
available at this https URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency | We study the notion of consistency between a 3D shape and a 2D observation
and propose a differentiable formulation which allows computing gradients of
the 3D shape given an observation from an arbitrary view. We do so by
reformulating view consistency using a differentiable ray consistency (DRC)
term. We show that this formulation can be incorporated in a learning framework
to leverage different types of multi-view observations, e.g. foreground masks,
depth, color images, and semantics, as supervision for learning single-view 3D
prediction. We present empirical analysis of our technique in a controlled
setting. We also show that this approach allows us to improve over existing
techniques for single-view reconstruction of objects from the PASCAL VOC
dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Efficient Rank Minimization via Solving Non-convex Penalties by Iterative Shrinkage-Thresholding Algorithm | Rank minimization (RM) is a widely investigated task of finding solutions by
exploiting low-rank structure of parameter matrices. Recently, solving RM
problem by leveraging non-convex relaxations has received significant
attention. It has been demonstrated by some theoretical and experimental work
that non-convex relaxation, e.g. Truncated Nuclear Norm Regularization (TNNR)
and Reweighted Nuclear Norm Regularization (RNNR), can provide a better
approximation of original problems than convex relaxations. However, designing
an efficient algorithm with theoretical guarantee remains a challenging
problem. In this paper, we propose a simple but efficient proximal-type method,
namely the Iterative Shrinkage-Thresholding Algorithm (ISTA), with concrete analysis
to solve rank minimization problems with both non-convex weighted and
reweighted nuclear norm as low-rank regularizers. Theoretically, the proposed
method could converge to the critical point under very mild assumptions with
the rate in the order of $O(1/T)$. Moreover, the experimental results on both
synthetic data and real-world data sets show that the proposed algorithm
outperforms state-of-the-art methods in both efficiency and accuracy.
| 0 | 0 | 0 | 1 | 0 | 0 |
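The shrinkage-thresholding iteration this abstract refers to can be sketched for the convex nuclear-norm special case; the paper's non-convex truncated/reweighted variants change the per-singular-value threshold inside the proximal step. Matrix sizes, step size, and the regularization weight below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_* (singular value soft-thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def ista_matrix_completion(M_obs, mask, lam=0.2, step=1.0, iters=300):
    """ISTA for min_X 0.5*||mask*(X - M_obs)||_F^2 + lam*||X||_*.

    Convex special case; TNNR/RNNR-style variants would reweight the
    singular values inside svt instead of using a uniform threshold."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)              # gradient of the smooth term
        X = svt(X - step * grad, lam * step)   # shrinkage-thresholding step
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 target
mask = (rng.random(A.shape) < 0.6).astype(float)                 # ~60% observed
X = ista_matrix_completion(mask * A, mask)
err = np.linalg.norm(mask * (X - A)) / np.linalg.norm(mask * A)
print(round(err, 3))  # small relative error on the observed entries
```

With step size at most the reciprocal of the smooth term's Lipschitz constant (here 1), this iteration enjoys the $O(1/T)$ objective-gap rate the abstract cites.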
Almost Buchsbaumness of some rings arising from complexes with isolated singularities | We study properties of the Stanley-Reisner rings of simplicial complexes with
isolated singularities modulo two generic linear forms. Miller, Novik, and
Swartz proved that if a complex has homologically isolated singularities, then
its Stanley-Reisner ring modulo one generic linear form is Buchsbaum. Here we
examine the case of non-homologically isolated singularities, providing many
examples in which the Stanley-Reisner ring modulo two generic linear forms is a
quasi-Buchsbaum but not Buchsbaum ring.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamically controlled plasmonic nano-antenna phased array utilizing vanadium dioxide | We propose and analyze theoretically an approach for realizing a tunable
optical phased-array antenna utilizing the properties of VO2 for electronic
beam steering applications in the near-IR spectral range. The device is based
on a 1D array of slot nano-antennas engraved in a thin Au film grown over VO2
layer. The tuning is obtained by producing a thermal gradient within the
underlying phase-change-material (PCM) layer, which changes the refractive
index of the VO2 and hence modifies the phase response of the elements
comprising the array. Using a 10-element array, we show
that an incident beam can be steered up to with respect to the normal, by
applying a gradient of less than 10°C.
| 0 | 1 | 0 | 0 | 0 | 0 |
Comparing deep neural networks against humans: object recognition when the signal gets weaker | Human visual object recognition is typically rapid and seemingly effortless,
as well as largely independent of viewpoint and object orientation. Until very
recently, animate visual systems were the only ones capable of this remarkable
computational feat. This has changed with the rise of a class of computer
vision algorithms called deep neural networks (DNNs) that achieve human-level
classification performance on object recognition tasks. Furthermore, a growing
number of studies report similarities in the way DNNs and the human visual
system process objects, suggesting that current DNNs may be good models of
human visual object recognition. Yet there clearly exist important
architectural and processing differences between state-of-the-art DNNs and the
primate visual system. The potential behavioural consequences of these
differences are not well understood. We aim to address this issue by comparing
human and DNN generalisation abilities towards image degradations. We find the
human visual system to be more robust to image manipulations like contrast
reduction, additive noise or novel eidolon-distortions. In addition, we find
progressively diverging classification error-patterns between humans and DNNs
when the signal gets weaker, indicating that there may still be marked
differences in the way humans and current DNNs perform visual object
recognition. We envision that our findings as well as our carefully measured
and freely available behavioural datasets provide a new useful benchmark for
the computer vision community to improve the robustness of DNNs and a
motivation for neuroscientists to search for mechanisms in the brain that could
facilitate this robustness.
| 1 | 0 | 0 | 1 | 0 | 0 |
Quantum critical response: from conformal perturbation theory to holography | We discuss dynamical response functions near quantum critical points,
allowing for both a finite temperature and detuning by a relevant operator.
When the quantum critical point is described by a conformal field theory (CFT),
conformal perturbation theory and the operator product expansion can be used to
fix the first few leading terms at high frequencies. Knowledge of the high
frequency response allows us then to derive non-perturbative sum rules. We
show, via explicit computations, how holography recovers the general results of
CFT, and the associated sum rules, for any holographic field theory with a
conformal UV completion -- regardless of any possible new ordering and/or
scaling physics in the IR. We numerically obtain holographic response functions
at all frequencies, allowing us to probe the breakdown of the asymptotic
high-frequency regime. Finally, we show that high frequency response functions
in holographic Lifshitz theories are quite similar to their conformal
counterparts, even though they are not strongly constrained by symmetry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Testing High-dimensional Covariance Matrices under the Elliptical Distribution and Beyond | We study testing high-dimensional covariance matrices under a generalized
elliptical model. The model accommodates several stylized facts of real data
including heteroskedasticity, heavy-tailedness, asymmetry, etc. We consider the
high-dimensional setting where the dimension $p$ and the sample size $n$ grow
to infinity proportionally, and establish a central limit theorem for the
{linear spectral statistic} of the sample covariance matrix based on
self-normalized observations. The central limit theorem is different from the
existing ones for the linear spectral statistic of the usual sample covariance
matrix. Our tests based on the new central limit theorem neither assume a
specific parametric distribution nor involve the kurtosis of data. Simulation
studies show that our tests work well even when the fourth moment does not
exist. Empirically, we analyze the idiosyncratic returns under the Fama-French
three-factor model for S\&P 500 Financials sector stocks, and our tests reject
the hypothesis that the idiosyncratic returns are uncorrelated.
| 0 | 0 | 1 | 1 | 0 | 0 |
Work Analysis with Resource-Aware Session Types | While there exist several successful techniques for supporting programmers in
deriving static resource bounds for sequential code, analyzing the resource
usage of message-passing concurrent processes poses additional challenges. To
meet these challenges, this article presents an analysis for statically
deriving worst-case bounds on the total work performed by message-passing
processes. To decompose interacting processes into components that can be
analyzed in isolation, the analysis is based on novel resource-aware session
types, which describe protocols and resource contracts for inter-process
communication. A key innovation is that both messages and processes carry
potential to share and amortize cost while communicating. To symbolically
express resource usage in a setting without static data structures and
intrinsic sizes, resource contracts describe bounds that are functions of
interactions between processes. Resource-aware session types combine standard
binary session types and type-based amortized resource analysis in a linear
type system. This type system is formulated for a core session-type calculus of
the language SILL and proved sound with respect to a multiset-based operational
cost semantics that tracks the total number of messages that are exchanged in a
system. The effectiveness of the analysis is demonstrated by analyzing standard
examples from amortized analysis and the literature on session types and by a
comparative performance analysis of different concurrent programs implementing
the same interface.
| 1 | 0 | 0 | 0 | 0 | 0 |
Seven dimensional cohomogeneity one manifolds with nonnegative curvature | We show that a certain family of cohomogeneity one manifolds does not admit
an invariant metric of nonnegative sectional curvature, unless it admits one
with positive curvature. As a consequence, the classification of nonnegatively
curved cohomogeneity one manifolds in dimension 7 is reduced to only one
further family of candidates.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces | We investigate regularized algorithms combining with projection for
least-squares regression problem over a Hilbert space, covering nonparametric
regression over a reproducing kernel Hilbert space. We prove convergence
results with respect to variants of norms, under a capacity assumption on the
hypothesis space and a regularity condition on the target function. As a
result, we obtain optimal rates for regularized algorithms with randomized
sketches, provided that the sketch dimension is proportional to the effective
dimension up to a logarithmic factor. As a byproduct, we obtain similar results
for Nyström regularized algorithms. Our results are the first ones with
optimal, distribution-dependent rates that do not have any saturation effect
for sketched/Nyström regularized algorithms, considering both the
attainable and non-attainable cases.
| 0 | 0 | 0 | 1 | 0 | 0 |
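A rough illustration of the Nyström-sketched regularized least-squares setting the abstract studies. All concrete details here (RBF kernel, uniform landmark sampling, the toy 1-D regression problem) are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krr(X, y, m=60, lam=1e-3, gamma=1.0, seed=0):
    """Nystrom-sketched kernel ridge regression: restrict the solution to
    the span of m uniformly sampled landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Knm = rbf(X, X[idx], gamma)           # n x m cross-kernel
    Kmm = rbf(X[idx], X[idx], gamma)      # m x m landmark kernel
    n = len(X)
    # Standard Nystrom KRR system: (Knm^T Knm + n*lam*Kmm) a = Knm^T y
    a = np.linalg.solve(Knm.T @ Knm + n * lam * Kmm + 1e-10 * np.eye(m),
                        Knm.T @ y)
    return lambda Z: rbf(Z, X[idx], gamma) @ a

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
f = nystrom_krr(X, y)
mse = np.mean((f(X) - np.sin(X[:, 0])) ** 2)
print(round(mse, 4))  # close to zero: the sketch recovers the regression function
```

The abstract's point is that the number of landmarks (the sketch dimension) only needs to scale with the effective dimension, not with $n$, to retain optimal rates.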
The scaling properties and the multiple derivative of Legendre polynomials | In this paper, we study the scaling properties of Legendre polynomials $P_n(x)$.
We show that $P_n(ax)$, where $a$ is a constant, can be expanded as a sum of
either Legendre polynomials $P_n(x)$ or their multiple derivatives
$d^k P_n(x)/dx^k$, and we derive a general expression for the expansion
coefficients. In addition, we demonstrate that the multiple derivative
$d^k P_n(x)/dx^k$ can also be expressed as a sum of Legendre polynomials and we
obtain a recurrence relation for the coefficients.
| 0 | 0 | 1 | 0 | 0 | 0 |
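Both expansions mentioned in the abstract can be checked numerically with NumPy's Legendre-series utilities (the degrees and the scaling factor $a = 0.5$ below are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial import legendre as L

# P_5 in the Legendre basis: coefficient vector [0, 0, 0, 0, 0, 1].
c = np.zeros(6)
c[5] = 1.0

# d/dx P_5, expressed again in the Legendre basis.  The classical identity
# P'_n = sum_{k = n-1, n-3, ...} (2k+1) P_k gives P'_5 = P_0 + 5 P_2 + 9 P_4.
dc = L.legder(c)
print(dc)  # -> [1. 0. 5. 0. 9.]

# Scaling: expand P_3(a*x) in the basis {P_k(x)} by least squares.
# P_3(a*x) is a cubic, so a degree-3 Legendre fit recovers it exactly.
a = 0.5
x = np.linspace(-1.0, 1.0, 200)
coeffs = L.legfit(x, L.legval(a * x, [0, 0, 0, 1]), deg=3)
assert np.allclose(L.legval(x, coeffs), L.legval(a * x, [0, 0, 0, 1]))
```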
Knockoffs for the mass: new feature importance statistics with false discovery guarantees | An important problem in machine learning and statistics is to identify
features that causally affect the outcome. This is often impossible to do from
purely observational data, and a natural relaxation is to identify features
that are correlated with the outcome even conditioned on all other observed
features. For example, we want to identify that smoking really is correlated
with cancer conditioned on demographics. The knockoff procedure is a recent
breakthrough in statistics that, in theory, can identify truly correlated
features while guaranteeing that the false discovery rate is controlled. The
idea is to create synthetic data - knockoffs - that capture the correlations
amongst the features. However, there are substantial computational and practical challenges
to generating and using knockoffs. This paper makes several key advances that
enable knockoff application to be more efficient and powerful. We develop an
efficient algorithm to generate valid knockoffs from Bayesian Networks. Then we
systematically evaluate knockoff test statistics and develop new statistics
with improved power. The paper combines new mathematical guarantees with
systematic experiments on real and synthetic data.
| 0 | 0 | 0 | 1 | 0 | 0 |
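In the Gaussian model-X case, generating valid knockoffs is fully explicit. A minimal sketch of the standard equicorrelated construction from the knockoff literature (not the Bayesian-network algorithm this abstract proposes; the covariance and sample sizes are illustrative):

```python
import numpy as np

def gaussian_knockoffs(X, Sigma, rng):
    """Equicorrelated Gaussian model-X knockoffs for rows X ~ N(0, Sigma).

    Conditional construction:
      Xt | X ~ N(X (I - Sigma^{-1} S),  2S - S Sigma^{-1} S),
    with S = s*I and s slightly below min(1, 2*lambda_min(Sigma)).
    (Assumes Sigma is a correlation matrix.)"""
    p = Sigma.shape[0]
    s = 0.999 * min(1.0, 2.0 * np.linalg.eigvalsh(Sigma)[0])
    S = s * np.eye(p)
    Sinv = np.linalg.inv(Sigma)
    mean = X @ (np.eye(p) - Sinv @ S)
    V = 2.0 * S - S @ Sinv @ S          # conditional covariance, PSD by choice of s
    C = np.linalg.cholesky(V + 1e-10 * np.eye(p))
    return mean + rng.standard_normal(X.shape) @ C.T

rng = np.random.default_rng(0)
p, n = 5, 200_000
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)   # equicorrelated toy covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Xt = gaussian_knockoffs(X, Sigma, rng)

# Exchangeability check: Cov(Xt) ~ Sigma, and Cov(X, Xt) ~ Sigma - s*I.
emp = np.cov(np.hstack([X, Xt]).T)
print(np.round(emp[:p, p:], 2))
```

The key property being checked is that swapping a feature with its knockoff leaves the joint covariance invariant, which is what licenses the false-discovery guarantee.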
Phase boundaries in alternating field quantum XY model with Dzyaloshinskii-Moriya interaction: Sustainable entanglement in dynamics | We report all phases and corresponding critical lines of the quantum
anisotropic transverse XY model with Dzyaloshinskii-Moriya (DM) interaction
along with uniform and alternating transverse magnetic fields (ATXY) by using
appropriately chosen order parameters. We prove that when DM interaction is
weaker than the anisotropy parameter, it has no effect at all on the
zero-temperature states of the XY model with uniform transverse magnetic field
which is not the case for the ATXY model. However, when DM interaction is
stronger than the anisotropy parameter, we show appearance of a new gapless
phase - a chiral phase - in the XY model with uniform as well as alternating
field. We further report that first derivatives of nearest neighbor two-site
entanglement with respect to magnetic fields can detect all the critical lines
present in the system. We also observe that the factorization surface at
zero-temperature present in this model without DM interaction becomes a volume
on the introduction of the latter. We find that DM interaction can generate
bipartite entanglement sustainable at large times, leading to a proof of
ergodic nature of bipartite entanglement in this system, and can induce a
transition from non-monotonicity of entanglement with temperature to a
monotonic one.
| 0 | 1 | 0 | 0 | 0 | 0 |
Descent of equivalences and character bijections | Categorical equivalences between block algebras of finite groups - such as
Morita and derived equivalences - are well-known to induce character bijections
which commute with the Galois groups of field extensions. This is the
motivation for attempting to realise known Morita and derived equivalences over
non-splitting fields. This article presents various results on the theme of
descent. We start with the observation that perfect isometries induced by a
virtual Morita equivalence induce isomorphisms of centers in non-split
situations, and explain connections with Navarro's generalisation of the
Alperin-McKay conjecture. We show that Rouquier's splendid Rickard complex for
blocks with cyclic defect groups descends to the non-split case. We also prove
a descent theorem for Morita equivalences with endopermutation source.
| 0 | 0 | 1 | 0 | 0 | 0 |
A function field analogue of the Rasmussen-Tamagawa conjecture: The Drinfeld module case | In the arithmetic of function fields, Drinfeld modules play the role that
elliptic curves play in the arithmetic of number fields. The aim of this paper
is to study a non-existence problem of Drinfeld modules with constraints on
torsion points at places with large degree. This is motivated by a conjecture
of Christopher Rasmussen and Akio Tamagawa on the non-existence of abelian
varieties over number fields with some arithmetic constraints. We prove the
non-existence of Drinfeld modules satisfying Rasmussen-Tamagawa-type conditions
in the case where the inseparable degree of base fields is not divisible by the
rank of Drinfeld modules. Conversely if the rank divides the inseparable
degree, then we give an example of Drinfeld modules satisfying
Rasmussen-Tamagawa-type conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Malware Detection Using Dynamic Birthmarks | In this paper, we explore the effectiveness of dynamic analysis techniques
for identifying malware, using Hidden Markov Models (HMMs) and Profile Hidden
Markov Models (PHMMs), both trained on sequences of API calls. We contrast our
results to static analysis using HMMs trained on sequences of opcodes, and show
that dynamic analysis achieves significantly stronger results in many cases.
Furthermore, in contrasting our two dynamic analysis techniques, we find that
using PHMMs consistently outperforms our analysis based on HMMs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Semi-analytical approximations to statistical moments of sigmoid and softmax mappings of normal variables | This note is concerned with accurate and computationally efficient
approximations of moments of Gaussian random variables passed through sigmoid
or softmax mappings. These approximations are semi-analytical (i.e. they
involve the numerical adjustment of parametric forms) and highly accurate (they
yield 5% error at most). We also highlight a few niche applications of these
approximations, which arise in the context of, e.g., drift-diffusion models of
decision making or non-parametric data clustering approaches. We provide these
as examples of efficient alternatives to more tedious derivations that would be
needed if one was to approach the underlying mathematical issues in a more
formal way. We hope that this technical note will be helpful to modellers
facing similar mathematical issues, although perhaps stemming from different
academic perspectives.
| 0 | 0 | 0 | 1 | 0 | 0 |
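The flavor of such approximations can be illustrated with the classic logistic-probit matching ($\lambda^2 = \pi/8$) for the mean of a sigmoid-transformed Gaussian, compared against a Gauss-Hermite quadrature reference. This is the textbook approximation, not necessarily the parametric form fitted in the note, and the mean/variance values are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def e_sigmoid_probit(mu, var):
    """Semi-analytical approximation of E[sigmoid(x)] for x ~ N(mu, var),
    via the classic logistic/probit matching with lambda^2 = pi/8."""
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))

def e_sigmoid_quadrature(mu, var, n=80):
    """Reference value by Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * sigmoid(mu + np.sqrt(2.0 * var) * t)) / np.sqrt(np.pi)

mu, var = 1.0, 4.0   # arbitrary illustrative values
approx = e_sigmoid_probit(mu, var)
exact = e_sigmoid_quadrature(mu, var)
print(round(approx, 3), round(exact, 3))  # the two agree to about two decimals
```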