title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
A Parsimonious Dynamical Model for Structural Learning in the Human Brain | The human brain is capable of diverse feats of intelligence. A particularly
salient example is the ability to deduce structure from time-varying auditory
and visual stimuli, enabling humans to master the rules of language and to
build rich expectations of their physical environment. The broad relevance of
this ability for human cognition motivates the need for a first-principles
model explicating putative mechanisms. Here we propose a general framework for
structural learning in the brain, composed of an evolving, high-dimensional
dynamical system driven by external stimuli or internal processes. We
operationalize the scenario in which humans learn the rules that generate a
sequence of stimuli, rather than the exemplar stimuli themselves. We model
external stimuli as seemingly disordered chaotic time series generated by
complex dynamical systems; the underlying structure being deduced is then that
of the corresponding chaotic attractor. This approach allows us to demonstrate
and theoretically explain the emergence of five distinct phenomena reminiscent
of cognitive functions: (i) learning the structure of a chaotic system purely
from time series, (ii) generating new streams of stimuli from a chaotic system,
(iii) switching stream generation among multiple learned chaotic systems,
either spontaneously or in response to external perturbations, (iv) inferring
missing data from sparse observations of the chaotic system, and (v)
deciphering superimposed input from different chaotic systems. Numerically, we
show that these phenomena emerge naturally from a recurrent neural network of
Erdős-Rényi topology in which the synaptic strengths adapt in a Hebbian-like
manner. Broadly, our work blends chaos theory and artificial neural networks
to answer the long-standing question of how neural systems can learn the
structure underlying temporal sequences of stimuli.
| 0 | 0 | 0 | 0 | 1 | 0 |
Estimation of mean residual life | Yang (1978) considered an empirical estimate of the mean residual life
function on a fixed finite interval. She proved it to be strongly uniformly
consistent and (when appropriately standardized) weakly convergent to a
Gaussian process. These results are extended to the whole half line, and the
variance of the limiting process is studied. Also, nonparametric
simultaneous confidence bands for the mean residual life function are obtained
by transforming the limiting process to Brownian motion.
| 0 | 0 | 1 | 1 | 0 | 0 |
Trace-free characters and abelian knot contact homology II | We calculate ghost characters for the (5,6)-torus knot, and using them we
show that the (5,6)-torus knot gives a counter-example to Ng's conjecture
concerning the relationship between degree 0 abelian knot contact homology
and the character variety of the 2-fold branched covering of the 3-sphere
branched along the knot.
| 0 | 0 | 1 | 0 | 0 | 0 |
Surface Plasmon Excitation of Second Harmonic light: Emission and Absorption | We aim to clarify the role that absorption plays in nonlinear optical
processes in a variety of metallic nanostructures and show how it relates to
emission and conversion efficiency. We define a figure of merit that
establishes the structure's ability to either favor or impede second harmonic
generation. Our findings suggest that, despite the best efforts to enhance
local fields and light coupling via plasmon excitation, the absorbed harmonic
energy nearly always far surpasses the harmonic energy emitted
in the far field. Qualitative and quantitative understanding of absorption
processes is crucial in the evaluation of practical designs of plasmonic
nanostructures for the purpose of frequency mixing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fault diagnosability of data center networks | The data center networks $D_{n,k}$, proposed in 2008, has many desirable
features such as high network capacity. A kind of generalization of
diagnosability for network $G$ is $g$-good-neighbor diagnosability which is
denoted by $t_g(G)$. Let $\kappa^g(G)$ be the $R^g$-connectivity. Lin et al.
in [IEEE Trans. on Reliability, 65 (3) (2016) 1248--1262] and Xu et al. in
[Theor. Comput. Sci. 659 (2017) 53--63] independently posed the same open
problem: the relationship between the $R^g$-connectivity $\kappa^g(G)$ and
$t_g(G)$ of a general graph $G$ needs to be studied. In this
paper, this open problem is solved for general regular graphs. We first
establish the relationship of $\kappa^g(G)$ and $t_g(G)$, and obtain that
$t_g(G)=\kappa^g(G)+g$ under some conditions. Second, we obtain the
$g$-good-neighbor diagnosability of $D_{k,n}$, namely
$t_g(D_{k,n})=(g+1)(k-1)+n+g$ for $1\leq g\leq n-1$, under both the PMC model
and the MM model. Furthermore, we show that $D_{k,n}$ is tightly super
$(n+k-1)$-connected for $n\geq 2$ and $k\geq 2$ and we also prove that the
largest connected component of the survival graph contains almost all of the
remaining vertices in $D_{k,n}$ when $2k+n-2$ vertices are removed.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Effects of Protostellar Disk Turbulence on CO Emission Lines: A Comparison Study of Disks with Constant CO Abundance vs. Chemically Evolving Disks | Turbulence is the leading candidate for angular momentum transport in
protoplanetary disks and therefore influences disk lifetimes and planet
formation timescales. However, the turbulent properties of protoplanetary disks
are poorly constrained observationally. Recent studies have found turbulent
speeds smaller than what fully-developed MRI would produce (Flaherty et al.
2015, 2017). However, existing studies assumed a constant CO/H2 ratio of 0.0001
in locations where CO is not frozen-out or photo-dissociated. Our previous
studies of evolving disk chemistry indicate that CO is depleted by
incorporation into complex organic molecules well inside the freeze-out radius
of CO. We consider the effects of this chemical depletion on measurements of
turbulence. Simon et al. (2015) suggested that the ratio of the peak line flux
to the flux at line center of the CO J=3-2 transition is a reasonable
diagnostic of turbulence, so we focus on that metric, while adding some
analysis of the more complex effects on spatial distribution. We simulate the
emission lines of CO based on chemical evolution models presented in Yu et al.
(2016), and find that the peak-to-trough ratio changes as a function of time as
CO is destroyed. Specifically, a CO-depleted disk with high turbulent velocity
mimics the peak-to-trough ratios of a non-CO-depleted disk with lower turbulent
velocity. We suggest that disk observers and modelers take into account the
possibility of CO depletion when using line peak-to-trough ratios to constrain
the degree of turbulence in disks. Assuming that CO/H2 = 0.0001 at all disk
radii can lead to underestimates of turbulent speeds in the disk by at least
0.2 km/s.
| 0 | 1 | 0 | 0 | 0 | 0 |
A General Framework of Multi-Armed Bandit Processes by Arm Switch Restrictions | This paper proposes a general framework of multi-armed bandit (MAB) processes
by introducing a type of restrictions on the switches among arms evolving in
continuous time.
The Gittins index process is constructed for any single arm subject to the
restrictions on switches and then the optimality of the corresponding Gittins
index rule is established. The Gittins indices defined in this paper are
consistent with the ones for MAB processes in the continuous-time,
integer-time, semi-Markovian, and general discrete-time settings, so that the
new theory covers the classical models as special cases and also applies to
many other situations that have not yet been touched in the literature. While
the proof of the optimality of Gittins index policies benefits from ideas in
the existing theory of MAB processes in continuous time, new techniques are
introduced which drastically simplify the proof.
| 0 | 0 | 0 | 1 | 0 | 0 |
Autonomous Urban Localization and Navigation with Limited Information | Urban environments offer a challenging scenario for autonomous driving.
Globally localizing information, such as a GPS signal, can be unreliable due to
signal shadowing and multipath errors. Detailed a priori maps of the
environment with sufficient information for autonomous navigation typically
require driving the area multiple times to collect large amounts of data,
substantial post-processing on that data to obtain the map, and then
maintaining updates on the map as the environment changes. This paper addresses
the issue of autonomous driving in an urban environment by investigating
algorithms and an architecture to enable fully functional autonomous driving
with limited information. An algorithm to autonomously navigate urban roadways
with little to no reliance on an a priori map or GPS is developed. Localization
is performed with an extended Kalman filter with odometry, compass, and sparse
landmark measurement updates. Navigation is accomplished by a compass-based
navigation control law. Key results from Monte Carlo studies show success rates
of urban navigation under different environmental conditions. Experiments
validate the simulated results and demonstrate that, for given test conditions,
an expected range can be found for a given success rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Iterative Refinement for $\ell_p$-norm Regression | We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x}
\|x\|_{p}$ such that $A x=b,$ for all $p \in (1,2) \cup (2,\infty).$ Our
algorithms obtain a high accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p
+ |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration
requires solving an $m \times m$ linear system, $m$ being the dimension of the
ambient space.
By maintaining an approximate inverse of the linear systems that we solve in
each iteration, we give algorithms for solving $\ell_{p}$-regression to $1 /
\text{poly}(n)$ accuracy that run in time $\tilde{O}_p(m^{\max\{\omega,
7/3\}}),$ where $\omega$ is the matrix multiplication constant. For the current
best value of $\omega > 2.37$, we can thus solve $\ell_{p}$ regression as fast
as $\ell_{2}$ regression, for all constant $p$ bounded away from $1.$
Our algorithms can be combined with fast graph Laplacian linear equation
solvers to give minimum $\ell_{p}$-norm flow / voltage solutions to $1 /
\text{poly}(n)$ accuracy on an undirected graph with $m$ edges in
$\tilde{O}_{p}(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le
\tilde{O}_{p}(m^{\frac{4}{3}})$ time.
For sparse graphs and for matrices with similar dimensions, our iteration
counts and running times improve on the $p$-norm regression algorithm by
[Bubeck-Cohen-Lee-Li STOC'18] and general-purpose convex optimization
algorithms. At the core of our algorithms is an iterative refinement scheme for
$\ell_{p}$-norms, using the smoothed $\ell_{p}$-norms introduced in the work of
Bubeck et al. Given an initial solution, we construct a problem that seeks to
minimize a quadratically-smoothed $\ell_{p}$ norm over a subspace, such that a
crude solution to this problem allows us to improve the initial solution by a
constant factor, leading to algorithms with fast convergence.
| 1 | 0 | 0 | 1 | 0 | 0 |
Minimal free resolution of the associated graded ring of certain monomial curves | In this article, we give the explicit minimal free resolution of the
associated graded ring of certain affine monomial curves in affine 4-space
based on the standard basis theory. As a result, we give the minimal graded
free resolution and compute the Hilbert function of the tangent cone of these
families.
| 0 | 0 | 1 | 0 | 0 | 0 |
Knowledge distillation using unlabeled mismatched images | Current approaches for Knowledge Distillation (KD) either directly use
training data or sample from the training data distribution. In this paper, we
demonstrate the effectiveness of 'mismatched' unlabeled stimuli for performing
KD for image classification networks. For illustration, we consider scenarios
where there is a complete absence of training data, or where mismatched stimuli
have to be used to augment a small amount of training data. We demonstrate that
stimulus complexity is a key factor for distillation's good performance. Our
examples include use of various datasets for stimulating MNIST and CIFAR
teachers.
| 1 | 0 | 0 | 1 | 0 | 0 |
Forbidden triads and Creative Success in Jazz: The Miles Davis Factor | This article argues for the importance of forbidden triads - open triads with
high-weight edges - in predicting success in creative fields. Forbidden triads
had been treated as a residual category beyond closed and open triads, yet I
argue that these structures provide opportunities to combine socially evolved
styles in new ways. Using data on the entire history of recorded jazz from 1896
to 2010, I show that observed collaborations have tolerated the openness of
high-weight triads more than expected, observed jazz sessions had more
forbidden triads than expected, and the density of forbidden triads contributed
to the success of recording sessions, measured by the number of record releases
of session material. The article also shows that the sessions of Miles Davis
received an especially high boost from forbidden triads.
| 0 | 1 | 0 | 1 | 0 | 0 |
Structural analysis of rubble-pile asteroids applied to collisional evolution | Solar system small bodies come in a wide variety of shapes and sizes, which
are achieved following very individual evolutionary paths through billions of
years. This paper focuses on the reshaping process of rubble-pile asteroids
driven by meteorite impacts. In our study, numerous possible equilibrium
configurations are obtained via Monte Carlo simulation, and the structural
stability of these configurations is determined via eigen analysis of the
geometric constructions. The eigen decomposition reveals a connection between
the cluster's reactions and the types of external disturbance. Numerical
simulations are performed to verify the analytical results. The gravitational
N-body code pkdgrav is used to mimic the responses of the cluster under
intermittent non-dispersive impacts. We statistically confirm that the
stability index, the total gravitational potential and the volume of inertia
ellipsoid show consistent tendencies of variation. A common regime is found in
which the clusters tend towards crystallization under intermittent impacts,
i.e., only the configurations with high structural stability survive under the
external disturbances. The results suggest that trivial non-disruptive impacts
might play an important role in the rearrangement of the constituent blocks,
which may strengthen these rubble piles and help to build a robust structure
under impacts of similar magnitude. The final part of this study consists of
systematic simulations over two parameters, the projectile momentum and the
rotational speed of the cluster. The results show a critical value exists for
the projectile momentum, as predicted by theory, below which all clusters
become unresponsive to external disturbances; and rotation proves to be
significant, as it exhibits an "enhancing" effect on loosely packed clusters,
which coincides with the observation that several fast-spinning asteroids have
low bulk densities.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Imani Periodic Functions: Genesis and Preliminary Results | The Leah-Hamiltonian, $H(x,y)=y^2/2+3x^{4/3}/4$, is introduced as a
functional equation for $x(t)$ and $y(t)$. By means of a nonlinear
transformation to new independent variables, we show that this functional
equation has a special class of periodic solutions which we designate the Imani
functions. The explicit construction of these functions is done such that they
possess many of the general properties of the standard trigonometric cosine and
sine functions. We conclude by providing a listing of a number of currently
unresolved issues relating to the Imani functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Towards fully automated protein structure elucidation with NMR spectroscopy | Nuclear magnetic resonance (NMR) spectroscopy is one of the leading
techniques for protein studies. The method features a number of properties
that allow macromolecular interactions to be explained mechanistically and
structures to be resolved with atomic resolution. However, due to laborious
data analysis, the full potential of NMR spectroscopy remains unexploited.
Here we present an
approach aiming at automation of two major bottlenecks in the analysis
pipeline, namely, peak picking and chemical shift assignment. Our approach
combines deep learning, non-parametric models and combinatorial optimization,
and is able to detect signals of interest in multidimensional NMR data with
high accuracy and match them with atoms in medium-length protein sequences,
a preliminary step toward solving protein spatial structure.
| 0 | 0 | 0 | 1 | 0 | 0 |
On families of fibred knots with equal Seifert forms | For every genus $g\geq 2$, we construct an infinite family of strongly
quasipositive fibred knots having the same Seifert form as the torus knot
$T(2,2g+1)$. In particular, their signatures and four-genera are maximal and
their homological monodromies (hence their Alexander module structures) agree.
On the other hand, the geometric stretching factors are pairwise distinct and
the knots are pairwise not ribbon concordant.
| 0 | 0 | 1 | 0 | 0 | 0 |
Robust Computation in 2D Absolute EIT (a-EIT) Using D-bar Methods with the `exp' Approximation | Objective: Absolute images have important applications in medical Electrical
Impedance Tomography (EIT) imaging, but the traditional minimization- and
statistics-based computations are very sensitive to modeling errors and noise.
In this paper, it is demonstrated that D-bar reconstruction methods for
absolute EIT are robust to such errors. Approach: The effects of errors in
domain shape and electrode placement on absolute images computed with 2D D-bar
reconstruction algorithms are studied on experimental data. Main Results: It is
demonstrated with tank data from several EIT systems that these methods are
quite robust to such modeling errors, and furthermore the artefacts arising
from such modeling errors are similar to those occurring in classic
time-difference EIT imaging. Significance: This study is promising for clinical
applications where absolute EIT images are desirable, but previously thought
impossible.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Comparison of Spatial-based Targeted Disease Containment Strategies using Mobile Phone Data | Epidemic outbreaks are an important healthcare challenge, especially in
developing countries where they represent one of the major causes of mortality.
Approaches that can rapidly target subpopulations for surveillance and control
are critical for enhancing containment processes during epidemics.
Using a real-world dataset from Ivory Coast, this work presents an attempt to
unveil the socio-geographical heterogeneity of disease transmission dynamics.
By employing a spatially explicit meta-population epidemic model derived from
mobile phone Call Detail Records (CDRs), we investigate how the differences in
mobility patterns may affect the course of a realistic infectious disease
outbreak. We consider different existing measures of the spatial dimension of
human mobility and interactions, and we analyse their relevance in identifying
the highest risk sub-population of individuals, as the best candidates for
isolation countermeasures. The approaches presented in this paper provide
further evidence that mobile phone data can be effectively exploited to
facilitate our understanding of individuals' spatial behaviour and its
relationship with the risk of infectious diseases' contagion. In particular, we
show that CDRs-based indicators of individuals' spatial activities and
interactions hold promise for gaining insight into contagion heterogeneity and
thus for developing containment strategies to support decision-making during
country-level pandemics.
| 1 | 1 | 0 | 0 | 0 | 0 |
Attosecond Streaking in the Water Window: A New Regime of Attosecond Pulse Characterization | We report on the first streaking measurement of water-window attosecond
pulses generated via high harmonic generation, driven by sub-2-cycle,
CEP-stable, 1850 nm laser pulses. Both the central photon energy and the energy
bandwidth far exceed what has been demonstrated thus far, warranting the
investigation of the attosecond streaking technique for the soft X-ray regime
and the limits of the FROGCRAB retrieval algorithm under such conditions. We
also discuss the problem of attochirp compensation and issues regarding much
lower photo-ionization cross sections compared with the XUV in addition to the
fact that several shells of target gases are accessed simultaneously. Based on
our investigation, we caution that the vastly different conditions in the soft
X-ray regime warrant a diligent examination of the fidelity of the measurement
and the retrieval procedure.
| 0 | 1 | 0 | 0 | 0 | 0 |
The intrinsic Baldwin effect in broad Balmer lines of six long-term monitored AGNs | We investigate the intrinsic Baldwin effect (Beff) of the broad H$\alpha$ and
H$\beta$ emission lines for six Type 1 active galactic nuclei (AGNs) with
different broad line characteristics: two Seyfert 1 (NGC 4151 and NGC 5548),
two AGNs with double-peaked broad line profiles (3C 390.3 and Arp 102B), one
narrow line Seyfert 1 (Ark 564), and one high-luminosity quasar with highly red
asymmetric broad line profiles (E1821+643). We found that a significant
intrinsic Beff was present in all Type 1 AGNs in our sample. Moreover, we do
not see a strong difference in intrinsic Beff slopes among different types of
AGNs, which probably have different physical properties, such as inclination, broad
line region geometry, or accretion rate. Additionally, we found that the
intrinsic Beff was not connected with the global one, which, instead, could not
be detected in the broad H$\alpha$ or H$\beta$ emission lines. In the case of
NGC 4151, the detected variation of the Beff slope could be due to the change
in the site of line formation in the BLR. Finally, the intrinsic Beff might be
caused by the additional optical continuum component that is not part of the
ionization continuum.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Novel Formal Agent-based Simulation Modeling Framework of an AIDS Complex Adaptive System | HIV/AIDS spread depends upon complex patterns of interaction among various
subsets emerging at the population level. This added complexity makes it difficult
to study and model AIDS and its dynamics. AIDS is therefore a natural candidate
to be modeled using agent-based modeling, a paradigm well-known for modeling
Complex Adaptive Systems (CAS). While agent-based models are well-known to
model CAS effectively, models can often be ambiguous, and the use of purely
text-based specifications (such as ODD) can make them difficult to replicate.
Previous work has shown how formal specification may be used in
conjunction with agent-based modeling to develop models of various CAS.
However, to the best of our knowledge, no such model has been developed for
AIDS. In this paper, we present a Formal Agent-Based
Simulation modeling framework (FABS-AIDS) for an AIDS-based CAS. FABS-AIDS
employs the use of a formal specification model in conjunction with an
agent-based model to reduce ambiguity as well as improve clarity in the model
definition. The proposed model demonstrates the effectiveness of using formal
specification in conjunction with agent-based simulation for developing models
of CAS in general and, social network-based agent-based models, in particular.
| 1 | 1 | 0 | 0 | 0 | 0 |
Interplay of Fluorescence and Phosphorescence in Organic Biluminescent Emitters | Biluminescent organic emitters show simultaneous fluorescence and
phosphorescence at room temperature. So far, the optimization of the room
temperature phosphorescence (RTP) in these materials has drawn the attention of
research. However, the continuous wave operation of these emitters will
consequently turn them into systems with vastly imbalanced singlet and triplet
populations, which is due to the respective excited state lifetimes. This study
reports on the exciton dynamics of the biluminophore NPB
(N,N-di(1-naphthyl)-N,N-diphenyl-(1,1-biphenyl)-4,4-diamine). In the extreme
case, the singlet and triplet exciton lifetimes stretch from 3 ns to 300 ms,
respectively. Through sample engineering and oxygen quenching experiments, the
triplet exciton density can be controlled over several orders of magnitude,
allowing us to study exciton interactions between singlet and triplet
manifolds. The results show that singlet-triplet annihilation reduces the
overall biluminescence efficiency already at moderate excitation levels.
Additionally, the presented system represents an illustrative role model to
study excitonic effects in organic materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ion-impact-induced multifragmentation of liquid droplets | An instability of a liquid droplet traversed by an energetic ion is explored.
This instability is brought about by the predicted shock wave induced by the
ion. An observation of multifragmentation of small droplets traversed by ions
with high linear energy transfer is suggested to demonstrate the existence of
shock waves. A number of effects are analysed in effort to find the conditions
for such an experiment to be signifying. The presence of shock waves crucially
affects the scenario of radiation damage with ions since the shock waves
significantly contribute to the thermomechanical damage of biomolecules as well
as the transport of reactive species. While the scenario has been supported by
analyses of biological experiments, the shock waves have not yet been observed
directly, even though a number of ideas for experiments to detect them have
been exchanged at conferences.
| 0 | 1 | 0 | 0 | 0 | 0 |
Planning Hybrid Driving-Stepping Locomotion on Multiple Levels of Abstraction | Navigating in search and rescue environments is challenging, since a variety
of terrains has to be considered. Hybrid driving-stepping locomotion, as
provided by our robot Momaro, is a promising approach. Similar to other
locomotion methods, it incorporates many degrees of freedom---offering high
flexibility but making planning computationally expensive for larger
environments.
We propose a navigation planning method, which unifies different levels of
representation in a single planner. In the vicinity of the robot, it provides
plans with a fine resolution and a high robot state dimensionality. With
increasing distance from the robot, plans become coarser and the robot state
dimensionality decreases. We compensate for this loss of information by enriching
coarser representations with additional semantics. Experiments show that the
proposed planner provides plans for large, challenging scenarios in feasible
time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Blackbody Radiation in Classical Physics: A Historical Perspective | We point out that current textbooks of modern physics are a century
out-of-date in their treatment of blackbody radiation within classical physics.
Relativistic classical electrodynamics including classical electromagnetic
zero-point radiation gives the Planck spectrum with zero-point radiation as the
blackbody radiation spectrum. In contrast, nonrelativistic mechanics cannot
support the idea of zero-point energy; therefore if nonrelativistic classical
statistical mechanics or nonrelativistic mechanical scatterers are invoked for
radiation equilibrium, one arrives at only the low-frequency Rayleigh-Jeans
part of the spectrum which involves no zero-point energy, and does not include
the high-frequency part of the spectrum involving relativistically-invariant
classical zero-point radiation. Here we first discuss the correct understanding
of blackbody radiation within relativistic classical physics, and then we
review the historical treatment. Finally, we point out how the presence of
Lorentz-invariant classical zero-point radiation and the use of relativistic
particle interactions transform the previous historical arguments so as now to
give the Planck spectrum including classical zero-point radiation. Within
relativistic classical electromagnetic theory, Planck's constant h appears as
the scale of source-free zero-point radiation.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Lyman Continuum escape fraction of faint galaxies at z~3.3 in the CANDELS/GOODS-North, EGS, and COSMOS fields with LBC | The reionization of the Universe is one of the most important topics of
present day astrophysical research. The most plausible candidates for the
reionization process are star-forming galaxies, which according to the
predictions of the majority of the theoretical and semi-analytical models
should dominate the HI ionizing background at z~3. We aim at measuring the
Lyman continuum escape fraction, which is one of the key parameters to compute
the contribution of star-forming galaxies to the UV background. We have used
ultra-deep U-band imaging (U=30.2mag at 1sigma) by LBC/LBT in the
CANDELS/GOODS-North field, as well as deep imaging in COSMOS and EGS fields, in
order to estimate the Lyman continuum escape fraction of 69 star-forming
galaxies with secure spectroscopic redshifts at 3.27<z<3.40 to faint magnitude
limits (L=0.2L*, or equivalently M1500~-19). We have measured through stacks a
stringent upper limit (<1.7% at 1sigma) for the relative escape fraction of HI
ionizing photons from bright galaxies (L>L*), while for the faint population
(L=0.2L*) the limit to the escape fraction is ~10%. We have computed the
contribution of star-forming galaxies to the observed UV background at z~3 and
we have found that it is not enough to keep the Universe ionized at these
redshifts, unless their escape fraction increases significantly (>10%) at low
luminosities (M1500>-19). We compare our results on the Lyman continuum escape
fraction of high-z galaxies with recent estimates in the literature and discuss
future prospects to shed light on the end of the Dark Ages. In the future,
strong gravitational lensing will be fundamental to measure the Lyman continuum
escape fraction down to faint magnitudes (M1500~-16) which are inaccessible
with the present instrumentation on blank fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
The effect of stellar and AGN feedback on the low redshift Lyman-$α$ forest in the Sherwood simulation suite | We study the effect of different feedback prescriptions on the properties of
the low redshift ($z\leq1.6$) Ly$\alpha$ forest using a selection of
hydrodynamical simulations drawn from the Sherwood simulation suite. The
simulations incorporate stellar feedback, AGN feedback and a simplified scheme
for efficiently modelling the low column density Ly$\alpha$ forest. We confirm
that a discrepancy remains between Cosmic Origins Spectrograph (COS) observations of
the Ly$\alpha$ forest column density distribution function (CDDF) at $z \simeq
0.1$ for high column density systems ($N_{\rm HI}>10^{14}\rm\,cm^{-2}$), as
well as Ly$\alpha$ velocity widths that are too narrow compared to the COS
data. Stellar or AGN feedback -- as currently implemented in our simulations --
have only a small effect on the CDDF and velocity width distribution. We
conclude that resolving the discrepancy between the COS data and simulations
requires an increase in the temperature of overdense gas with $\Delta=4$--$40$,
either through additional He$\,\rm \scriptstyle II\ $ photo-heating at $z>2$ or
fine-tuned feedback that ejects overdense gas into the IGM at just the right
temperature for it to still contribute significantly to the Ly$\alpha$ forest.
Alternatively, a larger, currently unresolved turbulent component to the line
width could resolve the discrepancy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Asteroid 2017 FZ2 et al.: signs of recent mass-shedding from YORP? | The first direct detection of the asteroidal YORP effect, a phenomenon that
changes the spin states of small bodies due to thermal reemission of sunlight
from their surfaces, was obtained for (54509) YORP 2000 PH5. Such an alteration
can slowly increase the rotation rate of asteroids, driving them to reach their
fission limit and causing their disruption. This process can produce binaries
and unbound asteroid pairs. Secondary fission opens the door to the eventual
formation of transient but genetically-related groupings. Here, we show that
the small near-Earth asteroid (NEA) 2017 FZ2 was a co-orbital of our planet of
the quasi-satellite type prior to their close encounter on 2017 March 23.
Because of this flyby with the Earth, 2017 FZ2 has become a non-resonant NEA.
Our N-body simulations indicate that this object may have experienced
quasi-satellite engagements with our planet in the past and it may return as a
co-orbital in the future. We identify a number of NEAs that follow similar
paths, the largest named being YORP, which is also an Earth co-orbital. An
apparent excess of NEAs moving in these peculiar orbits is studied within the
framework of two orbit population models. A possibility that emerges from this
analysis is that such an excess, if real, could be the result of mass shedding
from YORP itself or a putative larger object that produced YORP. Future
spectroscopic observations of 2017 FZ2 during its next visit in 2018 (and of
related objects when feasible) may be able to confirm or reject this
interpretation.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Increasing Self-Confidence in Non-Bayesian Social Learning over Time-Varying Directed Graphs | We study the convergence of the log-linear non-Bayesian social learning
update rule, for a group of agents that collectively seek to identify a
parameter that best describes a joint sequence of observations. Contrary to
recent literature, we focus on the case where agents assign decaying weights to
their neighbors, and where the network is not connected at every time instant
but only over some finite time intervals. We provide a necessary and sufficient
condition on the rate at which agents decrease the weights that still
guarantees social learning.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exploring a search for long-duration transient gravitational waves associated with magnetar bursts | Soft gamma repeaters and anomalous X-ray pulsars are thought to be magnetars,
neutron stars with strong magnetic fields of order $\mathord{\sim}
10^{13}$--$10^{15} \, \mathrm{gauss}$. These objects emit intermittent bursts
of hard X-rays and soft gamma rays. Quasiperiodic oscillations in the X-ray
tails of giant flares imply the existence of neutron star oscillation modes
which could emit gravitational waves powered by the magnetar's magnetic energy
reservoir. We describe a method to search for transient gravitational-wave
signals associated with magnetar bursts with durations of 10s to 1000s of
seconds. The sensitivity of this method is estimated by adding simulated
waveforms to data from the sixth science run of the Laser Interferometer
Gravitational-Wave Observatory (LIGO). We find a search sensitivity in terms of
the root sum square strain amplitude of $h_{\mathrm{rss}} = 1.3 \times 10^{-21}
\, \mathrm{Hz}^{-1/2}$ for a half sine-Gaussian waveform with a central
frequency $f_0 = 150 \, \mathrm{Hz}$ and a characteristic time $\tau = 400 \,
\mathrm{s}$. This corresponds to a gravitational wave energy of
$E_{\mathrm{GW}} = 4.3 \times 10^{46} \, \mathrm{erg}$, the same order of
magnitude as the 2004 giant flare which had an estimated electromagnetic energy
of $E_{\mathrm{EM}} = \mathord{\sim} 1.7 \times 10^{46} (d/ 8.7 \,
\mathrm{kpc})^2 \, \mathrm{erg}$, where $d$ is the distance to SGR 1806-20. We
present an extrapolation of these results to Advanced LIGO, estimating a
sensitivity to a gravitational wave energy of $E_{\mathrm{GW}} = 3.2 \times
10^{43} \, \mathrm{erg}$ for a magnetar at a distance of $1.6 \, \mathrm{kpc}$.
These results suggest this search method can probe significantly below the
energy budgets for magnetar burst emission mechanisms such as crust cracking
and hydrodynamic deformation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optomechanical characterization of silicon nitride membrane arrays | We report on the optical and mechanical characterization of arrays of
parallel micromechanical membranes. Pairs of high-tensile stress, 100 nm-thick
silicon nitride membranes are assembled parallel with each other with
separations ranging from 8.5 to 200 $\mu$m. Their optical properties are
accurately determined using a combination of broadband and monochromatic
illuminations and the lowest vibrational mode frequencies and mechanical
quality factors are determined interferometrically. The results and techniques
demonstrated are promising for investigations of collective phenomena in
optomechanical arrays.
| 0 | 1 | 0 | 0 | 0 | 0 |
Coupling between a charge density wave and magnetism in a Heusler material | The prototypical magnetic shape-memory alloy Ni$_2$MnGa undergoes various
phase transitions as a function of temperature, pressure, and doping. In the
low-temperature phases below 260 K, an incommensurate structural modulation
occurs along the [110] direction which is thought to arise from softening of a
phonon mode. It is not at present clear how this phenomenon is related, if at
all, to the magnetic memory effect. Here we report time-resolved measurements
which track both the structural and magnetic components of the phase transition
from the modulated cubic phase as it is brought into the high-symmetry phase.
The results suggest that the photoinduced demagnetization modifies the Fermi
surface in regions that couple strongly to the periodicity of the structural
modulation through the nesting vector. The amplitude of the periodic lattice
distortion, however, appears to be less affected by the demagnetization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Best-Effort FPGA Programming: A Few Steps Can Go a Long Way | FPGA-based heterogeneous architectures provide programmers with the ability
to customize their hardware accelerators for flexible acceleration of many
workloads. Nonetheless, such advantages come at the cost of sacrificing
programmability. FPGA vendors and researchers attempt to improve the
programmability through high-level synthesis (HLS) technologies that can
directly generate hardware circuits from high-level language descriptions.
However, reading through recent publications on FPGA designs using HLS, one
often gets the impression that FPGA programming is still hard in that it leaves
programmers to explore a very large design space with many possible
combinations of HLS optimization strategies.
In this paper we make two important observations and contributions. First, we
demonstrate a rather surprising result: FPGA programming can be made easy by
following a simple best-effort guideline of five refinement steps using HLS. We
show that for a broad class of accelerator benchmarks from MachSuite, the
proposed best-effort guideline improves the FPGA accelerator performance by
42-29,030x. Compared to the baseline CPU performance, the FPGA accelerator
performance is improved from an average 292.5x slowdown to an average 34.4x
speedup. Moreover, we show that the refinement steps in the best-effort
guideline, consisting of explicit data caching, customized pipelining,
processing element duplication, computation/communication overlapping and
scratchpad reorganization, correspond well to the best practice guidelines for
multicore CPU programming. Although our best-effort guideline may not always
lead to the optimal solution, it substantially simplifies the FPGA programming
effort, and will greatly support the wide adoption of FPGA-based acceleration
by the software programming community.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards the study of least squares estimators with convex penalty | Penalized least squares estimation is a popular technique in high-dimensional
statistics. It includes such methods as the LASSO, the group LASSO, and the
nuclear norm penalized least squares. The existing theory of these methods is
not fully satisfactory, since it allows one to prove oracle inequalities that
hold with fixed high probability only for estimators depending on this probability.
Furthermore, the control of compatibility factors appearing in the oracle
bounds is often not explicit. Some very recent developments suggest that the
theory of oracle inequalities can be revised in an improved way. In this paper,
we provide an overview of ideas and tools leading to such an improved theory.
We show that, along with overcoming the disadvantages mentioned above, the
methodology extends to the hilbertian framework and it applies to a large class
of convex penalties. This paper is partly expository. In particular, we provide
adapted proofs of some results from other recent work.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Multi-traffic Inter-cell Interference Coordination Scheme in Dense Cellular Networks | This paper proposes a novel semi-distributed and practical ICIC scheme based
on the Almost Blank SubFrame (ABSF) approach specified by 3GPP. We define two
mathematical programming problems for the cases of guaranteed and best-effort
traffic, and use game theory to study the properties of the derived ICIC
distributed schemes, which are compared in detail against unaffordable
centralized schemes. Based on the analysis of the proposed models, we define
Distributed Multi-traffic Scheduling (DMS), a unified distributed framework for
adaptive interference-aware scheduling of base stations in future cellular
networks which accounts for both guaranteed and best-effort traffic. DMS
follows a two-tier approach, consisting of local ABSF schedulers, which perform
the resource distribution between guaranteed and best effort traffic, and a
lightweight local supervisor, which coordinates ABSF local decisions. As a
result of such a two-tier design, DMS requires very light signaling to drive
the local schedulers to globally efficient operating points. As shown by means
of numerical results, DMS makes it possible to (i) maximize radio resource
reuse, (ii) provide the requested quality for guaranteed traffic, (iii) minimize the time
dedicated to guaranteed traffic to leave room for best-effort traffic, and (iv)
maximize resource utilization efficiency for best-effort traffic.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distributional Adversarial Networks | We propose a framework for adversarial training that relies on a sample
rather than a single sample point as the fundamental unit of discrimination.
Inspired by discrepancy measures and two-sample tests between probability
distributions, we propose two such distributional adversaries that operate and
predict on samples, and show how they can be easily implemented on top of
existing models. Various experimental results show that generators trained with
our distributional adversaries are much more stable and are remarkably less
prone to mode collapse than traditional models trained with pointwise
prediction discriminators. The application of our framework to domain
adaptation also results in considerable improvement over recent
state-of-the-art.
| 1 | 0 | 0 | 0 | 0 | 0 |
First order magneto-structural transition and magnetocaloric effect in MnNiGe$_{0.9}$Ga$_{0.1}$ | The first order magneto-structural transition ($T_t\simeq95$ K) and
magnetocaloric effect in MnNiGe$_{0.9}$Ga$_{0.1}$ are studied via powder x-ray
diffraction and magnetization measurements. Temperature dependent x-ray
diffraction measurements reveal that the magneto-structural transition remains
incomplete down to 23 K, resulting in a coexistence of antiferromagnetic and
ferromagnetic phases at low temperatures. The fraction of the high temperature
Ni$_2$In-type hexagonal ferromagnetic and low temperature TiNiSi-type
orthorhombic antiferromagnetic phases is estimated to be $\sim 40\%$ and $\sim
60\%$, respectively, at 23 K. The ferromagnetic phase fraction increases with
increasing field; this field-induced phase is found to be in a non-equilibrium
state and gives rise to a weak re-entrant transition on warming under
field-cooled conditions. The alloy shows a large inverse magnetocaloric effect
across the magneto-structural
transition and a conventional magnetocaloric effect across the second order
paramagnetic to ferromagnetic transition. The relative cooling power which
characterizes the performance of a magnetic refrigerant material is found to be
reasonably high compared to the other reported magnetocaloric alloys.
| 0 | 1 | 0 | 0 | 0 | 0 |
Parametric Gaussian Process Regression for Big Data | This work introduces the concept of parametric Gaussian processes (PGPs),
which is built upon the seemingly self-contradictory idea of making Gaussian
processes parametric. Parametric Gaussian processes, by construction, are
designed to operate in "big data" regimes where one is interested in
quantifying the uncertainty associated with noisy data. The proposed
methodology circumvents the well-established need for stochastic variational
inference, a scalable algorithm for approximating posterior distributions. The
effectiveness of the proposed approach is demonstrated using an illustrative
example with simulated data and a benchmark dataset in the airline industry
with approximately 6 million records.
| 0 | 0 | 0 | 1 | 0 | 0 |
Achieving non-discrimination in prediction | Discrimination-aware classification is receiving an increasing attention in
data science fields. The pre-process methods for constructing a
discrimination-free classifier first remove discrimination from the training
data, and then learn the classifier from the cleaned data. However, they lack a
theoretical guarantee for the potential discrimination when the classifier is
deployed for prediction. In this paper, we fill this gap by mathematically
bounding the probability of the discrimination in prediction being within a
given interval in terms of the training data and classifier. We adopt the
causal model for modeling the data generation mechanism, and formally define
discrimination in the population, in a dataset, and in prediction. We obtain two
important theoretical results: (1) the discrimination in prediction can still
exist even if the discrimination in the training data is completely removed;
and (2) not all pre-process methods can ensure non-discrimination in prediction
even though they can achieve non-discrimination in the modified training data.
Based on the results, we develop a two-phase framework for constructing a
discrimination-free classifier with a theoretical guarantee. The experiments
demonstrate the theoretical results and show the effectiveness of our two-phase
framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fixing and almost fixing a planar convex body | A set of points $a_1, \ldots, a_n$ fixes a planar convex body $K$ if the
points are on $\mathrm{bd}\,K$, the boundary of $K$, and if any small move of
$K$ brings some point of the set into $\mathrm{int}\,K$, the interior of $K$.
The points $a_1, \ldots, a_n \in \mathrm{bd}\,K$ almost fix $K$ if, for any
neighbourhoods $V_i$ of $a_i$ ($i = 1, \ldots, n$), there are pairs of points
$a_i', a_i'' \in V_i \cap \mathrm{bd}\,K$ such that $a_1', a_1'', \ldots,
a_n', a_n''$ fix $K$. This note compares several definitions of these notions
and gives first order conditions for $a_1, \ldots, a_n \in \mathrm{bd}\,K$ to
fix, and to almost fix, $K$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Undersampled dynamic X-ray tomography with dimension reduction Kalman filter | In this paper, we consider prior-based dimension reduction Kalman filter for
undersampled dynamic X-ray tomography. With this method, the X-ray
reconstructions are parameterized by a low-dimensional basis. Thus, the
proposed method is a) computationally very light; and b) extremely robust as
all the computations can be done explicitly. With real and simulated
measurement data, we show that the method provides accurate reconstructions
even with a very limited number of angular directions.
| 0 | 0 | 0 | 1 | 0 | 0 |
End-to-End Sound Source Separation Conditioned On Instrument Labels | Can we perform an end-to-end sound source separation (SSS) with a variable
number of sources using a deep learning model? This paper presents an extension
of the Wave-U-Net model which allows end-to-end monaural source separation with
a non-fixed number of sources. Furthermore, we propose multiplicative
conditioning with instrument labels at the bottleneck of the Wave-U-Net and
show its effect on the separation results. This approach can be further
extended to other types of conditioning such as audio-visual SSS and
score-informed SSS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Group analysis of general Burgers-Korteweg-de Vries equations | The complete group classification problem for the class of (1+1)-dimensional
$r$th order general variable-coefficient Burgers-Korteweg-de Vries equations is
solved for arbitrary values of $r$ greater than or equal to two. We find the
equivalence groupoids of this class and its various subclasses obtained by
gauging equation coefficients with equivalence transformations. Showing that
this class and certain gauged subclasses are normalized in the usual sense, we
reduce the complete group classification problem for the entire class to that
for the selected maximally gauged subclass, and it is the latter problem that
is solved efficiently using the algebraic method of group classification.
Similar studies are carried out for the two subclasses of equations with
coefficients depending at most on the time or space variable, respectively.
Applying an original technique, we classify Lie reductions of equations from
the class under consideration with respect to its equivalence group. The study
of alternative gauges for equation coefficients with equivalence
transformations allows us not only to justify the choice of the most
appropriate gauge for the group classification but also to construct for the
first time classes of differential equations with nontrivial generalized
equivalence group such that equivalence-transformation components corresponding
to equation variables locally depend on nonconstant arbitrary elements of the
class. For the subclass of equations with coefficients depending at most on the
time variable, which is normalized in the extended generalized sense, we
explicitly construct its extended generalized equivalence group in a rigorous
way. The new notion of effective generalized equivalence group is introduced.
| 0 | 1 | 1 | 0 | 0 | 0 |
Evolutionary Generative Adversarial Networks | Generative adversarial networks (GAN) have been effective for learning
generative models for real-world data. However, existing GANs (GAN and its
variants) tend to suffer from training problems such as instability and mode
collapse. In this paper, we propose a novel GAN framework called evolutionary
generative adversarial networks (E-GAN) for stable GAN training and improved
generative performance. Unlike existing GANs, which employ a pre-defined
adversarial objective function to alternately train a generator and a
discriminator, we utilize different adversarial training objectives as mutation
operations and evolve a population of generators to adapt to the environment
(i.e., the discriminator). We also utilize an evaluation mechanism to measure
the quality and diversity of generated samples, such that only well-performing
generator(s) are preserved and used for further training. In this way, E-GAN
overcomes the limitations of an individual adversarial training objective and
always preserves the best offspring, contributing to progress in and the
success of GANs. Experiments on several datasets demonstrate that E-GAN
achieves convincing generative performance and reduces the training problems
inherent in existing GANs.
| 0 | 0 | 0 | 1 | 0 | 0 |
Extended Sammon Projection and Wavelet Kernel Extreme Learning Machine for Gait-Based Legitimate User Identification on Smartphones | Smartphones have become ubiquitously integrated into our home and work
environments; however, users normally rely on explicit but inefficient
identification processes in a controlled environment. Therefore, when a device
is stolen, a thief can gain access to the owner's personal information and
services despite the stored password(s). In light of this potential scenario, this work
demonstrates the possibilities of legitimate user identification in a
semi-controlled environment through the built-in smartphones motion dynamics
captured by two different sensors. This is a two-fold process: sub-activity
recognition followed by user/impostor identification. Prior to the
identification; Extended Sammon Projection (ESP) method is used to reduce the
redundancy among the features. To validate the proposed system, we first
collected data from four users walking with their device freely placed in one
of their pants pockets. Through extensive experimentation, we demonstrate that
time- and frequency-domain features, optimized together by ESP and used to
train a wavelet kernel based extreme learning machine classifier, form an
effective system that identifies the legitimate user or an impostor with
\(97\%\) accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the free path length distribution for linear motion in an n-dimensional box | We consider the distribution of free path lengths, or the distance between
consecutive bounces of random particles, in an n-dimensional rectangular box.
If each particle travels a distance R, then, as R tends to infinity, the
distribution of free path lengths coincides with the distribution of the
length of the intersection of a random line with the box (for a natural
ensemble of random lines), and we
give an explicit formula (piecewise real analytic) for the probability density
function in dimension two and three.
In dimension two we also consider a closely related model where each particle
is allowed to bounce N times, as N tends to infinity, and give an explicit
(again piecewise real analytic) formula for its probability density function.
Further, in both models we can recover the side lengths of the box from the
location of the discontinuities of the probability density functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spin-Orbit Misalignments of Three Jovian Planets via Doppler Tomography | We present measurements of the spin-orbit misalignments of the hot Jupiters
HAT-P-41 b and WASP-79 b, and the aligned warm Jupiter Kepler-448 b. We
obtained these measurements with Doppler tomography, where we spectroscopically
resolve the line profile perturbation during the transit due to the
Rossiter-McLaughlin effect. We analyze time series spectra obtained during
portions of five transits of HAT-P-41 b, and find a value of the spin-orbit
misalignment of $\lambda = -22.1_{-6.0}^{+0.8 \circ}$. We reanalyze the radial
velocity Rossiter-McLaughlin data on WASP-79 b obtained by Addison et al.
(2013) using Doppler tomographic methodology. We measure
$\lambda=-99.1_{-3.9}^{+4.1\circ}$, consistent with but more precise than the
value found by Addison et al. (2013). For Kepler-448 b we perform a joint fit
to the Kepler light curve, Doppler tomographic data, and a radial velocity
dataset from Lillo-Box et al. (2015). We find an approximately aligned orbit
($\lambda=-7.1^{+4.2 \circ}_{-2.8}$), in modest disagreement with the value
found by Bourrier et al. (2015). Through analysis of the Kepler light curve we
measure a stellar rotation period of $P_{\mathrm{rot}}=1.27 \pm 0.11$ days, and
use this to argue that the full three-dimensional spin-orbit misalignment is
small, $\psi\sim0^{\circ}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accurate parameter estimation for Bayesian Network Classifiers using Hierarchical Dirichlet Processes | This paper introduces a novel parameter estimation method for the probability
tables of Bayesian network classifiers (BNCs), using hierarchical Dirichlet
processes (HDPs). The main result of this paper is to show that improved
parameter estimation allows BNCs to outperform leading learning methods such as
Random Forest for both 0-1 loss and RMSE, albeit just on categorical datasets.
As data assets become larger, entering the hyped world of "big", efficient
accurate classification requires three main elements: (1) classifiers with
low-bias that can capture the fine-detail of large datasets (2) out-of-core
learners that can learn from data without having to hold it all in main memory
and (3) models that can classify new data very efficiently.
The latest Bayesian network classifiers (BNCs) satisfy these requirements.
Their bias can be controlled easily by increasing the number of parents of the
nodes in the graph. Their structure can be learned out of core with a limited
number of passes over the data. However, as the bias is made lower to
accurately model classification tasks, so is the accuracy of their parameters'
estimates, as each parameter is estimated from ever decreasing quantities of
data. In this paper, we introduce the use of Hierarchical Dirichlet Processes
for accurate BNC parameter estimation.
We conduct an extensive set of experiments on 68 standard datasets and
demonstrate that our resulting classifiers perform very competitively with
Random Forest in terms of prediction, while keeping the out-of-core capability
and superior classification time.
| 1 | 0 | 0 | 1 | 0 | 0 |
Modeling and Simulation of the Dynamics of the Quick Return Mechanism: A Bond Graph Approach | This paper applies the multibond graph approach for rigid multibody systems
to model the dynamics of general spatial mechanisms. The commonly used quick
return mechanism, which comprises revolute as well as prismatic joints, has
been chosen as a representative example to demonstrate the application of this
technique and its resulting advantages. In this work, the links of the quick
return mechanism are modeled as rigid bodies. The rigid links are then coupled
at the joints based on the nature of constraint. This alternative method of
formulation of system dynamics, using Bond Graphs, offers a rich set of
features that include pictorial representation of the dynamics of translation
and rotation for each link of the mechanism in the inertial frame,
representation and handling of constraints at the joints, depiction of
causality, obtaining dynamic reaction forces and moments at various locations
in the mechanism and so on. Yet another advantage of this approach is that the
coding for simulation can be carried out directly from the Bond Graph in an
algorithmic manner, without deriving system equations. In this work, the
program code for simulation is written in MATLAB. The vector and tensor
operations are conveniently represented in MATLAB, resulting in a compact and
optimized code. The simulation results are plotted and discussed in detail.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nauticle: a general-purpose particle-based simulation tool | Nauticle is a general-purpose simulation tool for the flexible and highly
configurable application of particle-based methods of either discrete or
continuum phenomena. It is presented that Nauticle has three distinct layers
for users and developers, then the top two layers are discussed in detail. The
paper introduces the Symbolic Form Language (SFL) of Nauticle, which
facilitates the formulation of user-defined numerical models at the top level
in text-based configuration files and provides simple application examples of
use. On the other hand, at the intermediate level, it is shown that the SFL can
be intuitively extended with new particle methods without tedious recoding or
even the knowledge of the bottom level. Finally, the efficiency of the code is
also tested through a performance benchmark.
| 1 | 1 | 0 | 0 | 0 | 0 |
Local and non-local energy spectra of superfluid $^3$He turbulence | Below the phase transition temperature $T_c \simeq 10^{-3}$ K, He-3B has a
mixture of normal and superfluid components. Turbulence in this material is
carried predominantly by the superfluid component. We explore the statistical
properties of this quantum turbulence, stressing the differences from the
better known classical counterpart. To this aim we study the time-honored
Hall-Vinen-Bekarevich-Khalatnikov coarse-grained equations of superfluid
turbulence. We combine pseudo-spectral direct numerical simulations with
analytic considerations based on an integral closure for the energy flux. We
avoid the assumption of locality of the energy transfer which was used
previously in both analytic and numerical studies of the superfluid He-3B
turbulence. For $T < 0.37\,T_c$, with relatively weak mutual friction, we
confirm the previously found "subcritical" energy spectrum $E(k)$, given by a
superposition of two power laws that can be approximated as $E(k) \sim k^{-x}$
with an apparent scaling exponent $5/3 < x(k) < 3$. For $T > 0.37\,T_c$ and
with strong mutual friction, we observed numerically and confirmed analytically
the scale-invariant spectrum $E(k) \sim k^{-x}$ with a ($k$-independent)
exponent $x > 3$ that gradually increases with the temperature and reaches a
value $x \simeq 9$ for $T \approx 0.72\,T_c$. In
the near-critical regimes we discover a strong enhancement of intermittency
which exceeds by an order of magnitude the corresponding level in classical
hydrodynamic turbulence.
| 0 | 1 | 0 | 0 | 0 | 0 |
Microwave SQUID Multiplexer demonstration for Cosmic Microwave Background Imagers | Key performance characteristics are demonstrated for the microwave SQUID
multiplexer ($\mu$MUX) coupled to transition edge sensor (TES) bolometers that
have been optimized for cosmic microwave background (CMB) observations. In a
64-channel demonstration, we show that the $\mu$MUX produces a white, input
referred current noise level of 29~pA$/\sqrt{\mathrm{Hz}}$ at -77~dB microwave
probe tone power, which is well below expected fundamental detector and photon
noise sources for a ground-based CMB-optimized bolometer. Operated with
negligible photon loading, we measure 98~pA$/\sqrt{\mathrm{Hz}}$ in the
TES-coupled channels biased at 65% of the sensor normal resistance. This noise
level is consistent with that predicted from bolometer thermal fluctuation
(i.e., phonon) noise. Furthermore, the power spectral density exhibits a white
spectrum at low frequencies ($\sim$~100~mHz), which enables CMB mapping on
large angular scales that constrain the physics of inflation. Additionally, we
report cross-talk measurements that indicate a level below 0.3%, which is less
than the level of cross-talk from multiplexed readout systems in deployed CMB
imagers. These measurements demonstrate the $\mu$MUX as a viable readout
technique for future CMB imaging instruments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Preserving Differential Privacy Between Features in Distributed Estimation | Privacy is crucial in many applications of machine learning. Legal, ethical
and societal issues restrict the sharing of sensitive data, making it difficult
to learn from datasets that are partitioned between many parties. One important
instance of such a distributed setting arises when information about each
record in the dataset is held by different data owners (the design matrix is
"vertically-partitioned").
In this setting few approaches exist for private data sharing for the
purposes of statistical estimation and the classical setup of differential
privacy with a "trusted curator" preparing the data does not apply. We work
with the notion of $(\epsilon,\delta)$-distributed differential privacy which
extends single-party differential privacy to the distributed,
vertically-partitioned case. We propose PriDE, a scalable framework for
distributed estimation where each party communicates perturbed random
projections of their locally held features ensuring
$(\epsilon,\delta)$-distributed differential privacy is preserved. For
$\ell_2$-penalized supervised learning problems PriDE has bounded estimation
error compared with the optimal estimates obtained without privacy constraints
in the non-distributed setting. We confirm this empirically on real world and
synthetic datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Parallel implementation of a vehicle rail dynamical model for multi-core systems | This research presents a model of a complex dynamic object running on a
multi-core system. Discretization and numerical integration for multibody
models of vehicle rail elements fluctuating in the vertical longitudinal plane
are considered. The implemented model and solution of the motion differential
equations allow estimating the basic processes occurring in the system with
various external influences. Hence the developed programming model can be used
for performing analysis and comparing new vehicle designs.
Keywords: dynamic model; multi-core system; SMP system; rolling stock.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards Neural Phrase-based Machine Translation | In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our
method explicitly models the phrase structures in output sequences using
Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence
modeling method. To mitigate the monotonic alignment requirement of SWAN, we
introduce a new layer to perform (soft) local reordering of input sequences.
Different from existing neural machine translation (NMT) approaches, NPMT does
not use attention-based decoding mechanisms. Instead, it directly outputs
phrases in a sequential order and can decode in linear time. Our experiments
show that NPMT achieves superior performance on IWSLT 2014
German-English/English-German and IWSLT 2015 English-Vietnamese machine
translation tasks compared with strong NMT baselines. We also observe that our
method produces meaningful phrases in output languages.
| 1 | 0 | 0 | 1 | 0 | 0 |
Towards Accurate Multi-person Pose Estimation in the Wild | We propose a method for multi-person detection and 2-D pose estimation that
achieves state-of-the-art results on the challenging COCO keypoints task. It is a
simple, yet powerful, top-down approach consisting of two stages.
In the first stage, we predict the location and scale of boxes which are
likely to contain people; for this we use the Faster RCNN detector. In the
second stage, we estimate the keypoints of the person potentially contained in
each proposed bounding box. For each keypoint type we predict dense heatmaps
and offsets using a fully convolutional ResNet. To combine these outputs we
introduce a novel aggregation procedure to obtain highly localized keypoint
predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression
(NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based
confidence score estimation, instead of box-level scoring.
Trained on COCO data alone, our final system achieves average precision of
0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming
the winner of the 2016 COCO keypoints challenge and other recent
state-of-the-art methods.
Further, by using additional in-house labeled data we obtain an even higher
average precision of 0.685 on the test-dev set and 0.673 on the test-standard
set, more than 5% absolute improvement compared to the previous best performing
method on the same dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Global band topology of simple and double Dirac-point (semi-)metals | We combine space group representation theory together with scanning of closed
subdomains of the Brillouin zone with Wilson loops to algebraically determine
global band structure topology. Considering space group #19 as a case study, we
show that the energy ordering of the irreducible representations at the
high-symmetry points $\{\Gamma,S,T,U\}$ fully determines the global band
topology, with all topological classes characterized through their simple and
double Dirac-points.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological Perspectives on Statistical Quantities I | In statistics cumulants are defined to be functions that measure the linear
independence of random variables. In the non-commutative case, the Boolean
cumulants can be described as functions that measure the deviation of a map
between algebras from being an algebra morphism. In algebraic topology, maps
that are homotopic to algebra morphisms are studied using the theory of
$A_\infty$ algebras. In this paper we explore the link between these two points
of view on maps between algebras that are not algebra maps.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Python Calculator for Supernova Remnant Evolution | A freely available Python code for modelling SNR evolution has been created.
This software is intended for two purposes: to understand SNR evolution; and to
use in modelling observations of SNR for obtaining good estimates of SNR
properties. It includes all phases for the standard path of evolution for
spherically symmetric SNRs. In addition, alternate evolutionary models are
available, including evolution in a cloudy ISM, the fractional energy loss
model, and evolution in a hot low-density ISM. The graphical interface takes in
various parameters and produces outputs such as shock radius and velocity vs.
time, SNR surface brightness profile and spectrum. Some interesting properties
of SNR evolution are demonstrated using the program.
| 0 | 1 | 0 | 0 | 0 | 0 |
Über die Präzision interprozeduraler Analysen | In this work, we examine two approaches to interprocedural data-flow analysis
of Sharir and Pnueli in terms of precision: the functional and the call-string
approach. In doing so, we consider not only the theoretically best solution but
all solutions that occur when abstract interpretation or widening is
additionally used. It turns out that the solutions of both approaches coincide. This
property is preserved when using abstract interpretation; in the case of
widening, a comparison of the results is not always possible.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gee-Haw Whammy Diddle | Gee-Haw Whammy Diddle is a seemingly simple mechanical toy consisting of a
wooden stick and a second stick that is made up of a series of notches with a
propeller at its end. When the wooden stick is pulled over the notches, the
propeller starts to rotate. In spite of its simplicity, physical principles
governing the motion of the stick and the propeller are rather complicated and
interesting. Here we provide a thorough analysis of the system and parameters
influencing the motion. We show that contrary to the results published on this
topic so far, neither elliptic motion of the stick nor frequency
synchronization is needed for starting the motion of the propeller.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cancellation theorem for Grothendieck-Witt-correspondences and Witt-correspondences | The cancellation theorem for Grothendieck-Witt-correspondences and
Witt-correspondences between smooth varieties over an infinite perfect field
$k$, $\operatorname{char} k \neq 2$, is proved; the isomorphism
$$Hom_{\mathbf{DM}^\mathrm{GW}_\mathrm{eff}}(A^\bullet,B^\bullet) \simeq
Hom_{\mathbf{DM}^\mathrm{GW}_\mathrm{eff}}(A^\bullet(1),B^\bullet(1)),$$ for
$A^\bullet,B^\bullet\in \mathbf{DM}^\mathrm{GW}_\mathrm{eff}(k)$ in the
category of effective Grothendieck-Witt-motives constructed in
\cite{AD_DMGWeff} is obtained (and similarly for Witt-motives).
This implies that the canonical functor $\Sigma_{\mathbb G_m^{\wedge
1}}^\infty\colon \mathbf{DM}^\mathrm{GW}_\mathrm{eff}(k)\to
\mathbf{DM}^\mathrm{GW}(k)$ is fully faithful, where
$\mathbf{DM}^\mathrm{GW}(k)$ is the category of non-effective GW-motives
(defined by stabilization of $\mathbf{DM}^\mathrm{GW}_\mathrm{eff}(k)$ along
$\mathbb G_m^{\wedge 1}$) and yields the main property of motives of smooth
varieties in the category $\mathbf{DM}^\mathrm{GW}(k)$: $$
Hom_{\mathbf{DM}^\mathrm{GW}(k)}(M^{GW}(X), \Sigma_{\mathbb G_m^{\wedge
1}}^\infty\mathcal F[i]) \simeq H^i_{Nis}(X,\mathcal F) ,$$ for any smooth
variety $X$ and any homotopy invariant sheaf with GW-transfers $\mathcal F$ (and
similarly for $\mathbf{DM}^\mathrm{W}(k)$).
| 0 | 0 | 1 | 0 | 0 | 0 |
Jumping across biomedical contexts using compressive data fusion | Motivation: The rapid growth of diverse biological data allows us to consider
interactions between a variety of objects, such as genes, chemicals, molecular
signatures, diseases, pathways and environmental exposures. Often, any pair of
objects--such as a gene and a disease--can be related in different ways, for
example, directly via gene-disease associations or indirectly via functional
annotations, chemicals and pathways. Different ways of relating these objects
carry different semantic meanings. However, traditional methods disregard these
semantics and thus cannot fully exploit their value in data modeling.
Results: We present Medusa, an approach to detect size-k modules of objects
that, taken together, appear most significant to another set of objects. Medusa
operates on large-scale collections of heterogeneous data sets and explicitly
distinguishes between diverse data semantics. It advances research along two
dimensions: it builds on collective matrix factorization to derive different
semantics, and it formulates the growing of the modules as a submodular
optimization program. Medusa is flexible in choosing or combining semantic
meanings and provides theoretical guarantees about detection quality. In a
systematic study on 310 complex diseases, we show the effectiveness of Medusa
in associating genes with diseases and detecting disease modules. We
demonstrate that in predicting gene-disease associations Medusa compares
favorably to methods that ignore diverse semantic meanings. We find that the
utility of different semantics depends on disease categories and that, overall,
Medusa recovers disease modules more accurately when combining different
semantics.
| 1 | 0 | 0 | 1 | 0 | 0 |
Memories of a Theoretical Physicist | While I was dealing with a brain injury and finding it difficult to work, two
friends (Derek Westen, a friend of the KITP, and Steve Shenker, with whom I was
recently collaborating), suggested that a new direction might be good. Steve in
particular regarded me as a good writer and suggested that I try that. I
quickly took to Steve's suggestion. Having only two bodies of knowledge, myself
and physics, I decided to write an autobiography about my development as a
theoretical physicist.
This is not written for any particular audience, but just to give myself a
goal. It will probably have too much physics for a nontechnical reader, and too
little for a physicist, but perhaps there will be different things for each.
Parts may be tedious. But it is somewhat unique, I think, a blow-by-blow
history of where I started and where I got to.
Probably the target audience is theoretical physicists, especially young
ones, who may enjoy comparing my struggles with their own. Some disclaimers:
This is based on my own memories, jogged by the arXiv and Inspire. There will
surely be errors and omissions. And note the title: this is about my memories,
which will be different for other people. Also, it would not be possible for me
to mention all the authors whose work might intersect mine, so this should not
be treated as a reference work.
| 0 | 1 | 0 | 0 | 0 | 0 |
Connecting the dots between mechanosensitive channel abundance, osmotic shock, and survival at single-cell resolution | Rapid changes in extracellular osmolarity are one of many insults microbial
cells face on a daily basis. To protect against such shocks, Escherichia coli
and other microbes express several types of transmembrane channels which open
and close in response to changes in membrane tension. In E. coli, one of the
most abundant channels is the mechanosensitive channel of large conductance
(MscL). While this channel has been heavily characterized through structural
methods, electrophysiology, and theoretical modeling, our understanding of its
physiological role in preventing cell death by alleviating high membrane
tension remains tenuous. In this work, we examine the contribution of MscL
alone to cell survival after osmotic shock at single cell resolution using
quantitative fluorescence microscopy. We conduct these experiments in an E.
coli strain which is lacking all mechanosensitive channel genes save for MscL
whose expression is tuned across three orders of magnitude through
modifications of the Shine-Dalgarno sequence. While theoretical models suggest
that only a few MscL channels would be needed to alleviate even large changes
in osmotic pressure, we find that between 500 and 700 channels per cell are
needed to convey upwards of 80% survival. This number agrees with the average
MscL copy number measured in wild-type E. coli cells through proteomic studies
and quantitative Western blotting. Furthermore, we observe zero survival events
in cells with less than 100 channels per cell. This work opens new questions
concerning the contribution of other mechanosensitive channels to survival as
well as regulation of their activity.
| 0 | 0 | 0 | 0 | 1 | 0 |
Optical Mapping Near-eye Three-dimensional Display with Correct Focus Cues | We present an optical mapping near-eye (OMNI) three-dimensional display
method for wearable devices. By dividing a display screen into different
sub-panels and optically mapping them to various depths, we create a multiplane
volumetric image with correct focus cues for depth perception. The resultant
system can drive the eye's accommodation to the distance that is consistent
with binocular stereopsis, thereby alleviating the vergence-accommodation
conflict, the primary cause for eye fatigue and discomfort. Compared with the
previous methods, the OMNI display offers prominent advantages in adaptability,
image dynamic range, and refresh rate.
| 1 | 1 | 0 | 0 | 0 | 0 |
Quantum anomalous Hall state from spatially decaying interactions on the decorated honeycomb lattice | Topological phases typically encode topology at the level of the single
particle band structure. But a remarkable class of models shows that quantum
anomalous Hall effects can be driven exclusively by interactions, while the
parent non-interacting band structure is topologically trivial. Unfortunately,
these models have so far relied on interactions that do not spatially decay and
are therefore unphysical. We study a model of spinless fermions on a decorated
honeycomb lattice. Using complementary methods, mean-field theory and exact
diagonalization, we find a robust quantum anomalous Hall phase arising from
spatially decaying interactions. Our finding paves the way for observing the
quantum anomalous Hall effect driven entirely by interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Poisson--Gamma Dynamical Systems | We introduce a new dynamical system for sequentially observed multivariate
count data. This model is based on the gamma--Poisson construction---a natural
choice for count data---and relies on a novel Bayesian nonparametric prior that
ties and shrinks the model parameters, thus avoiding overfitting. We present an
efficient MCMC inference algorithm that advances recent work on augmentation
schemes for inference in negative binomial models. Finally, we demonstrate the
model's inductive bias using a variety of real-world data sets, showing that it
exhibits superior predictive performance over other models and infers highly
interpretable latent structure.
| 1 | 0 | 0 | 1 | 0 | 0 |
Best-Choice Edge Grafting for Efficient Structure Learning of Markov Random Fields | Incremental methods for structure learning of pairwise Markov random fields
(MRFs), such as grafting, improve scalability by avoiding inference over the
entire feature space in each optimization step. Instead, inference is performed
over an incrementally grown active set of features. In this paper, we address
key computational bottlenecks from which current incremental techniques still
suffer by introducing best-choice edge grafting, an incremental, structured method
that activates edges as groups of features in a streaming setting. The method
uses a reservoir of edges that satisfy an activation condition, approximating
the search for the optimal edge to activate. It also reorganizes the search
space using search-history and structure heuristics. Experiments show a
significant speedup for structure learning and a controllable trade-off between
the speed and quality of learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cavity-enhanced transport of charge | We theoretically investigate charge transport through electronic bands of a
mesoscopic one-dimensional system, where inter-band transitions are coupled to
a confined cavity mode, initially prepared close to its vacuum. This coupling
leads to light-matter hybridization where the dressed fermionic bands interact
via absorption and emission of dressed cavity-photons. Using a self-consistent
non-equilibrium Green's function method, we compute electronic transmissions
and cavity photon spectra and demonstrate how light-matter coupling can lead to
an enhancement of charge conductivity in the steady-state. We find that
depending on cavity loss rate, electronic bandwidth, and coupling strength, the
dynamics involves either an individual or a collective response of Bloch
states, and explain how this affects the current enhancement. We show that the
charge conductivity enhancement can reach orders of magnitude under
experimentally relevant conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Isoparametric hypersurfaces in a Randers sphere of constant flag curvature | In this paper, I study the isoparametric hypersurfaces in a Randers sphere
$(S^n,F)$ of constant flag curvature, with the navigation datum $(h,W)$. I
prove that an isoparametric hypersurface $M$ for the standard round sphere
$(S^n,h)$ which is tangent to $W$ remains isoparametric for $(S^n,F)$ after the
navigation process. This observation provides a special class of isoparametric
hypersurfaces in $(S^n,F)$, which can be equivalently described as the regular
level sets of isoparametric functions $f$ such that $-f$ is transnormal. I
provide a classification for these special isoparametric hypersurfaces $M$,
together with their ambient metric $F$ on $S^n$, except the case that $M$ is of
the OT-FKM type with the multiplicities $(m_1,m_2)=(8,7)$. I also give a
complete classification of all homogeneous hypersurfaces in $(S^n,F)$. They
all belong to these special isoparametric hypersurfaces. Because of the extra
$W$, the number of distinct principal curvatures can only be 1, 2, or 4, i.e.,
there are fewer homogeneous hypersurfaces for $(S^n,F)$ than for
$(S^n,h)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bright-field microscopy of transparent objects: a ray tracing approach | Formation of a bright-field microscopic image of a transparent phase object
is described in terms of elementary geometrical optics. Our approach is based
on the premise that the image replicates the intensity distribution (real or
virtual) at the front focal plane of the objective. The task is therefore
reduced to finding the change in intensity at the focal plane caused by the
object. This can be done by ray tracing complemented with the requirement of
conservation of the number of rays. Despite major simplifications involved in
such an analysis, it reproduces some results from the paraxial wave theory.
Additionally, our analysis suggests two ways of extracting quantitative phase
information from bright-field images: by vertically shifting the focal plane
(the approach used in the transport-of-intensity analysis) or by varying the
angle of illumination. In principle, information thus obtained should allow
reconstruction of the object morphology.
| 0 | 1 | 0 | 0 | 0 | 0 |
Entropy Production Rate is Maximized in Non-Contractile Actomyosin | The actin cytoskeleton is an active semi-flexible polymer network whose
non-equilibrium properties coordinate both stable and contractile behaviors to
maintain or change cell shape. While myosin motors drive the actin cytoskeleton
out-of-equilibrium, the role of myosin-driven active stresses in the
accumulation and dissipation of mechanical energy is unclear. To investigate
this, we synthesize an actomyosin material in vitro whose active stress content
can tune the network from stable to contractile. Each increment in activity
determines a characteristic spectrum of actin filament fluctuations which is
used to calculate the total mechanical work and the production of entropy in
the material. We find that the balance of work and entropy does not increase
monotonically and, surprisingly, the entropy production rate is maximized in
the non-contractile, stable state. Our study provides evidence that the origins
of system entropy production and activity-dependent dissipation arise from
disorder in the molecular interactions between actin and myosin
| 0 | 0 | 0 | 0 | 1 | 0 |
Intrinsic resolving power of XUV diffraction gratings measured with Fizeau interferometry | We introduce a method for using Fizeau interferometry to measure the
intrinsic resolving power of a diffraction grating. This method is more
accurate than traditional techniques based on a long-trace profiler (LTP),
since it is sensitive to long-distance phase errors not revealed by a d-spacing
map. We demonstrate 50,400 resolving power for a mechanically ruled XUV grating
from Inprentus, Inc.
| 0 | 1 | 0 | 0 | 0 | 0 |
When do we have the power to detect biological interactions in spatial point patterns? | Determining the relative importance of environmental factors, biotic
interactions and stochasticity in assembling and maintaining species-rich
communities remains a major challenge in ecology. In plant communities,
interactions between individuals of different species are expected to leave a
spatial signature in the form of positive or negative spatial correlations over
distances relating to the spatial scale of interaction. Most studies using
spatial point process tools have found relatively little evidence for
interactions between pairs of species. More interactions tend to be detected in
communities with fewer species. However, there is currently no understanding of
how the power to detect spatial interactions may change with sample size, or
the scale and intensity of interactions.
We use a simple 2-species model where the scale and intensity of interactions
are controlled to simulate point pattern data. In combination with an
approximation to the variance of the spatial summary statistics that we sample,
we investigate the power of current spatial point pattern methods to correctly
reject the null model of bivariate species independence.
We show that the power to detect interactions is positively related to the
abundances of the species tested, and the intensity and scale of interactions.
Increasing imbalance in abundances has a negative effect on the power to detect
interactions. At population sizes typically found in currently available
datasets for species-rich plant communities we find only a very low power to
detect interactions. Differences in power may explain the increased frequency
of interactions in communities with fewer species. Furthermore, the
community-wide frequency of detected interactions is very sensitive to a
minimum abundance criterion for including species in the analyses.
| 0 | 0 | 0 | 0 | 1 | 0 |
Spectral Algorithms for Computing Fair Support Vector Machines | Classifiers and rating scores are prone to implicitly codifying biases, which
may be present in the training data, against protected classes (i.e., age,
gender, or race). So it is important to understand how to design classifiers
and scores that prevent discrimination in predictions. This paper develops
computationally tractable algorithms for designing accurate but fair support
vector machines (SVM's). Our approach imposes a constraint on the covariance
matrices conditioned on each protected class, which leads to a nonconvex
quadratic constraint in the SVM formulation. We develop iterative algorithms to
compute fair linear and kernel SVM's, which solve a sequence of relaxations
constructed using a spectral decomposition of the nonconvex constraint. Its
effectiveness in achieving high prediction accuracy while ensuring fairness is
shown through numerical experiments on several data sets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Random matrix approach for primal-dual portfolio optimization problems | In this paper, we revisit the portfolio optimization problems of the
minimization/maximization of investment risk under constraints of budget and
investment concentration (primal problem) and the maximization/minimization of
investment concentration under constraints of budget and investment risk (dual
problem) for the case that the variances of the return rates of the assets are
identical. We analyze both optimization problems by using the Lagrange
multiplier method and the random matrix approach. Thereafter, we compare the
results obtained from our proposed approach with the results obtained in
previous work. Moreover, we use numerical experiments to validate the results
obtained from the replica approach and the random matrix approach as methods
for analyzing both the primal and dual portfolio optimization problems.
| 1 | 1 | 0 | 0 | 0 | 0 |
On a method for constructing the Lax pairs for integrable models via quadratic ansatz | A method for constructing the Lax pairs for nonlinear integrable models is
suggested. First we look for a nonlinear invariant manifold to the
linearization of the given equation. Examples show that such an invariant manifold
does exist and can effectively be found. Actually it is defined by a quadratic
form. As a result we get a nonlinear Lax pair consisting of the linearized
equation and the invariant manifold. Our second step consists of finding an
appropriate change of the variables to linearize the found nonlinear Lax pair.
The desired change of the variables is again defined by a quadratic form. The
method is illustrated by the well-known KdV equation and the modified Volterra
chain. New Lax pairs are found. The formal asymptotic expansions for their
eigenfunctions are constructed around the singular values of the spectral
parameter. By applying the method of the formal diagonalization to these Lax
pairs the infinite series of the local conservation laws are obtained for the
corresponding nonlinear models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Insight into the modeling of seismic waves for detection of underground cavities | Motivated by the need to detect an underground cavity within the procedure of
an On-Site-Inspection (OSI), of the Comprehensive Nuclear Test Ban Treaty
Organization, the aim of this paper is to present results on the comparison of
our numerical simulations with an analytic solution. The accurate numerical
modeling can facilitate the development of proper analysis techniques to detect
the remnants of an underground nuclear test. The larger goal is to help set a
rigorous scientific base of OSI and to contribute to bringing the Treaty into
force. For our 3D numerical simulations, we use the discontinuous Galerkin
Spectral Element Code SPEED jointly developed at MOX (The Laboratory for
Modeling and Scientific Computing, Department of Mathematics) and at DICA
(Department of Civil and Environmental Engineering) of the Politecnico di
Milano.
| 0 | 1 | 0 | 0 | 0 | 0 |
Uncorrelated far AGN flaring with their delayed UHECRs events | The most distant AGN, within the allowed GZK cut-off radius, have been
recently candidate by many authors as the best location for observed UHECR
origination. Indeed, the apparent homogeneity and isotropy of recent UHECR
signals seem to require a distant, isotropic, and homogeneous cosmic scenario
involving a proton UHECR courier: our galaxy, the nearest local group, and the
supergalactic plane (ruled by the Virgo cluster) are too nearby and apparently
too anisotropic, in disagreement with the almost homogeneous PAO and TA sample
Hot Spots, are smeared in wide solid angles. Their consequent random walk
flight from most far GZK UHECR sources, nearly at 100 Mpc, must be delayed
(with respect to a straight AGN photon gamma flaring arrival trajectory) at
least by a million years. During this time, the AGN jet blazing signal, its
probable axis deflection (such as the helical jet in Mrk501), its miss
alignment or even its almost certain exhaust activity may lead to a complete
misleading correlation between present UHECR events and a much earlier active
AGN ejection. UHECR maps maybe anyway related to galactic or nearest (Cen A,
M82) AGN extragalactic UHECR sources shining in twin Hot Spot. Therefore we
defend our (quite different) scenarios where UHECR are mostly made by lightest
UHECR nuclei originated by nearby AGN sources, or few galactic sources, whose
delayed signals reach us within few thousand years in the observed smeared sky
areas.
| 0 | 1 | 0 | 0 | 0 | 0 |
How to centralize and normalize quandle extensions | We show that quandle coverings in the sense of Eisermann form a (regular
epi)-reflective subcategory of the category of surjective quandle
homomorphisms, both by using arguments coming from categorical Galois theory
and by constructing concretely a centralization congruence. Moreover, we show
that a similar result holds for normal quandle extensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Geometric Fluctuation Theorem | We derive an extended fluctuation theorem for a geometric pumping in a
spin-boson system under a periodic control of environmental temperatures by
using a Markovian quantum master equation. We perform the Monte-Carlo
simulation and obtain the current distribution, the average current and the
fluctuation. Using the extended fluctuation theorem we try to explain the
results of our simulation. The fluctuation theorem leads to the
fluctuation-dissipation relations but not to the conventional reciprocal relation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Opinion Dynamics via Search Engines (and other Algorithmic Gatekeepers) | Ranking algorithms are the information gatekeepers of the Internet era. We
develop a stylized model to study the effects of ranking algorithms on opinion
dynamics. We consider a search engine that uses an algorithm based on
popularity and on personalization. We find that popularity-based rankings
generate an advantage of the fewer effect: fewer websites reporting a given
signal attract relatively more traffic overall. This highlights a novel,
ranking-driven channel that explains the diffusion of misinformation, as
websites reporting incorrect information may attract an amplified amount of
traffic precisely because they are few. Furthermore, when individuals provide
sufficiently positive feedback to the ranking algorithm, popularity-based
rankings tend to aggregate information while personalization acts in the
opposite direction.
| 1 | 0 | 0 | 0 | 0 | 1 |
Exploring home robot capabilities by medium fidelity prototyping | In order for autonomous robots to be able to support people's well-being in
homes and everyday environments, new interactive capabilities will be required,
as exemplified by the soft design used for Disney's recent robot character
Baymax in popular fiction. Home robots will be required to be easy to interact
with and intelligent--adaptive, fun, unobtrusive and involving little effort to
power and maintain--and capable of carrying out useful tasks both on an
everyday level and during emergencies. The current article adopts an
exploratory medium fidelity prototyping approach for testing some new robotic
capabilities in regard to recognizing people's activities and intentions and
behaving in a way which is transparent to people. Results are discussed with
the aim of informing next designs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Unsupervised Domain Adaptation Based on Source-guided Discrepancy | Unsupervised domain adaptation is the problem setting where data generating
distributions in the source and target domains are different, and labels in the
target domain are unavailable. One important question in unsupervised domain
adaptation is how to measure the difference between the source and target
domains. A previously proposed discrepancy that does not use the source domain
labels requires high computational cost to estimate and may lead to a loose
generalization error bound in the target domain. To mitigate these problems, we
propose a novel discrepancy called source-guided discrepancy (S-disc), which
exploits labels in the source domain. As a consequence, S-disc can be computed
efficiently with a finite sample convergence guarantee. In addition, we show
that S-disc can provide a tighter generalization error bound than the one based
on an existing discrepancy. Finally, we report experimental results that
demonstrate the advantages of S-disc over the existing discrepancies.
| 0 | 0 | 0 | 1 | 0 | 0 |
On structured surfaces with defects: geometry, strain incompatibility, internal stress, and natural shapes | Given a distribution of defects on a structured surface, such as those
represented by 2-dimensional crystalline materials, liquid crystalline
surfaces, and thin sandwiched shells, what is the resulting stress field and
the deformed shape? Motivated by this concern, we first classify, and quantify,
the translational, rotational, and metrical defects allowable over a broad
class of structured surfaces. With an appropriate notion of strain, the defect
densities are then shown to appear as sources of strain incompatibility. The
strain incompatibility relations, with appropriate kinematical assumptions on
the decomposition of strain into elastic and plastic parts, and the stress
equilibrium relations, with a suitable choice of material response, provide the
necessary equations for determining both the internal stress field and the
deformed shape. We demonstrate this by applying our theory to Kirchhoff-Love
shells with a kinematics which allows for small in-surface strains but
moderately large rotations.
| 0 | 1 | 1 | 0 | 0 | 0 |
Management system for the SND experiments | A new management system for the SND detector experiments (at the VEPP-2000
collider in Novosibirsk) has been developed. We describe here the interaction
between a user and the SND databases, which contain the experiment
configuration, conditions, and metadata. The new system is designed in a
client-server architecture and has several logical layers corresponding to the
users' roles. A new template engine has been created, and a web application has
been implemented using the Node.js framework. Currently, the application
provides: viewing and editing of the configuration; viewing of experiment
metadata and of the experiment conditions data index; and viewing of the SND
log (prototype).
| 1 | 1 | 0 | 0 | 0 | 0 |
Fatiguing STDP: Learning from Spike-Timing Codes in the Presence of Rate Codes | Spiking neural networks (SNNs) could play a key role in unsupervised machine
learning applications, by virtue of strengths related to learning from the fine
temporal structure of event-based signals. However, some spike-timing-related
strengths of SNNs are hindered by the sensitivity of spike-timing-dependent
plasticity (STDP) rules to input spike rates, as fine temporal correlations may
be obstructed by coarser correlations between firing rates. In this article, we
propose a spike-timing-dependent learning rule that allows a neuron to learn
from the temporally-coded information despite the presence of rate codes. Our
long-term plasticity rule makes use of short-term synaptic fatigue dynamics. We
show analytically that, in contrast to conventional STDP rules, our fatiguing
STDP (FSTDP) helps learn the temporal code, and we derive the necessary
conditions to optimize the learning process. We showcase the effectiveness of
FSTDP in learning spike-timing correlations among processes of different rates
in synthetic data. Finally, we use FSTDP to detect correlations in real-world
weather data from the United States in an experimental realization of the
algorithm that uses a neuromorphic hardware platform comprising phase-change
memristive devices. Taken together, our analyses and demonstrations suggest
that FSTDP paves the way for the exploitation of the spike-based strengths of
SNNs in real-world applications.
| 1 | 0 | 0 | 1 | 0 | 0 |
Kernel-Based Learning for Smart Inverter Control | Distribution grids are currently challenged by frequent voltage excursions
induced by intermittent solar generation. Smart inverters have been advocated
as a fast-responding means to regulate voltage and minimize ohmic losses. Since
optimal inverter coordination may be computationally challenging and preset
local control rules are subpar, customized control rules designed in a
quasi-static fashion offer a golden middle ground. Departing from
affine control rules, this work puts forth non-linear inverter control
policies. Drawing analogies to multi-task learning, reactive control is posed
as a kernel-based regression task. Leveraging a linearized grid model and given
anticipated data scenarios, inverter rules are jointly designed at the feeder
level to minimize a convex combination of voltage deviations and ohmic losses
via a linearly-constrained quadratic program. Numerical tests using real-world
data on a benchmark feeder demonstrate that nonlinear control rules driven also
by a few non-local readings can attain near-optimal performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
Temperature fluctuations in a changing climate: an ensemble-based experimental approach | There is an ongoing debate in the literature about whether the present global
warming is increasing local and global temperature variability. The central
methodological issues of this debate relate to the proper treatment of
normalised temperature anomalies and trends in the studied time series, which
may be difficult to separate from time-evolving fluctuations. Some argue that
temperature variability is indeed increasing globally, whereas others conclude
it is decreasing or remains practically unchanged. Meanwhile, a consensus
appears to emerge that local variability in certain regions (e.g. Western
Europe and North America) has indeed been increasing in the past 40 years. Here
we investigate the nature of connections between external forcing and climate
variability conceptually by using a laboratory-scale minimal model of
mid-latitude atmospheric thermal convection subject to continuously decreasing
`equator-to-pole' temperature contrast, mimicking climate change. The analysis
of temperature records from an ensemble of experimental runs (`realisations')
all driven by identical time-dependent external forcing reveals that the
collective variability of the ensemble and that of individual realisations may
be markedly different -- a property to be considered when interpreting climate
records.
| 0 | 1 | 0 | 0 | 0 | 0 |
Matrix factorizations for quantum complete intersections | We introduce twisted matrix factorizations for quantum complete intersections
of codimension two. For such an algebra, we show that in a given dimension,
almost all the indecomposable modules with bounded minimal projective
resolutions correspond to such matrix factorizations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Coresets for Vector Summarization with Applications to Network Graphs | We provide a deterministic data summarization algorithm that approximates the
mean $\bar{p}=\frac{1}{n}\sum_{p\in P} p$ of a set $P$ of $n$ vectors in
$\mathbb{R}^d$ by a weighted mean $\tilde{p}$ of a \emph{subset} of
$O(1/\epsilon)$ vectors, i.e., of size independent of both $n$ and $d$. We
prove that the squared Euclidean distance between $\bar{p}$ and $\tilde{p}$ is
at most $\epsilon$
multiplied by the variance of $P$. We use this algorithm to maintain an
approximated sum of vectors from an unbounded stream, using memory that is
independent of $d$, and logarithmic in the $n$ vectors seen so far. Our main
application is to extract and represent in a compact way friend groups and
activity summaries of users from underlying data exchanges. For example, in
the case of mobile networks we can use GPS traces to identify meetings, and in
the case of social networks we can use information exchange to identify friend
groups. Our algorithm provably identifies the \emph{Heavy Hitter} entries in a
proximity (adjacency) matrix; these Heavy Hitters then yield the compact
representation of friend groups and activity summaries. We evaluate the
algorithm on several large data sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Two-dimensional matter-wave solitons and vortices in competing cubic-quintic nonlinear lattices | The nonlinear lattice---a new and nonlinear class of periodic
potentials---was recently introduced to generate various nonlinear localized
modes. Several attempts failed to stabilize two-dimensional (2D) solitons
against their intrinsic critical collapse in Kerr media. Here, we provide a
possibility for supporting 2D matter-wave solitons and vortices in an extended
setting---the cubic-quintic model---by introducing, to its quintic
nonlinearity, another nonlinear lattice whose period is controllable and can
differ from that of its cubic counterpart, thereby making a fully `nonlinear
quasi-crystal'.
A variational approximation based on a Gaussian ansatz is developed for the
fundamental solitons; in particular, their stability exactly follows the
inverted \textit{Vakhitov-Kolokolov} stability criterion, whereas the vortex
solitons are only studied by means of numerical methods. Stability regions for
two types of localized mode---the fundamental and vortex solitons---are
provided. A noteworthy feature of the localized solutions is that the vortex
solitons are stable only when the period of the quintic nonlinear lattice is
the same as the cubic one or when the quintic nonlinearity is constant, while
the stable fundamental solitons can be created under looser conditions. Our
physical setting (cubic-quintic model) is in the framework of the
Gross-Pitaevskii equation (GPE), or nonlinear Schrödinger equation; the
predicted localized modes may thus be implemented in Bose-Einstein condensates
and nonlinear optical media with tunable cubic and quintic nonlinearities.
| 0 | 1 | 0 | 0 | 0 | 0 |
RELink: A Research Framework and Test Collection for Entity-Relationship Retrieval | Improvements of entity-relationship (E-R) search techniques have been
hampered by a lack of test collections, particularly for complex queries
involving multiple entities and relationships. In this paper we describe a
method for generating E-R test queries to support comprehensive E-R search
experiments. Queries and relevance judgments are created from content that
exists in a tabular form where columns represent entity types and the table
structure implies one or more relationships among the entities. Editorial work
involves creating natural language queries based on relationships represented
by the entries in the table. We have publicly released the RELink test
collection comprising 600 queries and relevance judgments obtained from a
sample of Wikipedia List-of-lists-of-lists tables. The latter comprise tuples
of entities that are extracted from columns and labelled by corresponding
entity types and relationships they represent. In order to facilitate research
in complex E-R retrieval, we have created and released as open source the
RELink Framework that includes Apache Lucene indexing and search specifically
tailored to E-R retrieval. RELink includes entity and relationship indexing
based on the ClueWeb-09-B Web collection with FACC1 text span annotations
linked to Wikipedia entities. With ready-to-use search resources and a
comprehensive test collection, we support the community in pursuing E-R research at
scale.
| 1 | 0 | 0 | 0 | 0 | 0 |
NimbRo-OP2X: Adult-sized Open-source 3D Printed Humanoid Robot | Humanoid robotics research depends on capable robot platforms, but recently
developed advanced platforms are often not available to other research groups,
expensive, dangerous to operate, or closed-source. The lack of available
platforms forces researchers to work with smaller robots, which have less
strict dynamic constraints, or with simulations, which lack many real-world
effects. We developed NimbRo-OP2X to address this need. At a height of 135 cm
our robot is large enough to interact in a human environment. Its low weight of
only 19 kg makes the operation of the robot safe and easy, as no special
operational equipment is necessary. Our robot is equipped with a fast onboard
computer and a GPU to accelerate parallel computations. We extend our already
open-source software by a deep-learning based vision system and gait parameter
optimisation. The NimbRo-OP2X was evaluated during RoboCup 2018 in Montréal,
Canada, where it won all possible awards in the Humanoid AdultSize class.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optical bandgap engineering in nonlinear silicon nitride waveguides | Silicon nitride is a well-established material for photonic devices and
integrated circuits. It displays a broad transparency window spanning from the
visible to the mid-IR and waveguides can be manufactured with low losses. An
absence of nonlinear multi-photon absorption in the erbium lightwave
communications band has enabled various nonlinear optic applications in the
past decade. Silicon nitride is a dielectric material whose optical and
mechanical properties strongly depend on the deposition conditions. In
particular, the optical bandgap can be modified with the gas flow ratio during
low-pressure chemical vapor deposition (LPCVD). Here we show that this
parameter can be controlled in a highly reproducible manner, providing an
approach to synthesize the nonlinear Kerr coefficient of the material. This
holistic empirical study provides relevant guidelines to optimize the
properties of LPCVD silicon nitride waveguides for nonlinear optics
applications that rely on the Kerr effect.
| 0 | 1 | 0 | 0 | 0 | 0 |
Approximate Steepest Coordinate Descent | We propose a new selection rule for the coordinate selection in coordinate
descent methods for huge-scale optimization. The efficiency of this novel
scheme is provably better than the efficiency of uniformly random selection,
and can reach the efficiency of steepest coordinate descent (SCD), enabling an
acceleration of a factor of up to $n$, the number of coordinates. In many
practical applications, our scheme can be implemented at no extra cost, with
computational efficiency very close to that of the cheaper uniform selection. Numerical
experiments with Lasso and Ridge regression show promising improvements, in
line with our theoretical guarantees.
| 1 | 0 | 1 | 0 | 0 | 0 |
How Robust are Deep Neural Networks? | Convolutional and recurrent deep neural networks have been successful in
machine learning systems for computer vision, reinforcement learning, and other
allied fields. However, the robustness of such neural networks is seldom
appraised, especially after high classification accuracy has been attained. In
this paper, we evaluate the robustness of three recurrent neural networks to
tiny perturbations, on three widely used datasets, to argue that high accuracy
does not always imply a stable and robust (to bounded perturbations,
adversarial attacks, etc.) system. In particular, normalizing the spectrum of
the discrete recurrent network (using the power method, Rayleigh quotient,
etc.) to bound it within the unit disk produces stable, albeit highly
non-robust, neural networks. Furthermore, using the $\epsilon$-pseudo-spectrum,
we show that training recurrent networks, say with gradient-based methods,
often results in non-normal matrices that may or may not be diagonalizable.
Therefore, the open problem lies in constructing methods that optimize not only
for accuracy but also for the stability and the robustness of the underlying
neural network, criteria distinct from accuracy itself.
| 0 | 0 | 0 | 1 | 0 | 0 |
Learning Latent Representations for Speech Generation and Transformation | An ability to model a generative process and learn a latent representation
for speech in an unsupervised fashion will be crucial to process vast
quantities of unlabelled speech data. Recently, deep probabilistic generative
models such as Variational Autoencoders (VAEs) have achieved tremendous success
in modeling natural images. In this paper, we apply a convolutional VAE to
model the generative process of natural speech. We derive latent space
arithmetic operations to disentangle learned latent representations. We
demonstrate the capability of our model to modify the phonetic content or the
speaker identity for speech segments using the derived operations, without the
need for parallel supervisory data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Approximate Profile Maximum Likelihood | We propose an efficient algorithm for approximate computation of the profile
maximum likelihood (PML), a variant of maximum likelihood maximizing the
probability of observing a sufficient statistic rather than the empirical
sample. The PML has appealing theoretical properties, but is difficult to
compute exactly. Inspired by observations gleaned from exactly solvable cases,
we look for an approximate PML solution, which, intuitively, clumps comparably
frequent symbols into one symbol. This amounts to lower-bounding a certain
matrix permanent by summing over a subgroup of the symmetric group rather than
the whole group during the computation. We extensively experiment with the
approximate solution, and find the empirical performance of our approach is
competitive and sometimes significantly better than state-of-the-art
performance for various estimation problems.
| 1 | 0 | 0 | 1 | 0 | 0 |