title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Blocks with the hyperfocal subgroup $Z_{2^n}\times Z_{2^n}$ | In this paper, we calculate the numbers of irreducible ordinary characters
and irreducible Brauer characters in a block of a finite group $G$, whose
associated fusion system over a 2-subgroup $P$ of $G$ (which is a defect group
of the block) has the hyperfocal subgroup $\mathbb Z_{2^n}\times \mathbb
Z_{2^n}$ for some $n\geq 2$, when the block is controlled by the normalizer
$N_G(P)$ and the hyperfocal subgroup is contained in the center of $P$, or when
the block is not controlled by $N_G(P)$ and the hyperfocal subgroup is
contained in the center of the unique essential subgroup in the fusion system.
In particular, Alperin's weight conjecture holds in the considered cases.
| 0 | 0 | 1 | 0 | 0 | 0 |
Asymptotic Analysis via Stochastic Differential Equations of Gradient Descent Algorithms in Statistical and Computational Paradigms | This paper investigates asymptotic behaviors of gradient descent algorithms
(particularly accelerated gradient descent and stochastic gradient descent) in
the context of stochastic optimization arising in statistics and machine learning
where objective functions are estimated from available data. We show that these
algorithms can be modeled by continuous-time ordinary or stochastic
differential equations, and their asymptotic dynamic evolutions and
distributions are governed by some linear ordinary or stochastic differential
equations, as the data size goes to infinity. We illustrate that our study can
provide a novel unified framework for a joint computational and statistical
asymptotic analysis of the dynamic behaviors of these algorithms with time (or
the number of iterations in the algorithms) and large sample behaviors of the
statistical decision rules (like estimators and classifiers) that the
algorithms are applied to compute, where the statistical decision rules are the
limits of the random sequences generated from these iterative algorithms as the
number of iterations goes to infinity. The analysis results may shed light on
the empirically observed phenomenon of escaping from saddle points, avoiding
bad local minimizers, and converging to good local minimizers, which depends on
local geometry, learning rate and batch size, when stochastic gradient descent
algorithms are applied to solve non-convex optimization problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
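The abstract above models gradient descent algorithms by continuous-time differential equations. As a minimal illustration of that correspondence (not the paper's actual analysis), the sketch below compares discrete gradient descent on the quadratic $f(x)=x^2/2$ with its gradient-flow limit $dx/dt=-x$, whose solution is $x(t)=x_0 e^{-t}$; the function name and the step size/iteration count are hypothetical choices.

```python
import math

def gradient_descent(x0, eta, steps):
    """Plain gradient descent on f(x) = x^2 / 2, whose gradient is f'(x) = x."""
    x = x0
    for _ in range(steps):
        x -= eta * x
    return x

# Continuous-time limit: dx/dt = -x, i.e. x(t) = x0 * exp(-t),
# where t corresponds to (step size) * (number of iterations).
x0, eta, steps = 1.0, 0.001, 5000
t = eta * steps
discrete = gradient_descent(x0, eta, steps)
continuous = x0 * math.exp(-t)
assert abs(discrete - continuous) < 1e-3  # the trajectories nearly coincide
```

Shrinking the step size (while holding `eta * steps` fixed) drives the discrete iterates toward the ODE solution, which is the sense in which the continuous-time model governs the algorithm's dynamics.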
Pigeonring: A Principle for Faster Thresholded Similarity Search | The pigeonhole principle states that if n items are contained in m boxes,
then at least one box has no more than n/m items. It is utilized to solve many
data management problems, especially for thresholded similarity searches.
Despite many pigeonhole principle-based solutions proposed in the last few
decades, the condition stated by the principle is weak. It only constrains the
number of items in a single box. By organizing the boxes in a ring, we propose
a new principle, called the pigeonring principle, which constrains the number
of items in multiple boxes and yields stronger conditions.
To utilize the new principle, we focus on problems defined in the form of
identifying data objects whose similarities or distances to the query are
constrained by a threshold. Many solutions to these problems utilize the
pigeonhole principle to find candidates that satisfy a filtering condition. By
the new principle, stronger filtering conditions can be established. We show
that the pigeonhole principle is a special case of the new principle. This
suggests that all the pigeonhole principle-based solutions can potentially be
accelerated by the new principle. A universal filtering framework is introduced
to encompass the solutions to these problems based on the new principle.
Besides, we discuss how to quickly find candidates specified by the new
principle. The implementation requires only minor modifications on top of
existing pigeonhole principle-based algorithms. Experimental results on real
datasets demonstrate the applicability of the new principle as well as the
superior performance of the algorithms based on the new principle.
| 1 | 0 | 0 | 0 | 0 | 0 |
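For context, the baseline pigeonhole filter that the abstract generalizes can be sketched as follows: to find strings within Hamming distance τ of a query, split the query into τ+1 chunks; any true match must agree with the query exactly on at least one chunk. This is an illustrative sketch of the classic principle, not the paper's pigeonring algorithm, and all function names are hypothetical.

```python
def chunks(s, m):
    """Split string s into m nearly equal chunks, keeping their start positions."""
    k, r = divmod(len(s), m)
    out, i = [], 0
    for j in range(m):
        size = k + (1 if j < r else 0)
        out.append((i, s[i:i + size]))
        i += size
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def candidates(query, data, tau):
    """Pigeonhole filter: if hamming(query, s) <= tau, then s must equal
    the query on at least one of tau + 1 aligned chunks (tau + 1 boxes
    cannot all contain a mismatch when there are only tau mismatches)."""
    qc = chunks(query, tau + 1)
    return [s for s in data
            if any(s[i:i + len(c)] == c for i, c in qc)]

data = ["0000", "0011", "1111", "0001"]
out = candidates("0000", data, 1)
# no string within distance 1 is ever filtered out (false positives may remain)
assert all(s in out for s in data if hamming("0000", s) <= 1)
```

The filter is lossless but weak ("0011" survives despite distance 2); the pigeonring idea of chaining conditions over multiple boxes tightens exactly this kind of filter.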
Circle compactification and 't Hooft anomaly | Anomaly matching constrains low-energy physics of strongly-coupled field
theories, but it is not useful at finite temperature due to contamination from
high-energy states. The known exception is an 't Hooft anomaly involving
one-form symmetries as in pure $SU(N)$ Yang-Mills theory at $\theta=\pi$.
Recent developments in large-$N$ volume independence, however, give us
circumstantial evidence that 't Hooft anomalies can also remain under circle
compactifications in some theories without one-form symmetries. We develop a
systematic procedure for deriving an 't Hooft anomaly of the
circle-compactified theory starting from the anomaly of the original
uncompactified theory without one-form symmetries, where the twisted boundary
condition for the compactified direction plays a pivotal role. As an
application, we consider $\mathbb{Z}_N$-twisted $\mathbb{C}P^{N-1}$ sigma model
and massless $\mathbb{Z}_N$-QCD, and compute their anomalies explicitly.
| 0 | 1 | 0 | 0 | 0 | 0 |
High-speed 100 MHz strain monitor using fiber Bragg grating and optical filter for magnetostriction measurements under ultrahigh magnetic fields | A high-speed 100 MHz strain monitor using a fiber Bragg grating, an optical
filter, and a mode-locked optical fiber laser has been devised, which has a
resolution of $\Delta L/L\sim10^{-4}$. The strain monitor is sufficiently fast
and robust for the magnetostriction measurements of magnetic materials under
ultrahigh magnetic fields generated with destructive pulse magnets, where the
sweep rate is in the range of 10-100 T/$\mu$s. As a working example, the
magnetostriction of LaCoO$_{3}$ was measured at room temperature, 115 K, and
7$\sim$4.2 K up to a maximum magnetic field of 150 T. The smooth $B^{2}$
dependence and the first-order transition were observed at 115 K and 7$\sim$4.2
K, respectively, reflecting the field-induced spin-state evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Functional limit laws for the increments of Lévy processes | We present a functional form of the Erdős–Rényi law of large numbers for
Lévy processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
R$^3$PUF: A Highly Reliable Memristive Device based Reconfigurable PUF | We present a memristive device based R$^3$PUF construction achieving highly
desired PUF properties that are not offered by most current PUF designs: (1)
high reliability, almost 100\%, which is crucial for PUF-based cryptographic key
generation, significantly reducing or even eliminating the expensive overhead
of on-chip error correction logic and the associated helper-data storage and
transfer, whether on-chip or off-chip; (2) reconfigurability, an attractive
property that current PUF designs rarely exhibit. We validate our R$^3$PUF
via extensive Monte-Carlo simulations in Cadence based on parameters of
real devices. The R$^3$PUF is simple, cost-effective, and easy to manage
compared to other PUF constructions exhibiting high reliability or
reconfigurability. No previous PUF construction provides both high
reliability and reconfigurability concurrently.
| 1 | 0 | 0 | 0 | 0 | 0 |
FML-based Dynamic Assessment Agent for Human-Machine Cooperative System on Game of Go | In this paper, we demonstrate the application of Fuzzy Markup Language (FML)
to construct an FML-based Dynamic Assessment Agent (FDAA), and we present an
FML-based Human-Machine Cooperative System (FHMCS) for the game of Go. The
proposed FDAA comprises an intelligent decision-making and learning mechanism,
an intelligent game bot, a proximal development agent, and an intelligent
agent. The intelligent game bot is based on the open-source code of Facebook
Darkforest, and it features a representational state transfer application
programming interface mechanism. The proximal development agent contains a
dynamic assessment mechanism, a GoSocket mechanism, and an FML engine with a
fuzzy knowledge base and rule base. The intelligent agent contains a GoSocket
engine and a summarization agent that is based on the estimated win rate,
real-time simulation number, and matching degree of predicted moves.
Additionally, the FML for player performance evaluation and linguistic
descriptions for game results commentary are presented. We experimentally
verify and validate the performance of the FDAA and variants of the FHMCS by
testing five games in 2016 and 60 games of Google Master Go, a new version of
the AlphaGo program, in January 2017. The experimental results demonstrate that
the proposed FDAA can work effectively for Go applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
ALMA reveals starburst-like interstellar medium conditions in a compact star-forming galaxy at z ~ 2 using [CI] and CO | We present ALMA detections of the [CI] 1-0, CO J=3-2, and CO J=4-3 emission
lines, as well as the ALMA band 4 continuum for a compact star-forming galaxy
(cSFG) at z=2.225, 3D-HST GS30274. As is typical for cSFGs, this galaxy has a
stellar mass of $1.89 \pm 0.47\,\times 10^{11}\,\rm{M}_\odot$, with a star
formation rate of $214\pm44\,\rm{M}_\odot\,\rm{yr}^{-1}$ putting it on the
star-forming `main-sequence', but with an H-band effective radius of 2.5 kpc,
making it much smaller than the bulk of `main-sequence' star-forming galaxies.
The intensity ratios of the line detections yield an ISM density (~ 6 $\times
10^{4}\,\rm{cm}^{-3}$) and a UV-radiation field ( ~2 $\times 10^4\,\rm{G}_0$),
similar to the values in local starburst and ultra-luminous infrared galaxy
environments. A starburst phase is consistent with the short depletion times
($t_{\rm H2, dep} \leq 140$ Myr) we find using three different proxies for the
H2 mass ([CI], CO, dust mass). This depletion time is significantly shorter
than in more extended SFGs with similar stellar masses and SFRs. Moreover, the
gas fraction of 3D-HST GS30274 is smaller than typically found in extended
galaxies. We measure the CO and [CI] kinematics and find an FWHM line width of
~$750 \pm 41$ km s$^{-1}$. The CO and [CI] FWHM are consistent with a
previously measured H$\alpha$ FWHM for this source. The line widths are
consistent with gravitational motions, suggesting we are seeing a compact
molecular gas reservoir. A previous merger event, as suggested by the
asymmetric light profile, may be responsible for the compact distribution of
gas and has triggered a central starburst event. This event gives rise to the
starburst-like ISM properties and short depletion times. The centrally located
and efficient star formation is quickly building up a dense core of stars,
responsible for the compact distribution of stellar light in 3D-HST GS30274.
| 0 | 1 | 0 | 0 | 0 | 0 |
Faster Betweenness Centrality Updates in Evolving Networks | Finding central nodes is a fundamental problem in network analysis.
Betweenness centrality is a well-known measure which quantifies the importance
of a node based on the fraction of shortest paths going through it. Due to the
dynamic nature of many of today's networks, algorithms that quickly update
centrality scores have become a necessity. For betweenness, several dynamic
algorithms have been proposed over the years, targeting different update types
(incremental- and decremental-only, fully-dynamic). In this paper we introduce
a new dynamic algorithm for updating betweenness centrality after an edge
insertion or an edge weight decrease. Our method is a combination of two
independent contributions: a faster algorithm for updating pairwise distances
as well as number of shortest paths, and a faster algorithm for updating
dependencies. Whereas the worst-case running time of our algorithm is the same
as recomputation, our techniques considerably reduce the number of operations
performed by existing dynamic betweenness algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
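As background for the quantities such algorithms maintain: the betweenness of a node $v$ sums, over node pairs $(s,t)$, the fraction of shortest $s$-$t$ paths passing through $v$. The brute-force static sketch below (illustrative only, not the paper's dynamic algorithm; names are hypothetical) uses the standard path-counting identity $\sigma_{st}(v)=\sigma_{sv}\cdot\sigma_{vt}$ whenever $d(s,v)+d(v,t)=d(s,t)$.

```python
from collections import deque
from fractions import Fraction

def bfs_paths(adj, s):
    """BFS from s: shortest-path distances and shortest-path counts."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj, v):
    """Sum over pairs s < t of the fraction of s-t shortest paths through v."""
    total = Fraction(0)
    dist_v, sig_v = bfs_paths(adj, v)
    nodes = [n for n in adj if n != v]
    for s in nodes:
        dist_s, sig_s = bfs_paths(adj, s)
        for t in nodes:
            if t <= s or t not in dist_s or v not in dist_s or t not in dist_v:
                continue
            # v lies on an s-t shortest path iff the distances add up
            if dist_s[v] + dist_v[t] == dist_s[t]:
                total += Fraction(sig_s[v] * sig_v[t], sig_s[t])
    return total

path = {0: [1], 1: [0, 2], 2: [1]}   # path graph 0 - 1 - 2
assert betweenness(path, 1) == 1     # node 1 sits on the only 0-2 shortest path
```

Every edge insertion invalidates the cached distances and path counts `bfs_paths` produces, which is precisely the state the paper's update routines repair incrementally.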
Constraining Reionization with the $z \sim 5-6$ Lyman-$α$ Forest Power Spectrum: the Outlook after Planck | The latest measurements of the CMB electron scattering optical depth reported by
Planck significantly reduce the allowed space of HI reionization models,
pointing towards a later-ending and/or less extended phase transition than
previously believed. Reionization impulsively heats the intergalactic medium
(IGM) to $\sim10^4$ K, and owing to long cooling and dynamical times in the
diffuse gas, comparable to the Hubble time, memory of reionization heating is
retained. Therefore, a late ending reionization has significant implications
for the structure of the $z\sim5-6$ Lyman-$\alpha$ (ly$\alpha$) forest. Using
state-of-the-art hydrodynamical simulations that allow us to vary the timing of
reionization and its associated heat injection, we argue that extant thermal
signatures from reionization can be detected via the ly$\alpha$ forest power
spectrum at $5< z<6$. This arises because the small-scale cutoff in the power
depends not only on the IGM's temperature at these epochs, but is also
particularly sensitive to the pressure smoothing scale set by the IGM's full
thermal history. Comparing our different reionization models with existing
measurements of the ly$\alpha$ forest flux power spectrum at $z=5.0-5.4$, we
find that models satisfying Planck's $\tau_e$ constraint favor a moderate
amount of heat injection consistent with galaxies driving reionization, but
disfavoring quasar driven scenarios. We explore the impact of different
reionization histories and heating models on the shape of the power spectrum,
and find that they can produce similar effects, but argue that this degeneracy
can be broken with high enough quality data. We study the feasibility of
measuring the flux power spectrum at $z\simeq 6$ using mock quasar spectra and
conclude that a sample of $\sim10$ high-resolution spectra with attainable S/N
ratios will allow us to discriminate between different reionization scenarios.
| 0 | 1 | 0 | 0 | 0 | 0 |
Third-Person Imitation Learning | Reinforcement learning (RL) makes it possible to train agents capable of
achieving sophisticated goals in complex and uncertain environments. A key
difficulty in reinforcement learning is specifying a reward function for the
agent to optimize. Traditionally, imitation learning in RL has been used to
overcome this problem. Unfortunately, hitherto imitation learning methods tend
to require that demonstrations are supplied in the first person: the agent is
provided with a sequence of states and a specification of the actions that it
should have taken. While powerful, this kind of imitation learning is limited
by the relatively hard problem of collecting first-person demonstrations.
Humans address this problem by learning from third-person demonstrations: they
observe other humans perform tasks, infer the task, and accomplish the same
task themselves.
In this paper, we present a method for unsupervised third-person imitation
learning. Here, third-person refers to training an agent to correctly achieve
a simple goal in a simple environment when it is provided a demonstration of a
teacher achieving the same goal but from a different viewpoint; and
unsupervised refers to the fact that the agent receives only these third-person
demonstrations, and is not provided a correspondence between teacher states and
student states. Our method's primary insight is that recent advances in domain
confusion can be utilized to yield domain-agnostic features which are crucial
during the training process. To validate our approach, we report successful
experiments on learning from third-person demonstrations in a pointmass domain,
a reacher domain, and an inverted pendulum domain.
| 1 | 0 | 0 | 0 | 0 | 0 |
Model Trees for Identifying Exceptional Players in the NHL Draft | Drafting strong players is crucial for team success. We describe a new
data-driven interpretable approach for assessing draft prospects in the
National Hockey League. Successful previous approaches have built a predictive
model based on player features, or derived performance predictions from the
observed performance of comparable players in a cohort. This paper develops
model tree learning, which incorporates strengths of both model-based and
cohort-based approaches. A model tree partitions the feature space according to
the values of discrete features, or learned thresholds for continuous features.
Each leaf node in the tree defines a group of players, easily described to
hockey experts, with its own group regression model. Compared to a single
model, the model tree forms an ensemble that increases predictive power.
Compared to cohort-based approaches, the groups of comparables are discovered
from the data, without requiring a similarity metric. The performance
predictions of the model tree are competitive with the state-of-the-art
methods, which validates our model empirically. We show in case studies that
the model tree player ranking can be used to highlight strong and weak points
of players.
| 1 | 0 | 0 | 0 | 0 | 0 |
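The core idea of a model tree (threshold splits with a regression model in each leaf) can be shown in miniature. The depth-1 toy below with synthetic piecewise-linear data is illustrative only, not the paper's learner; all names, the fixed split threshold, and the data are hypothetical.

```python
def fit_line(pts):
    """Closed-form least squares for y = a*x + b over a list of (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

class ModelTree:
    """Depth-1 model tree: one threshold split, a linear model per leaf."""
    def __init__(self, threshold):
        self.t = threshold
        self.left = self.right = None
    def fit(self, pts):
        self.left = fit_line([p for p in pts if p[0] < self.t])
        self.right = fit_line([p for p in pts if p[0] >= self.t])
        return self
    def predict(self, x):
        a, b = self.left if x < self.t else self.right
        return a * x + b

# Piecewise-linear data: y = 2x below x = 1.0, y = -x + 3 above
pts = [(x / 10, 2 * x / 10) for x in range(10)] + \
      [(x / 10, -x / 10 + 3) for x in range(10, 20)]
tree = ModelTree(1.0).fit(pts)
assert abs(tree.predict(0.5) - 1.0) < 1e-6
assert abs(tree.predict(1.5) - 1.5) < 1e-6
```

A single global line cannot fit these data, but each leaf's local regression does; in the paper's setting the leaves additionally form interpretable cohorts of comparable players.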
Collapsed Dark Matter Structures | The distributions of dark matter and baryons in the Universe are known to be
very different: the dark matter resides in extended halos, while a significant
fraction of the baryons have radiated away much of their initial energy and
fallen deep into the potential wells. This difference in morphology leads to
the widely held conclusion that dark matter cannot cool and collapse on any
scale. We revisit this assumption, and show that a simple model where dark
matter is charged under a "dark electromagnetism" can allow dark matter to form
gravitationally collapsed objects with characteristic mass scales much smaller
than that of a Milky Way-type galaxy. Though the majority of the dark matter in
spiral galaxies would remain in the halo, such a model opens the possibility
that galaxies and their associated dark matter play host to a significant
number of collapsed substructures. The observational signatures of such
structures are not well explored, but potentially interesting.
| 0 | 1 | 0 | 0 | 0 | 0 |
Locally free actions of groupoids and proper topological correspondences | Let $(G,\alpha)$ and $(H,\beta)$ be locally compact Hausdorff groupoids with
Haar systems, and let $(X,\lambda)$ be a topological correspondence from
$(G,\alpha)$ to $(H,\beta)$ which induces the ${C}^*$-correspondence
$\mathcal{H}(X)\colon {C}^*(G,\alpha)\to {C}^*(H,\beta)$. We give sufficient
topological conditions under which the ${C}^*$-correspondence
$\mathcal{H}(X)$ is proper, that is, the ${C}^*$-algebra ${C}^*(G,\alpha)$ acts
on the Hilbert ${C}^*(H,\beta)$-module ${H}(X)$ via the compact operators. Thus
a proper topological correspondence produces an element in
${KK}({C}^*(G,\alpha),{C}^*(H,\beta))$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adaptive Lock-Free Data Structures in Haskell: A General Method for Concurrent Implementation Swapping | A key part of implementing high-level languages is providing built-in and
default data structures. Yet selecting good defaults is hard. A mutable data
structure's workload is not known in advance, and it may shift over its
lifetime - e.g., between read-heavy and write-heavy, or from heavy contention
by multiple threads to single-threaded or low-frequency use. One idea is to
switch implementations adaptively, but it is nontrivial to switch the
implementation of a concurrent data structure at runtime. Performing the
transition requires a concurrent snapshot of data structure contents, which
normally demands special engineering in the data structure's design. However,
in this paper we identify and formalize a relevant property of lock-free
algorithms. Namely, lock-freedom is sufficient to guarantee that freezing
memory locations in an arbitrary order will result in a valid snapshot. Several
functional languages have data structures that freeze and thaw, transitioning
between mutable and immutable, such as Haskell vectors and Clojure transients,
but these enable only single-threaded writers. We generalize this approach to
augment an arbitrary lock-free data structure with the ability to gradually
freeze and optionally transition to a new representation. This augmentation
doesn't require changing the algorithm or code for the data structure, only
replacing its datatype for mutable references with a freezable variant. In this
paper, we present an algorithm for lifting plain data structures to adaptive
ones, and prove that the resulting hybrid data structure is itself lock-free, linearizable, and
simulates the original. We also perform an empirical case study in the context
of heating up and cooling down concurrent maps.
| 1 | 0 | 0 | 0 | 0 | 0 |
Output feedback exponential stabilization of a nonlinear 1-D wave equation with boundary input | This paper develops systematically the output feedback exponential
stabilization for a one-dimensional unstable/anti-stable wave equation where
the control boundary suffers from both internal nonlinear uncertainty and
external disturbance. Using only two displacement signals, we propose a
disturbance estimator that not only estimates the disturbance successfully,
in the sense that the error is in $L^2(0,\infty)$, but is also free of high gain.
With the estimated disturbance, we design a state observer that is
exponentially convergent to the state of the original system. An observer-based
output feedback stabilizing control law is proposed. The disturbance is then
canceled in the feedback loop by its approximated value. The closed-loop system
is shown to be exponentially stable and it can be guaranteed that all internal
signals are uniformly bounded.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Microphotonic Astrocomb | One of the essential prerequisites for detection of Earth-like extra-solar
planets or direct measurements of the cosmological expansion is the accurate
and precise wavelength calibration of astronomical spectrometers. It has
already been realized that the large number of exactly known optical
frequencies provided by laser frequency combs ('astrocombs') can significantly
surpass conventionally used hollow-cathode lamps as calibration light sources.
A remaining challenge, however, is generation of frequency combs with lines
resolvable by astronomical spectrometers. Here we demonstrate an astrocomb
generated via soliton formation in an on-chip microphotonic resonator
('microresonator') with a resolvable line spacing of 23.7 GHz. This comb
provides wavelength calibration at the 10 cm/s radial velocity level on the
GIANO-B high-resolution near-infrared spectrometer. As such, microresonator
frequency combs have the potential of providing broadband wavelength
calibration for the next-generation of astronomical instruments in
planet-hunting and cosmological research.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train | For the past 5 years, the ILSVRC competition and the ImageNet dataset have
attracted a lot of interest from the Computer Vision community, allowing for
state-of-the-art accuracy to grow tremendously. This should be credited to the
use of deep artificial neural network designs. As these became more complex,
the storage, bandwidth, and compute requirements increased. This means that
with a non-distributed approach, even when using the highest-density server
available, the training process may take weeks, making it prohibitive.
Furthermore, as datasets grow, the representation learning potential of deep
networks grows as well by using more complex models. This synchronicity
triggers a sharp increase in the computational requirements and motivates us to
explore the scaling behaviour on petaflop scale supercomputers. In this paper
we will describe the challenges and novel solutions needed in order to train
ResNet-50 in this large scale environment. We demonstrate above 90\% scaling
efficiency and a training time of 28 minutes using up to 104K x86 cores. This
is supported by software tools from Intel's ecosystem. Moreover, we show that
with regular 90 - 120 epoch train runs we can achieve a top-1 accuracy as high
as 77\% for the unmodified ResNet-50 topology. We also introduce the novel
Collapsed Ensemble (CE) technique that allows us to obtain a 77.5\% top-1
accuracy, similar to that of a ResNet-152, while training an unmodified
ResNet-50 topology for the same fixed training budget. All ResNet-50 models as
well as the scripts needed to replicate them will be posted shortly.
| 1 | 0 | 0 | 1 | 0 | 0 |
Alphabet-dependent Parallel Algorithm for Suffix Tree Construction for Pattern Searching | Suffix trees have recently become very successful data structures for handling
large data sequences such as DNA or protein sequences. Meanwhile, parallel
architectures have become ubiquitous. We present a novel alphabet-dependent
parallel algorithm which attempts to take advantage of the pervasiveness of
multicore architectures. Microsatellites are important for their biological
relevance, hence our algorithm is based on a time-efficient construction for
identifying them. We experimentally achieved up to 15x speedup over the
sequential algorithm on different input sizes of biological sequences.
| 1 | 0 | 0 | 0 | 0 | 0 |
A coupled mitral valve -- left ventricle model with fluid-structure interaction | Understanding the interaction between the valves and walls of the heart is
important in assessing and subsequently treating heart dysfunction. With
advancements in cardiac imaging, nonlinear mechanics and computational
techniques, it is now possible to explore the mechanics of valve-heart
interactions using anatomically and physiologically realistic models. This
study presents an integrated model of the mitral valve (MV) coupled to the left
ventricle (LV), with the geometry derived from in vivo clinical magnetic
resonance images. Numerical simulations using this coupled MV-LV model are
developed using an immersed boundary/finite element method. The model
incorporates detailed valvular features, left ventricular contraction,
nonlinear soft tissue mechanics, and fluid-mediated interactions between the MV
and LV wall. We use the model to simulate the cardiac function from diastole to
systole, and investigate how the myocardial active relaxation function affects the
LV pump function. The results of the new model agree with in vivo measurements,
and demonstrate that the diastolic filling pressure increases significantly
with impaired myocardial active relaxation to maintain the normal cardiac
output. The coupled model has the potential to advance fundamental knowledge of
mechanisms underlying MV-LV interaction, and help in risk stratification and
optimization of therapies for heart diseases.
| 1 | 1 | 0 | 0 | 0 | 0 |
Performance of an Algorithm for Estimation of Flux, Background and Location on One-Dimensional Signals | Optimal estimation of signal amplitude, background level, and photocentre
location is crucial to the combined extraction of astrometric and photometric
information from focal plane images, and in particular from the one-dimensional
measurements performed by Gaia on intermediate to faint magnitude stars. Our
goal is to define a convenient maximum likelihood framework, suited to
efficient iterative implementation and to assessment of noise level, bias, and
correlation among variables. The analytical model is investigated numerically
and verified by simulation over a range of magnitude and background values. The
estimates are unbiased, with a well-understood correlation between amplitude
and background, and with a much lower correlation of either of them with
location, further alleviated in case of signal symmetry. Two versions of the
algorithm are implemented and tested against each other, respectively, for
independent and combined parameter estimation. Both are effective and provide
consistent results, but the latter is more efficient because it takes into
account the flux-background estimate correlation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Depth Creates No Bad Local Minima | In deep learning, \textit{depth}, as well as \textit{nonlinearity}, create
non-convex loss surfaces. Then, does depth alone create bad local minima? In
this paper, we prove that without nonlinearity, depth alone does not create bad
local minima, although it induces a non-convex loss surface. Using this insight,
we greatly simplify a recently proposed proof to show that all of the local
minima of feedforward deep linear neural networks are global minima. Our
theoretical results generalize previous results with fewer assumptions, and
this analysis provides a method to show similar results beyond square loss in
deep linear models.
| 1 | 0 | 1 | 1 | 0 | 0 |
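The claim can be illustrated on the smallest "deep linear" example: the scalar loss $L(w_1, w_2) = (w_1 w_2 - 1)^2/2$ is non-convex (it has a saddle at the origin), yet gradient descent from a generic initialization reaches the global minimum. This numeric sketch is illustrative, not the paper's proof; the step size and iteration count are hypothetical choices.

```python
def train(w1, w2, eta=0.05, steps=2000):
    """Gradient descent on L = (w1*w2 - 1)^2 / 2, a scalar two-layer
    linear 'network' w1*w2 fitting the target value 1."""
    for _ in range(steps):
        r = w1 * w2 - 1.0          # residual
        g1, g2 = r * w2, r * w1    # dL/dw1 and dL/dw2
        w1, w2 = w1 - eta * g1, w2 - eta * g2
    return w1, w2

w1, w2 = train(0.5, 0.3)
assert abs(w1 * w2 - 1.0) < 1e-6   # global minimum reached: loss ~ 0
```

The only critical points other than the global minima lie on the saddle manifold through $(0,0)$, mirroring (in one dimension) the result that deep linear networks have no bad local minima.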
Some Formulas for Numbers of Restricted Words | We define a quantity $c_m(n,k)$ as a generalization of the notion of the
composition of the positive integer $n$ into $k$ parts. We proceed to derive
some known properties of this quantity. In particular, we relate two partial
Bell polynomials, in which the sequence of the variables of one polynomial is
the invert transform of the sequence of the variables of the other. We connect
the quantities $c_m(n,k)$ and $c_{m-1}(n,k)$ via Pascal matrices. We then
relate $c_m(n,k)$ with the numbers of some restricted words over a finite
alphabet. We develop a method which transfers some properties of restricted
words over an alphabet of $N$ letters to the restricted words over an alphabet
of $N+1$ letters. Several examples illustrate our findings. Note that all our
results depend solely on the initial arithmetic function $f_0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mean Birds: Detecting Aggression and Bullying on Twitter | In recent years, bullying and aggression against users on social media have
grown significantly, causing serious consequences to victims of all
demographics. In particular, cyberbullying affects more than half of young
social media users worldwide, and has also led to teenage suicides, prompted by
prolonged and/or coordinated digital harassment. Nonetheless, tools and
technologies for understanding and mitigating it are scarce and mostly
ineffective. In this paper, we present a principled and scalable approach to
detect bullying and aggressive behavior on Twitter. We propose a robust
methodology for extracting text, user, and network-based attributes, studying
the properties of cyberbullies and aggressors, and what features distinguish
them from regular users. We find that bully users post less, participate in
fewer online communities, and are less popular than normal users, while
aggressors are quite popular and tend to include more negativity in their
posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3
months, and show that machine learning classification algorithms can accurately
detect users exhibiting bullying and aggressive behavior, achieving over 90%
AUC.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards an open standard for assessing the severity of robot security vulnerabilities, the Robot Vulnerability Scoring System (RVSS) | Robots are typically not created with security as a main concern. In contrast
to typical IT systems, cyberphysical systems rely on security to handle safety
aspects. In light of the former, classic scoring methods such as the Common
Vulnerability Scoring System (CVSS) are not able to accurately capture the
severity of robot vulnerabilities. The present research work focuses on
creating an open and free-to-access Robot Vulnerability Scoring System (RVSS)
that considers major relevant issues in robotics including a) robot safety
aspects, b) assessment of downstream implications of a given vulnerability, c)
library and third-party scoring assessments and d) environmental variables,
such as time since vulnerability disclosure or exposure on the web. Finally, an
experimental evaluation of RVSS with contrast to CVSS is provided and discussed
with focus on the robotics security landscape.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fully Distributed and Asynchronized Stochastic Gradient Descent for Networked Systems | This paper considers a general data-fitting problem over a networked system,
in which many computing nodes are connected by an undirected graph. This kind
of problem can find many real-world applications and has been studied
extensively in the literature. However, existing solutions either need a
central controller for information sharing or require slot synchronization
among different nodes, which increases the difficulty of practical
implementation, especially for a very large and heterogeneous system.
By contrast, in this paper, we treat the data-fitting problem over the
network as a stochastic programming problem with many constraints. By adapting
the results in a recent paper, we design a fully distributed and asynchronized
stochastic gradient descent (SGD) algorithm. We show that our algorithm can
achieve global optimality and consensus asymptotically by only local
computations and communications. Additionally, we provide a sharp lower bound
for the convergence speed in the regular graph case. This result fits the
intuition and provides guidance to design a `good' network topology to speed up
the convergence. Also, the merit of our design is validated by experiments on
both synthetic and real-world datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Priv'IT: Private and Sample Efficient Identity Testing | We develop differentially private hypothesis testing methods for the small
sample regime. Given a sample $\cal D$ from a categorical distribution $p$ over
some domain $\Sigma$, an explicitly described distribution $q$ over $\Sigma$,
some privacy parameter $\varepsilon$, accuracy parameter $\alpha$, and
requirements $\beta_{\rm I}$ and $\beta_{\rm II}$ for the type I and type II
errors of our test, the goal is to distinguish between $p=q$ and
$d_{\rm{TV}}(p,q) \geq \alpha$.
We provide theoretical bounds for the sample size $|{\cal D}|$ so that our
method both satisfies $(\varepsilon,0)$-differential privacy, and guarantees
$\beta_{\rm I}$ and $\beta_{\rm II}$ type I and type II errors. We show that
differential privacy may come for free in some regimes of parameters, and we
always beat the sample complexity resulting from running the $\chi^2$-test with
noisy counts, or standard approaches such as repetition for endowing
non-private $\chi^2$-style statistics with differential privacy guarantees. We
experimentally compare the sample complexity of our method to that of recently
proposed methods for private hypothesis testing.
| 1 | 0 | 1 | 1 | 0 | 0 |
Radiative Transfer for Exoplanet Atmospheres | Remote sensing of the atmospheres of distant worlds motivates a firm
understanding of radiative transfer. In this review, we provide a pedagogical
cookbook that describes the principal ingredients needed to perform a radiative
transfer calculation and predict the spectrum of an exoplanet atmosphere,
including solving the radiative transfer equation, calculating opacities (and
chemistry), iterating for radiative equilibrium (or not), and adapting the
output of the calculations to the astronomical observations. A review of the
state of the art is performed, focusing on selected milestone papers.
Outstanding issues, including the need to understand aerosols or clouds and
elucidating the assumptions and caveats behind inversion methods, are
discussed. A checklist is provided to assist referees/reviewers in their
scrutiny of works involving radiative transfer. A table summarizing the
methodology employed by past studies is provided.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computational Results for Extensive-Form Adversarial Team Games | We provide, to the best of our knowledge, the first computational study of
extensive-form adversarial team games. These games are sequential, zero-sum
games in which a team of players, sharing the same utility function, faces an
adversary. We define three different scenarios according to the communication
capabilities of the team. In the first, the teammates can communicate and
correlate their actions both before and during the play. In the second, they
can only communicate before the play. In the third, no communication is
possible at all. We define the most suitable solution concepts, and we study
the inefficiency caused by partial or null communication, showing that the
inefficiency can be arbitrarily large in the size of the game tree.
Furthermore, we study the computational complexity of the equilibrium-finding
problem in the three scenarios mentioned above, and we provide, for each of the
three scenarios, an exact algorithm. Finally, we empirically evaluate the
scalability of the algorithms in random games and the inefficiency caused by
partial or null communication.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Parametric MPC Approach to Balancing the Cost of Abstraction for Differential-Drive Mobile Robots | When designing control strategies for differential-drive mobile robots, one
standard tool is the consideration of a point at a fixed distance along a line
orthogonal to the wheel axis instead of the full pose of the vehicle. This
abstraction supports replacing the non-holonomic, three-state unicycle model
with a much simpler two-state single-integrator model (i.e., a
velocity-controlled point). Yet this transformation comes at a performance
cost in terms of the robot's precision and maneuverability. This work contains
derivations for expressions of these precision and maneuverability costs in
terms of the transformation's parameters. Furthermore, these costs show that
only selecting the parameter once over the course of an application may cause
an undue loss of precision. Model Predictive Control (MPC) represents one such
method to ameliorate this condition. However, MPC typically realizes a control
signal, rather than a parameter, so this work also proposes a Parametric Model
Predictive Control (PMPC) method for parameter and sampling horizon
optimization. Experimental results are presented that demonstrate the effects
of the parameterization on the deployment of algorithms developed for the
single-integrator model on actual differential-drive mobile robots.
| 1 | 0 | 0 | 0 | 0 | 0 |
A norm knockout method on indirect reciprocity to reveal indispensable norms | Although various norms for reciprocity-based cooperation have been suggested
that are evolutionarily stable against invasion from free riders, the process
of alternation of norms and the role of diversified norms remain unclear in the
evolution of cooperation. We clarify the co-evolutionary dynamics of norms and
cooperation in indirect reciprocity and also identify the indispensable norms
for the evolution of cooperation. Inspired by the gene knockout method, a
genetic engineering technique, we developed the norm knockout method and
clarified the norms necessary for the establishment of cooperation. The results
of numerical investigations revealed that the majority of norms gradually
transitioned to tolerant norms after defectors were eliminated by strict norms.
Furthermore, no cooperation emerges when specific norms that are intolerant to
defectors are knocked out.
| 1 | 1 | 0 | 0 | 0 | 0 |
Online control of the false discovery rate with decaying memory | In the online multiple testing problem, p-values corresponding to different
null hypotheses are observed one by one, and the decision of whether or not to
reject the current hypothesis must be made immediately, after which the next
p-value is observed. Alpha-investing algorithms to control the false discovery
rate (FDR), formulated by Foster and Stine, have been generalized and applied
to many settings, including quality-preserving databases in science and
multiple A/B or multi-armed bandit tests for internet commerce. This paper
improves the class of generalized alpha-investing algorithms (GAI) in four
ways: (a) we show how to uniformly improve the power of the entire class of
monotone GAI procedures by awarding more alpha-wealth for each rejection,
giving a win-win resolution to a recent dilemma raised by Javanmard and
Montanari, (b) we demonstrate how to incorporate prior weights to indicate
domain knowledge of which hypotheses are likely to be non-null, (c) we allow
for differing penalties for false discoveries to indicate that some hypotheses
may be more important than others, (d) we define a new quantity called the
decaying memory false discovery rate (mem-FDR) that may be more meaningful for
truly temporal applications, and which alleviates problems that we describe and
refer to as "piggybacking" and "alpha-death". Our GAI++ algorithms incorporate
all four generalizations simultaneously, and reduce to more powerful variants
of earlier algorithms when the weights and decay are all set to unity. Finally,
we also describe a simple method to derive new online FDR rules based on an
estimated false discovery proportion.
| 1 | 0 | 1 | 1 | 0 | 0 |
Multi-agent Reinforcement Learning Embedded Game for the Optimization of Building Energy Control and Power System Planning | Most of the current game-theoretic demand-side management methods focus
primarily on the scheduling of home appliances, and the related numerical
experiments are analyzed under various scenarios to achieve the corresponding
Nash-equilibrium (NE) and optimal results. However, not much work has been
conducted for academic or commercial buildings. The methods for optimizing
academic buildings are distinct from the optimal methods for home appliances.
In this study, we address a novel methodology to control the operation of
the heating, ventilation, and air conditioning (HVAC) system. With the development
of Artificial Intelligence and computer technologies, reinforcement learning
(RL) can be implemented in multiple realistic scenarios and help people to
solve thousands of real-world problems. Reinforcement Learning, which is
considered as the art of future AI, builds the bridge between agents and
environments through Markov decision chains or neural networks and has seldom
been used in power systems. The art of RL is that once the simulator for a
specific environment is built, the algorithm can keep learning from the
environment. Therefore, RL is capable of dealing with constantly changing
simulator inputs such as power demand, the condition of the power system, and
outdoor temperature, etc. Compared with the existing distribution power system
planning mechanisms and the related game-theoretic methodologies, our
proposed algorithm can plan and optimize the hourly energy usage, and has the
ability to operate with an even shorter time window if needed.
| 1 | 0 | 0 | 1 | 0 | 0 |
A locally quasi-convex abelian group without Mackey topology | We give the first example of a locally quasi-convex (even countable reflexive
and $k_\omega$) abelian group $G$ which does not admit the strongest compatible
locally quasi-convex group topology. Our group $G$ is the Graev free abelian
group $A_G(\mathbf{s})$ over a convergent sequence $\mathbf{s}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
How accurate is density functional theory at predicting dipole moments? An assessment using a new database of 200 benchmark values | Dipole moments are a simple, global measure of the accuracy of the electron
density of a polar molecule. Dipole moments also affect the interactions of a
molecule with other molecules as well as electric fields. To directly assess
the accuracy of modern density functionals for calculating dipole moments, we
have developed a database of 200 benchmark dipole moments, using coupled
cluster theory through triple excitations, extrapolated to the complete basis
set limit. This new database is used to assess the performance of 88 popular or
recently developed density functionals. The results suggest that double hybrid
functionals perform the best, yielding dipole moments within about 3.6-4.5%
regularized RMS error versus the reference values---which is not very different
from the 4% regularized RMS error produced by coupled cluster singles and
doubles. Many hybrid functionals also perform quite well, generating
regularized RMS errors in the 5-6% range. Some functionals, however, exhibit
large outliers, and local functionals in general perform less well than hybrids
or double hybrids.
| 0 | 1 | 0 | 0 | 0 | 0 |
Understanding the Impact of Early Citers on Long-Term Scientific Impact | This paper explores an interesting new dimension to the challenging problem
of predicting long-term scientific impact (LTSI) usually measured by the number
of citations accumulated by a paper in the long-term. It is well known that
early citations (within 1-2 years after publication) acquired by a paper
positively affect its LTSI. However, there is no work that investigates whether the
set of authors who bring in these early citations to a paper also affect its
LTSI. In this paper, we demonstrate for the first time, the impact of these
authors whom we call early citers (EC) on the LTSI of a paper. Note that this
study of the complex dynamics of EC introduces a brand new paradigm in citation
behavior analysis. Using a massive computer science bibliographic dataset we
identify two distinct categories of EC: we call those authors who have a high
overall publication/citation count in the dataset influential and the rest of
the authors non-influential. We investigate three characteristic
properties of EC and present an extensive analysis of how each category
correlates with LTSI in terms of these properties. In contrast to popular
perception, we find that influential EC negatively affects LTSI possibly owing
to attention stealing. To motivate this, we present several representative
examples from the dataset. A closer inspection of the collaboration network
reveals that this stealing effect is more profound if an EC is nearer to the
authors of the paper being investigated. As an intuitive use case, we show that
incorporating EC properties in the state-of-the-art supervised citation
prediction models leads to high performance margins. At the closing, we present
an online portal to visualize EC statistics along with the prediction results
for a given query paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cause-Effect Deep Information Bottleneck For Incomplete Covariates | Estimating the causal effects of an intervention in the presence of
confounding is a frequently occurring problem in applications such as medicine.
The task is challenging since there may be multiple confounding factors, some
of which may be missing, and inferences must be made from high-dimensional,
noisy measurements. In this paper, we propose a decision-theoretic approach to
estimate the causal effects of interventions where a subset of the covariates
is unavailable for some patients during testing. Our approach uses the
information bottleneck principle to perform a discrete, low-dimensional
sufficient reduction of the covariate data to estimate a distribution over
confounders. In doing so, we can estimate the causal effect of an intervention
where only partial covariate information is available. Our results on a causal
inference benchmark and a real application for treating sepsis show that our
method achieves state-of-the-art performance, without sacrificing
interpretability.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the restricted almost unbiased Liu estimator in the Logistic regression model | It is known that when multicollinearity exists in the logistic regression
model, the variance of the maximum likelihood estimator is unstable. As a remedy, in
the context of biased shrinkage ridge estimation, Chang (2015) introduced an
almost unbiased Liu estimator in the logistic regression model. Making use of
his approach, when some prior knowledge in the form of linear restrictions is
also available, we introduce a restricted almost unbiased Liu estimator in the
logistic regression model. Statistical properties of this newly defined
estimator are derived and some comparison results are also provided in the form
of theorems. A Monte Carlo simulation study along with a real data example is
given to investigate the performance of this estimator.
| 0 | 0 | 1 | 1 | 0 | 0 |
Camera-trap images segmentation using multi-layer robust principal component analysis | The segmentation of animals from camera-trap images is a difficult task. To
illustrate, there are various challenges due to environmental conditions and
hardware limitations in these images. We propose a multi-layer robust principal
component analysis (multi-layer RPCA) approach for background subtraction. Our
method computes sparse and low-rank images from a weighted sum of descriptors,
using color and texture features as a case study for camera-trap image
segmentation. The segmentation algorithm is composed of histogram equalization
or Gaussian filtering as pre-processing, and morphological filters with active
contour as post-processing. The parameters of our multi-layer RPCA were
optimized with an exhaustive search. The database consists of camera-trap
images from the Colombian forest taken by the Instituto de Investigación de
Recursos Biológicos Alexander von Humboldt. We analyzed the performance of
our method in the inherently challenging conditions of camera-trap
images. Furthermore, we compared our method with some state-of-the-art
algorithms of background subtraction, where our multi-layer RPCA outperformed
these other methods. Our multi-layer RPCA reached 76.17% and 69.97% average
fine-grained F-measure for color and infrared sequences, respectively. To the
best of our knowledge, this paper is the first work to propose multi-layer RPCA
and to use it for camera-trap image segmentation.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Bi-Lipschitz Equisingularity of Essentially Isolated Determinantal Singularities | The bi-Lipschitz geometry is one of the main subjects in the modern approach
of Singularity Theory. However, it arises from the works of important mathematicians
of the last century, especially Zariski. In this work we investigate the
Bi-Lipschitz equisingularity of families of Essentially Isolated Determinantal
Singularities inspired by the approach of Mostowski and Gaffney.
| 0 | 0 | 1 | 0 | 0 | 0 |
Braid group symmetries of Grassmannian cluster algebras | We define an action of the extended affine d-strand braid group on the open
positroid stratum in the Grassmannian Gr(k,n), for d the greatest common
divisor of k and n. The action is by quasi-automorphisms of the cluster
structure on the Grassmannian, determining a homomorphism from the extended
affine braid group to the cluster modular group. We also define a
quasi-isomorphism between the Grassmannian Gr(k,rk) and the Fock-Goncharov
configuration space of 2r-tuples of affine flags for SL(k). This identifies the
cluster variables, clusters, and cluster modular groups, in these two cluster
structures.
Fomin and Pylyavskyy proposed a description of the cluster combinatorics for
Gr(3,n) in terms of Kuperberg's basis of non-elliptic webs. As our main
application, we prove many of their conjectures for Gr(3,9) and give a
presentation for its cluster modular group. We establish similar results for
Gr(4,8). These results rely on the fact that both of these Grassmannians have
finite mutation type.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bi-$s^*$-concave distributions | We introduce a new shape-constrained class of distribution functions on R,
the bi-$s^*$-concave class. In parallel to results of Dümbgen, Kolesnyk, and
Wilke (2017) for what they called the class of bi-log-concave distribution
functions, we show that every s-concave density f has a bi-$s^*$-concave
distribution function $F$ and that every bi-$s^*$-concave distribution function
satisfies $\gamma (F) \le 1/(1+s)$ where finiteness of $$ \gamma (F) \equiv
\sup_{x} F(x)(1-F(x)) \frac{|f'(x)|}{f^2(x)}, $$ the Csörgő–Révész
constant of $F$, plays an important role in the theory of quantile
processes on $R$.
| 0 | 0 | 1 | 1 | 0 | 0 |
A central limit theorem for the realised covariation of a bivariate Brownian semistationary process | This article presents a weak law of large numbers and a central limit theorem
for the scaled realised covariation of a bivariate Brownian semistationary
process. The novelty of our results lies in the fact that we derive the
suitable asymptotic theory both in a multivariate setting and outside the
classical semimartingale framework. The proofs rely heavily on recent
developments in Malliavin calculus.
| 0 | 0 | 1 | 1 | 0 | 0 |
First principles investigations of electronic, magnetic and bonding peculiarities of uranium nitride-fluoride UNF | Based on geometry optimization and magnetic structure investigations within
density functional theory, unique uranium nitride fluoride UNF, isoelectronic
with UO2, is shown to present peculiar differentiated physical properties. Such
specificities versus the oxide are related with the mixed anionic sublattices
and the layered-like tetragonal structure characterized by covalent-like
[U2N2]2+ motifs interlayered by ionic-like [F2]2- ones, illustrated herein
with electron localization function graphs. Particularly the ionocovalent
chemical picture shows, based on overlap population analyses, stronger U-N
bonding versus N-F and d(U-N) < d(U-F) distances. Based on LDA+U calculations
the ground-state magnetic structure is an insulating antiferromagnet with a
magnetization of 2 Bohr magnetons per magnetic subcell and a ~2 eV band gap.
| 0 | 1 | 0 | 0 | 0 | 0 |
Eight-cluster structure of chloroplast genomes differs from similar one observed for bacteria | Previously, a seven-cluster pattern claiming to be a universal one in
bacterial genomes has been reported. Keeping in mind the most popular theory of
chloroplast origin, we checked whether a similar pattern is observed in
chloroplast genomes. Surprisingly, an eight-cluster structure has been found
for chloroplasts. The pattern observed for chloroplasts differs rather
significantly both from the bacterial one and from that observed for
cyanobacteria. The structure is obtained by clustering fragments of
equal length isolated within a genome, where each fragment is converted into a
triplet frequency dictionary of non-overlapping triplets with no gaps in the
frame tiling. The points in 63-dimensional space were clustered using the
elastic map technique. The eighth cluster found in chloroplasts comprises the fragments
of a genome bearing tRNA genes and exhibiting excessively high
$\mathsf{GC}$-content, in comparison to the entire genome.
| 0 | 0 | 0 | 0 | 1 | 0 |
Remote Document Encryption - encrypting data for e-passport holders | We show how any party can encrypt data for an e-passport holder such that
only with physical possession of the e-passport decryption is possible. The
same is possible for electronic identity cards and driver licenses. We also
indicate possible applications. Dutch passports allow for 160 bit security,
theoretically giving sufficient security beyond the year 2079, exceeding
current good practice of 128 bit security. We also introduce the notion of RDE
Extraction PIN which effectively provides the same security as a regular PIN.
Our results ironically suggest that carrying a passport when traveling abroad
might violate export or import laws on strong cryptography.
| 1 | 0 | 0 | 0 | 0 | 0 |
Full likelihood inference for max-stable data | We show how to perform full likelihood inference for max-stable multivariate
distributions or processes based on a stochastic Expectation-Maximisation
algorithm, which combines statistical and computational efficiency in
high-dimensions. The good performance of this methodology is demonstrated by
simulation based on the popular logistic and Brown--Resnick models, and it is
shown to provide dramatic computational time improvements with respect to a
direct computation of the likelihood. Strategies to further reduce the
computational burden are also discussed.
| 0 | 0 | 0 | 1 | 0 | 0 |
Nef vector bundles on a projective space with first Chern class 3 and second Chern class 8 | We describe nef vector bundles on a projective space with first Chern class
three and second Chern class eight over an algebraically closed field of
characteristic zero by giving them a minimal resolution in terms of a full
strong exceptional collection of line bundles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Performance of irradiated thin n-in-p planar pixel sensors for the ATLAS Inner Tracker upgrade | The ATLAS collaboration will replace its tracking detector with new
all-silicon pixel and strip systems. This will allow it to cope with the higher
radiation and occupancy levels expected after the 5-fold increase in the
luminosity of the LHC accelerator complex (HL-LHC). In the new tracking
detector (ITk), pixel modules with increased granularity will be implemented to
maintain a manageable occupancy at the higher track density. In addition, both sensors
and read-out chips composing the hybrid modules will be produced employing more
radiation hard technologies with respect to the present pixel detector. Due to
their outstanding performance in terms of radiation hardness, thin n-in-p
sensors are promising candidates to instrument a section of the new pixel
system. Recently produced and developed sensors of new designs will be
presented. To test the sensors before interconnection to chips, a punch-through
biasing structure has been implemented. Its design has been optimized to
decrease the possible tracking efficiency losses observed. After irradiation,
they were caused by the punch-through biasing structure. A sensor compatible
with the ATLAS FE-I4 chip with a pixel size of 50x250 $\mathrm{\mu}$m$^{2}$,
subdivided into smaller pixel implants of 30x30 $\mathrm{\mu}$m$^{2}$ size was
designed to investigate the performance of the 50x50 $\mathrm{\mu}$m$^{2}$
pixel cells foreseen for the HL-LHC. Results on sensor performance of 50x250
and 50x50 $\mathrm{\mu}$m$^{2}$ pixel cells in terms of efficiency, charge
collection and electric field properties are obtained with beam tests and the
Transient Current Technique.
| 0 | 1 | 0 | 0 | 0 | 0 |
Assessment of sound spatialisation algorithms for sonic rendering with headsets | Given an input sound signal and a target virtual sound source, sound
spatialisation algorithms manipulate the signal so that a listener perceives it
as though it were emitted from the target source. There exist several
established spatialisation approaches that deliver satisfactory results when
loudspeakers are used to playback the manipulated signal. As headphones have a
number of desirable characteristics over loudspeakers, such as portability,
isolation from the surrounding environment, cost and ease of use, it is
interesting to explore how a sense of acoustic space can be conveyed through
them. This article first surveys traditional spatialisation approaches intended
for loudspeakers, and then reviews them with regard to their adaptability to
headphones.
| 1 | 0 | 0 | 0 | 0 | 0 |
Identification of individual coherent sets associated with flow trajectories using Coherent Structure Coloring | We present a method for identifying the coherent structures associated with
individual Lagrangian flow trajectories even where only sparse particle
trajectory data is available. The method, based on techniques in spectral graph
theory, uses the Coherent Structure Coloring vector and associated eigenvectors
to analyze the distance in higher-dimensional eigenspace between a selected
reference trajectory and other tracer trajectories in the flow. By analyzing
this distance metric in a hierarchical clustering, the coherent structure of
which the reference particle is a member can be identified. This algorithm is
shown to successfully identify coherent structures of varying complexity in
canonical unsteady flows. Additionally, the method is able to assess the
relative coherence of the associated structure in comparison to the surrounding
flow. Although the method is demonstrated here in the context of fluid flow
kinematics, the generality of the approach allows for its potential application
to other unsupervised clustering problems in dynamical systems such as neuronal
activity, gene expression, or social networks.
| 0 | 1 | 0 | 1 | 0 | 0 |
Gaussian Processes for HRF estimation for BOLD fMRI | We present a non-parametric joint estimation method for fMRI task activation
values and the hemodynamic response function (HRF). The HRF is modeled as a
Gaussian process, making continuous evaluation possible for jittered paradigms
and providing a variance estimate at each point.
| 0 | 0 | 0 | 1 | 0 | 0 |
Iterated Elliptic and Hypergeometric Integrals for Feynman Diagrams | We calculate 3-loop master integrals for heavy quark correlators and the
3-loop QCD corrections to the $\rho$-parameter. They obey non-factorizing
differential equations of second order with more than three singularities,
which cannot be factorized in Mellin-$N$ space either. The solution of the
homogeneous equations is possible in terms of convergent close integer power
series as $_2F_1$ Gauß hypergeometric functions at rational argument. In
some cases, integrals of this type can be mapped to complete elliptic integrals
at rational argument. This class of functions appears to be the next one
arising in the calculation of more complicated Feynman integrals following the
harmonic polylogarithms, generalized polylogarithms, cyclotomic harmonic
polylogarithms, square-root valued iterated integrals, and combinations
thereof, which appear in simpler cases. The inhomogeneous solution of the
corresponding differential equations can be given in terms of iterative
integrals, where the new innermost letter itself is not an iterative integral.
A new class of iterative integrals is introduced containing letters in which
(multiple) definite integrals appear as factors. For the elliptic case, we also
derive the solution in terms of integrals over modular functions and also
modular forms, using $q$-product and series representations implied by Jacobi's
$\vartheta_i$ functions and Dedekind's $\eta$-function. The corresponding
representations can be traced back to polynomials out of Lambert--Eisenstein
series, having representations also as elliptic polylogarithms, a $q$-factorial
$1/\eta^k(\tau)$, logarithms and polylogarithms of $q$ and their $q$-integrals.
Due to the specific form of the physical variable $x(q)$ for different
processes, different representations do usually appear. Numerical results are
also presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stein Variational Message Passing for Continuous Graphical Models | We propose a novel distributed inference algorithm for continuous graphical
models, by extending Stein variational gradient descent (SVGD) to leverage the
Markov dependency structure of the distribution of interest. Our approach
combines SVGD with a set of structured local kernel functions defined on the
Markov blanket of each node, which alleviates the curse of high dimensionality
and simultaneously yields a distributed algorithm for decentralized inference
tasks. We justify our method with theoretical analysis and show that the use of
local kernels can be viewed as a new type of localized approximation that
matches the target distribution on the conditional distributions of each node
over its Markov blanket. Our empirical results show that our method outperforms
a variety of baselines including standard MCMC and particle message passing
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
PEBP1/RKIP: from multiple functions to a common role in cellular processes | PEBPs (PhosphatidylEthanolamine Binding Proteins) form a protein family
widely present in the living world since they are encountered in
microorganisms, plants and animals. In all organisms PEBPs appear to regulate
important mechanisms that govern cell cycle, proliferation, differentiation and
motility. In humans, three PEBPs have been identified, namely PEBP1, PEBP2 and
PEBP4. PEBP1 and PEBP4 are the most studied as they are implicated in the
development of various cancers. PEBP2 is specific to the testes in mammals and
has been studied mainly in rats and mice, where it is very abundant. A lot of
information has been gained on PEBP1, also named RKIP (Raf Kinase Inhibitory
Protein), due to its role as a metastasis suppressor in cancer. PEBP1 has also
been shown to be implicated in Alzheimer's disease, diabetes and
nephropathies. Furthermore, PEBP1 was described to be involved in many cellular
processes, among them are signal transduction, inflammation, cell cycle,
proliferation, adhesion, differentiation, apoptosis, autophagy, circadian
rhythm and mitotic spindle checkpoint. On the molecular level, PEBP1 was shown
to regulate several signaling pathways such as Raf/MEK/ERK, NFkB,
PI3K/Akt/mTOR, p38, Notch and Wnt. PEBP1 acts by inhibiting most of the kinases
of these signaling cascades. Moreover, PEBP1 is able to bind to a variety of
small ligands such as ATP, phospholipids, nucleotides, flavonoids or drugs.
Considering that PEBP1 is a small cytoplasmic protein (21 kDa), its involvement in so
many diseases and cellular mechanisms is amazing. The aim of this review is to
highlight the molecular systems that are common to all these cellular
mechanisms in order to decipher the specific role of PEBP1. Recent discoveries
enable us to propose that PEBP1 is a modulator of molecular interactions that
control signal transduction during membrane and cytoskeleton reorganization.
| 0 | 0 | 0 | 0 | 1 | 0 |
A Characterization Theorem for a Modal Description Logic | Modal description logics feature modalities that capture dependence of
knowledge on parameters such as time, place, or the information state of
agents. E.g., the logic S5-ALC combines the standard description logic ALC with
an S5-modality that can be understood as an epistemic operator or as
representing (undirected) change. This logic embeds into a corresponding modal
first-order logic S5-FOL. We prove a modal characterization theorem for this
embedding, in analogy to results by van Benthem and Rosen relating ALC to
standard first-order logic: We show that S5-ALC with only local roles is, both
over finite and over unrestricted models, precisely the bisimulation invariant
fragment of S5-FOL, thus giving an exact description of the expressive power of
S5-ALC with only local roles.
| 1 | 0 | 1 | 0 | 0 | 0 |
Emergence of magnetic long-range order in kagome quantum antiferromagnets | The existence of a spin-liquid ground state of the $s=1/2$ Heisenberg kagome
antiferromagnet (KAFM) is well established. Meanwhile, also for the $s=1$
Heisenberg KAFM evidence for the absence of magnetic long-range order (LRO) was
found. Magnetic LRO in Heisenberg KAFMs can emerge by increasing the spin
quantum number $s$ to $s>1$ and for $s=1$ by an easy-plane anisotropy. In the
present paper we discuss the route to magnetic order in $s=1/2$ KAFMs by
including an isotropic interlayer coupling (ILC) $J_\perp$ as well as an
easy-plane anisotropy in the kagome layers by using the coupled-cluster method
to high orders of approximation. We consider ferro- as well as
antiferromagnetic $J_\perp$. To discuss the general question of the crossover
from a purely two-dimensional (2D) to a quasi-2D and finally to a
three-dimensional system we consider the simplest model of stacked (unshifted)
kagome layers. Although the ILC of real kagome compounds is often more
sophisticated, such a geometry of the ILC can be relevant for barlowite. We
find that the spin-liquid ground state present for the strictly 2D $s=1/2$
$XXZ$ KAFM survives a finite ILC, where the spin-liquid region shrinks
monotonically with increasing anisotropy. If the ILC becomes large enough (about
15\% of intralayer coupling for the isotropic Heisenberg case and about 4\% for
the $XY$ limit) magnetic LRO can be established, where the $q=0$ symmetry is
favorable if $J_\perp$ is of moderate strength. If the strength of the ILC
further increases, $\sqrt{3}\times \sqrt{3}$ LRO can become favorable against
$q=0$ LRO.
| 0 | 1 | 0 | 0 | 0 | 0 |
The multidimensional truncated Moment Problem: Atoms, Determinacy, and Core Variety | This paper is about the moment problem on a finite-dimensional vector space
of continuous functions. We investigate the structure of the convex cone of
moment functionals (supporting hyperplanes, exposed faces, inner points) and
treat various important special topics on moment functionals (determinacy, set
of atoms of representing measures, core variety).
| 0 | 0 | 1 | 0 | 0 | 0 |
PROOF OF VALUE ALIENATION (PoVA) - a concept of a cryptocurrency issuance protocol | In this paper, we will describe a concept of a cryptocurrency issuance
protocol which supports digital currencies in a Proof-of-Work (PoW)-like
manner. However, the methods assume an alternative utilization of the assets
used for cryptocurrency creation (rather than purchasing the electricity
necessary for "mining").
| 0 | 0 | 0 | 0 | 0 | 1 |
Top-Rank Enhanced Listwise Optimization for Statistical Machine Translation | Pairwise ranking methods are the basis of many widely used discriminative
training approaches for structure prediction problems in natural language
processing (NLP). Decomposing the problem of ranking hypotheses into pairwise
comparisons enables simple and efficient solutions. However, neglecting the
global ordering of the hypothesis list may hinder learning. We propose a
listwise learning framework for structure prediction problems such as machine
translation. Our framework directly models the entire translation list's
ordering to learn parameters which may better fit the given listwise samples.
Furthermore, we propose top-rank enhanced loss functions, which are more
sensitive to ranking errors at higher positions. Experiments on a large-scale
Chinese-English translation task show that both our listwise learning framework
and top-rank enhanced listwise losses lead to significant improvements in
translation quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case | The analysis in Part I revealed interesting properties for subgradient
learning algorithms in the context of stochastic optimization when gradient
noise is present. These algorithms are used when the risk functions are
non-smooth and involve non-differentiable components. They have been long
recognized as being slow converging methods. However, it was revealed in Part I
that the rate of convergence becomes linear for stochastic optimization
problems, with the error iterate converging at an exponential rate $\alpha^i$
to within an $O(\mu)-$neighborhood of the optimizer, for some $\alpha \in
(0,1)$ and small step-size $\mu$. The conclusion was established under weaker
assumptions than the prior literature and, moreover, several important problems
(such as LASSO, SVM, and Total Variation) were shown to satisfy these weaker
assumptions automatically (but not the previously used conditions from the
literature). These results revealed that sub-gradient learning methods have
more favorable behavior than originally thought when used to enable continuous
adaptation and learning. The results of Part I were exclusive to single-agent
adaptation. The purpose of the current Part II is to examine the implications
of these discoveries when a collection of networked agents employs subgradient
learning as their cooperative mechanism. The analysis will show that, despite
the coupled dynamics that arises in a networked scenario, the agents are still
able to attain linear convergence in the stochastic case; they are also able to
reach agreement within $O(\mu)$ of the optimizer.
| 1 | 0 | 1 | 1 | 0 | 0 |
Aggregating incoherent agents who disagree | In this paper, we explore how we should aggregate the degrees of belief of of
a group of agents to give a single coherent set of degrees of belief, when at
least some of those agents might be probabilistically incoherent. There are a
number of ways of aggregating degrees of belief, and there are a number of ways
of fixing incoherent degrees of belief. When we have picked one of each, should
we aggregate first and then fix, or fix first and then aggregate? Or should we
try to do both at once? And when do these different procedures agree with one
another? In this paper, we focus particularly on the final question.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Fundamental Limits of Robust Learning | We consider the problems of robust PAC learning from distributed and
streaming data, which may contain malicious errors and outliers, and analyze
their fundamental complexity questions. In particular, we establish lower
bounds on the communication complexity for distributed robust learning
performed on multiple machines, and on the space complexity for robust learning
from streaming data on a single machine. These results demonstrate that gaining
robustness of learning algorithms is usually at the expense of increased
complexities. As far as we know, this work gives the first complexity results
for distributed and online robust PAC learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Robust Multi-view Pedestrian Tracking Using Neural Networks | In this paper, we present a real-time robust multi-view pedestrian detection
and tracking system for video surveillance using neural networks which can be
used in dynamic environments. The proposed system consists of two phases:
multi-view pedestrian detection and tracking. First, pedestrian detection
utilizes background subtraction to segment the foreground blob. An adaptive
background subtraction method, in which each pixel of the input image is
modeled as a mixture of Gaussians and an on-line approximation is used to
update the model, is applied to extract the foreground region. The Gaussian
distributions are then
evaluated to determine which are most likely to result from a background
process. This method produces a steady, real-time tracker in outdoor
environment that consistently deals with changes of lighting condition, and
long-term scene change. Second, tracking is performed in two phases:
pedestrian classification and tracking the individual subject. A sliding window
is applied on foreground binary image to select an input window which is used
for selecting the input image patches from the actual input frame. A neural
network is used for classification with PHOG features. Finally, a Kalman
filter is applied to calculate the subsequent step for tracking that aims at
finding the exact position of pedestrians in an input image. The experimental
result shows that the proposed approach yields promising performance on
multi-view pedestrian detection and tracking on different benchmark datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Hybrid Algorithm for Period Analysis from Multi-band Data with Sparse and Irregular Sampling for Arbitrary Light Curve Shapes | Ongoing and future surveys with repeat imaging in multiple bands are
producing (or will produce) time-spaced measurements of brightness, resulting
in the identification of large numbers of variable sources in the sky. A large
fraction of these are periodic variables: compilations of these are of
scientific interest for a variety of purposes. Unavoidably, the data-sets from
many such surveys not only have sparse sampling, but also have embedded
frequencies in the observing cadence that beat against the natural
periodicities of any object under investigation. Such limitations can make
period determination ambiguous and uncertain. For multi-band data sets with
asynchronous measurements in multiple pass-bands, we want to maximally utilize
the information on periodicity in a manner that is agnostic of differences in
the light curve shapes across the different channels. Given large volumes of
data, computational efficiency is also at a premium. This paper develops and
presents a computationally economic method for determining periodicity which
combines the results from two different classes of period determination
algorithms. The underlying principles are illustrated through examples. The
effectiveness of this approach for combining asynchronously sampled
measurements in multiple observables that share an underlying fundamental
frequency is also demonstrated.
| 0 | 1 | 0 | 0 | 0 | 0 |
A behavioral interpretation of belief functions | Shafer's belief functions were introduced in the seventies of the previous
century as a mathematical tool in order to model epistemic probability. One of
the reasons that they were not picked up by mainstream probability was the lack
of a behavioral interpretation. In this paper we provide such a behavioral
interpretation, and re-derive Shafer's belief functions via a betting
interpretation reminiscent of the classical Dutch Book Theorem for probability
distributions. We relate our betting interpretation of belief functions to the
existing literature.
| 0 | 0 | 1 | 0 | 0 | 0 |
Radiation Hardness of Fiber Bragg Grating Thermometers | Photonics sensing has long been valued for its tolerance to harsh
environments where traditional sensing technologies fail. As photonic
components continue to evolve and find new applications, their tolerance to
radiation is emerging as an important line of inquiry. Here we report on our
investigation of the impact of gamma-ray exposure on the temperature response
of fiber Bragg gratings. At 25 degrees C, exposures leading to an accumulated
dose of up to 600 kGy result in complex dose-dependent drift in Bragg
wavelength, significantly increasing the uncertainty in temperature
measurements obtained if appreciable dose is delivered over the measurement
interval. We note that temperature sensitivity is not severely impacted by the
integrated dose, suggesting such devices could be used to measure relative
changes in temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the inherent competition between valid and spurious inductive inferences in Boolean data | Inductive inference is the process of extracting general rules from specific
observations. This problem also arises in the analysis of biological networks,
such as genetic regulatory networks, where the interactions are complex and the
observations are incomplete. A typical task in these problems is to extract
general interaction rules as combinations of Boolean covariates that explain a
measured response variable. The inductive inference process can be considered
as an incompletely specified Boolean function synthesis problem. This
incompleteness of the problem will also generate spurious inferences, which are
a serious threat to valid inductive inference rules. Using random Boolean data
as a null model, here we attempt to measure the competition between valid and
spurious inductive inference rules from a given data set. We formulate two
greedy search algorithms, which synthesize a given Boolean response variable in
a sparse disjunctive normal form and, respectively, a sparse generalized
algebraic normal form of the variables from the observation data, and we evaluate
numerically their performance.
| 0 | 0 | 0 | 0 | 1 | 0 |
Computing Stable Models of Normal Logic Programs Without Grounding | We present a method for computing stable models of normal logic programs,
i.e., logic programs extended with negation, in the presence of predicates with
arbitrary terms. Such programs need not have a finite grounding, so traditional
methods do not apply. Our method relies on the use of a non-Herbrand universe,
as well as coinduction, constructive negation and a number of other novel
techniques. Using our method, a normal logic program with predicates can be
executed directly under the stable model semantics without requiring it to be
grounded either before or during execution and without requiring that its
variables range over a finite domain. As a result, our method is quite general
and supports the use of terms as arguments, including lists and complex data
structures. A prototype implementation and non-trivial applications have been
developed to demonstrate the feasibility of our method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Security for 4G and 5G Cellular Networks: A Survey of Existing Authentication and Privacy-preserving Schemes | This paper presents a comprehensive survey of existing authentication and
privacy-preserving schemes for 4G and 5G cellular networks. We start by
providing an overview of existing surveys that deal with 4G and 5G
communications, applications, standardization, and security. Then, we give a
classification of threat models in 4G and 5G cellular networks into four
categories: attacks against privacy, attacks against integrity,
attacks against availability, and attacks against authentication. We also
provide a classification of countermeasures into three categories:
cryptography methods, human factors, and intrusion detection
methods. The countermeasures and informal and formal security analysis
techniques used by the authentication and privacy preserving schemes are
summarized in the form of tables. Based on the categorization of the
authentication and privacy models, we classify these schemes into seven types:
handover authentication with privacy, mutual authentication with privacy, RFID
authentication with privacy, deniable authentication with privacy,
authentication with mutual anonymity, authentication and key agreement with
privacy, and three-factor authentication with privacy. In addition, we provide
a taxonomy and comparison of authentication and privacy-preserving schemes for
4G and 5G cellular networks in the form of tables. Based on the current survey,
several recommendations for further research are discussed at the end of this
paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Learning for Unsupervised Insider Threat Detection in Structured Cybersecurity Data Streams | Analysis of an organization's computer network activity is a key component of
early detection and mitigation of insider threat, a growing concern for many
organizations. Raw system logs are a prototypical example of streaming data
that can quickly scale beyond the cognitive power of a human analyst. As a
prospective filter for the human analyst, we present an online unsupervised
deep learning approach to detect anomalous network activity from system logs in
real time. Our models decompose anomaly scores into the contributions of
individual user behavior features for increased interpretability to aid
analysts reviewing potential cases of insider threat. Using the CERT Insider
Threat Dataset v6.2 and threat detection recall as our performance metric, our
novel deep and recurrent neural network models outperform Principal Component
Analysis, Support Vector Machine and Isolation Forest based anomaly detection
baselines. For our best model, the events labeled as insider threat activity in
our dataset had an average anomaly score in the 95.53rd percentile, demonstrating
our approach's potential to greatly reduce analyst workloads.
| 1 | 0 | 0 | 1 | 0 | 0 |
Singular branched covers of four-manifolds | Consider a dihedral cover $f: Y\to X$ with $X$ and $Y$ four-manifolds and $f$
branched along an oriented surface embedded in $X$ with isolated cone
singularities. We prove that only a slice knot can arise as the unique
singularity on an irregular dihedral cover $f: Y\to S^4$ if $Y$ is homotopy
equivalent to $\mathbb{CP}^2$ and construct an explicit infinite family of such
covers with $Y$ diffeomorphic to $\mathbb{CP}^2$. An obstruction to a knot
being homotopically ribbon arises in this setting, and we describe a class of
potential counter-examples to the Slice-Ribbon Conjecture.
Our tools include lifting a trisection of a singularly embedded surface in a
four-manifold $X$ to obtain a trisection of the corresponding irregular
dihedral branched cover of $X$, when such a cover exists. We also develop a
combinatorial procedure to compute, using a formula by the second author, the
contribution to the signature of the covering manifold which results from the
presence of a singularity on the branching set.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Sample Complexity Measure with Applications to Learning Optimal Auctions | We introduce a new sample complexity measure, which we refer to as
split-sample growth rate. For any hypothesis $H$ and for any sample $S$ of size
$m$, the split-sample growth rate $\hat{\tau}_H(m)$ counts how many different
hypotheses can empirical risk minimization output on any sub-sample of $S$ of
size $m/2$. We show that the expected generalization error is upper bounded by
$O\left(\sqrt{\frac{\log(\hat{\tau}_H(2m))}{m}}\right)$. Our result is enabled
by a strengthening of the Rademacher complexity analysis of the expected
generalization error. We show that this sample complexity measure greatly
simplifies the analysis of the sample complexity of optimal auction design, for
many auction classes studied in the literature. Their sample complexity can be
derived solely by noticing that in these auction classes, ERM on any sample or
sub-sample will pick parameters that are equal to one of the points in the
sample.
| 1 | 0 | 1 | 1 | 0 | 0 |
MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network | The inability to interpret the model prediction in semantically and visually
meaningful ways is a well-known shortcoming of most existing computer-aided
diagnosis methods. In this paper, we propose MDNet to establish a direct
multimodal mapping between medical images and diagnostic reports that can read
images, generate diagnostic reports, retrieve images by symptom descriptions,
and visualize attention, to provide justifications of the network diagnosis
process. MDNet includes an image model and a language model. The image model is
proposed to enhance multi-scale feature ensembles and utilization efficiency.
The language model, integrated with our improved attention mechanism, aims to
read and explore discriminative image feature descriptions from reports to
learn a direct mapping from sentence words to image pixels. The overall network
is trained end-to-end by using our developed optimization strategy. Based on a
pathology bladder cancer image and diagnostic report (BCIDR) dataset, we
conduct sufficient experiments to demonstrate that MDNet outperforms
comparative baselines. The proposed image model obtains state-of-the-art
performance on two CIFAR datasets as well.
| 1 | 0 | 0 | 0 | 0 | 0 |
I0 and rank-into-rank axioms | Just a survey on I0: The basics, some things known but never published, some
things published but not known.
| 0 | 0 | 1 | 0 | 0 | 0 |
Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) | Given the potential X-ray radiation risk to the patient, low-dose CT has
attracted considerable interest in the medical imaging field. The current
mainstream low-dose CT methods include vendor-specific sinogram domain
filtration and iterative reconstruction, but they need to access original raw
data whose formats are not transparent to most users. Due to the difficulty of
modeling the statistical characteristics in the image domain, the existing
methods for directly processing reconstructed images cannot eliminate image
noise very well while keeping structural details. Inspired by the idea of deep
learning, here we combine the autoencoder, the deconvolution network, and
shortcut connections into the residual encoder-decoder convolutional neural
network (RED-CNN) for low-dose CT imaging. After patch-based training, the
proposed RED-CNN achieves competitive performance relative to
state-of-the-art methods in both simulated and clinical cases. In particular, our
method has been favorably evaluated in terms of noise suppression, structural
preservation and lesion detection.
| 1 | 1 | 0 | 0 | 0 | 0 |
Primes In Arithmetic Progressions And Primitive Roots | Let $x\geq 1$ be a large number, and let $1 \leq a <q $ be integers such that
$\gcd(a,q)=1$ and $q=O(\log^c x)$ with $c>0$ constant. This note proves that the
counting function for the number of primes $p \in \{p=qn+a: n \geq1 \}$ with a
fixed primitive root $u\ne \pm 1, v^2$ has the asymptotic formula
$\pi_u(x,q,a)=\delta(u,q,a)x/ \log x +O(x/\log^b x),$ where $\delta(u,q,a)>0$
is the density, and $b>c+1$ is a constant.
| 0 | 0 | 1 | 0 | 0 | 0 |
Photometric redshift estimation via deep learning | The need to analyze the available large synoptic multi-band surveys drives
the development of new data-analysis methods. Photometric redshift estimation
is one field of application where such new methods have improved the results
substantially. Up to now, the vast majority of applied redshift estimation
methods have utilized photometric features. We aim to develop a method to
derive probabilistic photometric redshift directly from multi-band imaging
data, rendering pre-classification of objects and feature extraction obsolete.
A modified version of a deep convolutional network was combined with a mixture
density network. The estimates are expressed as Gaussian mixture models
representing the probability density functions (PDFs) in the redshift space. In
addition to the traditional scores, the continuous ranked probability score
(CRPS) and the probability integral transform (PIT) were applied as performance
criteria. We have adopted a feature based random forest and a plain mixture
density network to compare performances on experiments with data from SDSS
(DR9). We show that the proposed method is able to predict redshift PDFs
independently from the type of source, for example galaxies, quasars or stars.
Thereby the prediction performance is better than both presented reference
methods and is comparable to results from the literature. The presented method
is extremely general and allows us to solve any kind of probabilistic
regression problem based on imaging data, for example estimating the metallicity
or star formation rate of galaxies. This kind of methodology is tremendously
important for the next generation of surveys.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effect of antipsychotics on community structure in functional brain networks | Schizophrenia, a mental disorder that is characterized by abnormal social
behavior and failure to distinguish one's own thoughts and ideas from reality,
has been associated with structural abnormalities in the architecture of
functional brain networks. Using various methods from network analysis, we
examine the effect of two classical therapeutic antipsychotics --- Aripiprazole
and Sulpiride --- on the structure of functional brain networks of healthy
controls and patients who have been diagnosed with schizophrenia. We compare
the community structures of functional brain networks of different individuals
using mesoscopic response functions, which measure how community structure
changes across different scales of a network. We are able to do a reasonably
good job of distinguishing patients from controls, and we are most successful
at this task on people who have been treated with Aripiprazole. We demonstrate
that this increased separation between patients and controls is related only to
a change in the control group, as the functional brain networks of the patient
group appear to be predominantly unaffected by this drug. This suggests that
Aripiprazole has a significant and measurable effect on community structure in
healthy individuals but not in individuals who are diagnosed with
schizophrenia. In contrast, we find for individuals who are given the drug
Sulpiride that it is more difficult to separate the networks of patients from
those of controls. Overall, we observe differences in the effects of the drugs
(and a placebo) on community structure in patients and controls and also that
this effect differs across groups. We thereby demonstrate that different types
of antipsychotic drugs selectively affect mesoscale structures of brain
networks, providing support that mesoscale structures such as communities are
meaningful functional units in the brain.
| 0 | 0 | 0 | 1 | 1 | 0 |
Unsupervised Learning of Mixture Models with a Uniform Background Component | Gaussian Mixture Models are one of the most studied and mature models in
unsupervised learning. However, outliers are often present in the data and
could influence the cluster estimation. In this paper, we study a new model
that assumes that data comes from a mixture of a number of Gaussians as well as
a uniform "background" component assumed to contain outliers and other
non-interesting observations. We develop a novel method based on robust loss
minimization that performs well in clustering such GMM with a uniform
background. We give theoretical guarantees for our clustering algorithm to
obtain best clustering results with high probability. Besides, we show that the
result of our algorithm does not depend on initialization or local optima, and
the parameter tuning is an easy task. By numerical simulations, we demonstrate
that our algorithm enjoys high accuracy and achieves the best clustering
results given a large enough sample size. Finally, experimental comparisons
with typical clustering methods on real datasets confirm the potential of our
algorithm in real applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
Sound emitted by some grassland animals as an indicator of motion in the surroundings | It is argued based on the results of both numerical modelling and the
experiments performed on an artificial substitute of a meadow that the sound
emitted by animals living in a dense surrounding such as a meadow or shrubs can
be used as a tool for detection of motion. Some characteristics of the sound
emitted by these animals, e.g. its frequency, seem to be adjusted to the meadow
density to optimize the effectiveness of this skill. This kind of sensing the
environment could be used as a useful tool improving detection of mates or
predators. A study thereof would be important both from the basic-knowledge and
ecological points of view (unnatural environmental changes, like an increase in
noise or changes in plant species composition, can make this sensing
ineffective).
| 0 | 1 | 0 | 0 | 0 | 0 |
Monochromaticity of coherent Smith-Purcell radiation from finite size grating | Investigation of coherent Smith-Purcell Radiation (SPR) spectral
characteristics was performed both experimentally and by numerical simulation.
The measurement of SPR spectral line shapes of different diffraction orders was
carried out at KEK LUCX facility. A pair of room-temperature Schottky barrier
diode (SBD) detectors with sensitivity bands of $60-90$~GHz and $320-460$~GHz
was used in the measurements. Reasonable agreement of experimental results and
simulations performed with CST Studio Suite justifies the use of different
narrow-band SBD detectors to investigate different SPR diffraction orders. It
was shown that monochromaticity of the SPR spectral lines increases with
diffraction order. The comparison of coherent transition radiation and coherent
SPR intensities in sub-THz frequency range showed that the brightnesses of both
radiation mechanisms were comparable. The feasibility of fine tuning the SPR
spectral lines is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Search for water vapor in the high-resolution transmission spectrum of HD189733b in the visible | Ground-based telescopes equipped with state-of-the-art spectrographs are able
to obtain high-resolution transmission and emission spectra of exoplanets that
probe the structure and composition of their atmospheres. Various atomic and
molecular species, such as Na, CO, H2O have been already detected. Molecular
species have been observed only in the near-infrared while atomic species have
been observed in the visible. In particular, the detection and abundance
determination of water vapor bring important constraints to the planet
formation process. We search for water vapor in the atmosphere of the exoplanet
HD189733b using a high-resolution transmission spectrum in the visible obtained
with HARPS. We use Molecfit to correct for telluric absorption features. Then
we compute the high-resolution transmission spectrum of the planet using 3
transit datasets. We finally search for water vapor absorption using a
cross-correlation technique that combines the signal of 800 individual lines.
Telluric features are corrected to the noise level. We place a 5-sigma upper
limit of 100 ppm on the strength of the 6500 A water vapor band. The 1-sigma
precision of 20 ppm on the transmission spectrum demonstrates that space-like
sensitivity can be achieved from the ground. This approach opens new
perspectives to detect various atomic and molecular species with future
instruments such as ESPRESSO at the VLT. Extrapolating from our results, we
show that only 1 transit with ESPRESSO would be sufficient to detect water
vapor on an HD189733b-like hot Jupiter with a cloud-free atmosphere. Upcoming
near-IR spectrographs will be even more efficient and sensitive to a wider
range of molecular species. Moreover, the detection of the same molecular
species in different bands (e.g. visible and IR) is key to constrain the
structure and composition of the atmosphere, such as the presence of Rayleigh
scattering or aerosols.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Capacity of Some Classes of Polyhedra | K. Borsuk in 1979, in the Topological Conference in Moscow, introduced the
concept of the capacity of a compactum. In this paper, we compute the capacity
of the product of two spheres of the same or different dimensions and the
capacity of lens spaces. Also, we present an upper bound for the capacity of a
$\mathbb{Z}_n$-complex, i.e., a connected finite 2-dimensional CW-complex with
finite cyclic fundamental group $\mathbb{Z}_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Network Transplanting (extended abstract) | This paper focuses on a new task, i.e., transplanting a
category-and-task-specific neural network to a generic, modular network without
strong supervision. We design a functionally interpretable structure for the
generic network. Like building LEGO blocks, we teach the generic network a new
category by directly transplanting the module corresponding to the category
from a pre-trained network using few or even no sample annotations. Our
method incrementally adds new categories to the generic network but does not
affect representations of existing categories. In this way, our method breaks
the typical bottleneck of learning a net for massive tasks and categories,
i.e., the requirement of collecting samples for all tasks and categories at the
same time before the learning begins. To this end, we use a new distillation
algorithm, namely back-distillation, to overcome specific challenges of network
transplanting. Our method without training samples even outperformed the
baseline with 100 training samples.
| 1 | 0 | 0 | 1 | 0 | 0 |
Extracting Build Changes with BUILDDIFF | Build systems are an essential part of modern software engineering projects.
As software projects change continuously, it is crucial to understand how the
build system changes because neglecting its maintenance can lead to expensive
build breakage. Recent studies have investigated the (co-)evolution of build
configurations and reasons for build breakage, but they did this only on a
coarse-grained level. In this paper, we present BUILDDIFF, an approach to
extract detailed build changes from MAVEN build files and classify them into 95
change types. In a manual evaluation of 400 build-changing commits, we show
that BUILDDIFF can extract and classify build changes with an average precision
and recall of 0.96 and 0.98, respectively. We then present two studies using
the build changes extracted from 30 open source Java projects to study the
frequency and time of build changes. The results show that the top 10 most
frequent change types account for 73% of the build changes. Among them, changes
to version numbers and changes to dependencies of the projects occur most
frequently. Furthermore, our results show that build changes occur frequently
around releases. With these results, we provide the basis for further research,
such as for analyzing the (co-)evolution of build files with other artifacts or
improving effort estimation approaches. Furthermore, our detailed change
information enables improvements of refactoring approaches for build
configurations and improvements of models to identify error-prone build files.
| 1 | 0 | 0 | 0 | 0 | 0 |
Integrating car path optimization with train formation plan: a non-linear binary programming model and simulated annealing based heuristics | An essential issue that a freight transportation system faces is how to
deliver shipments (OD pairs) on a capacitated physical network optimally; that
is, to determine the best physical path for each OD pair and assign each OD
pair into the most reasonable freight train service sequence. Instead of
pre-specifying or pre-solving the railcar routing beforehand and optimizing the
train formation plan subsequently, which is a standard practice in China
railway system and a widely used method in existing literature to reduce the
problem complexity, this paper proposes a non-linear binary programming model
to address the integrated railcar itinerary and train formation plan
optimization problem. The model comprehensively considers various operational
requirements and a set of capacity constraints, including link capacity, yard
reclassification capacity and the maximal number of blocks that can be
formed at a yard, while trying to minimize the total costs of accumulation,
reclassification and transportation. An efficient simulated annealing based
heuristic solution approach is developed to solve the mathematical model. To
tackle the difficult capacity constraints, we use a penalty function method.
Furthermore, a customized heuristic for satisfying the operational
requirements is designed as well.
| 0 | 0 | 1 | 0 | 0 | 0 |
Modeling of networks and globules of charged domain walls observed in pump and pulse induced states | Experiments on optical and STM injection of carriers in layered
$\mathrm{MX_2}$ materials revealed the formation of nanoscale patterns with
networks and globules of domain walls. This is thought to be responsible for
the metallization transition of the Mott insulator and for stabilization of a
"hidden" state. In response, here we present studies of the classical charged
lattice gas model emulating the superlattice of polarons ubiquitous to the
material of choice $1T-\mathrm{TaS_2}$. The injection pulse was simulated by
introducing a small random concentration of voids whose subsequent evolution
was followed by means of Monte Carlo cooling. Below the detected phase
transition, the voids gradually coalesce into domain walls forming locally
connected globules and then the global network leading to a mosaic
fragmentation into domains with different degenerate ground states. The
obtained patterns closely resemble the experimental STM visualizations. The
surprising aggregation of charged voids is understood by fractionalization of
their charges across the walls' lines.
| 0 | 1 | 0 | 0 | 0 | 0 |
cGAN-based Manga Colorization Using a Single Training Image | The Japanese comic format known as Manga is popular all over the world. It is
traditionally produced in black and white, and colorization is time consuming
and costly. Automatic colorization methods generally rely on greyscale values,
which are not present in manga. Furthermore, due to copyright protection,
colorized manga available for training is scarce. We propose a manga
colorization method based on conditional Generative Adversarial Networks
(cGAN). Unlike previous cGAN approaches that use many hundreds or thousands of
training images, our method requires only a single colorized reference image
for training, avoiding the need for a large dataset. Colorizing manga using
cGANs can produce blurry results with artifacts, and the resolution is limited.
We therefore also propose a method of segmentation and color-correction to
mitigate these issues. The final results are sharp, clear, and in high
resolution, and stay true to the character's original color scheme.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cooperation and Environment Characterize the Low-Lying Optical Spectrum of Liquid Water | The optical spectrum of liquid water is analyzed by subsystem time-dependent
density functional theory. We provide simple explanations for several important
(and so far elusive) features. Due to the disordered environment surrounding
each water molecule, the joint density of states of the liquid is much broader
than that of the vapor. This results in a red-shifted Urbach tail. Confinement
effects provided by the first solvation shell are responsible for the blue
shift of the first absorption peak compared to the vapor. In addition, we also
characterize many-body excitonic effects. These dramatically affect the
spectral weights at low frequencies, contributing to the refractive index by a
small but significant amount.
| 0 | 1 | 0 | 0 | 0 | 0 |
Possible particle-hole instabilities in interacting type-II Weyl semimetals | Type-II Weyl semimetals, three-dimensional gapless topological phases, have
drawn enormous interest recently. These topological semimetals feature an
overtilted dispersion and Weyl nodes that separate the particle and hole
pockets. Using a perturbative renormalization group, we identify possible renormalization of the
interaction vertices, which show a tendency toward instability. We further
adopt a self-consistent mean-field approach to study possible instability of
the type II Weyl semimetals under short-range electron-electron interaction. It
is found that the instabilities are much easier to form in type II Weyl
semimetals than the type I case. Eight different mean-field orders are
identified, among which we further show that the polar charge density wave
(CDW) phase exhibits the lowest energy. This CDW order originates from the
nesting of the Fermi surfaces and could be a possible ground state in
interacting type II Weyl semimetals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Turbulent shear layers in confining channels | We present a simple model for the development of shear layers between
parallel flows in confining channels. Such flows are important across a wide
range of topics from diffusers, nozzles and ducts to urban air flow and
geophysical fluid dynamics. The model approximates the flow in the shear layer
as a linear profile separating uniform-velocity streams. Both the channel
geometry and wall drag affect the development of the flow. The model shows good
agreement with both particle-image-velocimetry experiments and computational
turbulence modelling. The low computational cost of the model allows it to be
used for design purposes, which we demonstrate by investigating optimal
pressure recovery in diffusers with non-uniform inflow.
| 0 | 1 | 0 | 0 | 0 | 0 |
Crystal and Magnetic Structures in Layered, Transition Metal Dihalides and Trihalides | Materials composed of two dimensional layers bonded to one another through
weak van der Waals interactions often exhibit strongly anisotropic behaviors
and can be cleaved into very thin specimens and sometimes into monolayer
crystals. Interest in such materials is driven by the study of low dimensional
physics and the design of functional heterostructures. Binary compounds with
the compositions MX2 and MX3 where M is a metal cation and X is a halogen anion
often form such structures. Magnetism can be incorporated by choosing a
transition metal with a partially filled d-shell for M, enabling ferroic
responses for enhanced functionality. Here a brief overview of binary
transition metal dihalides and trihalides is given, summarizing their
crystallographic properties and long-range-ordered magnetic structures,
focusing on those materials with layered crystal structures and partially
filled d-shells required for combining low dimensionality and cleavability with
magnetism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ramsey theorem for designs | We prove that for any choice of parameters $k,t,\lambda$ the class of all
finite ordered designs with parameters $k,t,\lambda$ is a Ramsey class.
| 1 | 0 | 1 | 0 | 0 | 0 |
Oracle inequalities for the stochastic differential equations | This paper is a survey of recent results on adaptive robust
nonparametric methods for the continuous-time regression model with
semimartingale noises with jumps. The noises are modeled by Lévy processes,
Ornstein--Uhlenbeck processes and semi-Markov processes. We present the
general model selection method and the sharp oracle inequality methods which
provide the robust efficient estimation in the adaptive setting. Moreover, we
present the recent results on the improved model selection methods for the
nonparametric estimation problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Empirical Risk Minimization as Parameter Choice Rule for General Linear Regularization Methods | We consider the statistical inverse problem to recover $f$ from noisy
measurements $Y = Tf + \sigma \xi$ where $\xi$ is Gaussian white noise and $T$
a compact operator between Hilbert spaces. Considering general reconstruction
methods of the form $\hat f_\alpha = q_\alpha \left(T^*T\right)T^*Y$ with an
ordered filter $q_\alpha$, we investigate the choice of the regularization
parameter $\alpha$ by minimizing an unbiased estimate of the predictive risk
$\mathbb E\left[\Vert Tf - T\hat f_\alpha\Vert^2\right]$. The corresponding
parameter $\alpha_{\mathrm{pred}}$ and its usage are well-known in the
literature, but oracle inequalities and optimality results in this general
setting are unknown. We prove a (generalized) oracle inequality, which relates
the direct risk $\mathbb E\left[\Vert f - \hat
f_{\alpha_{\mathrm{pred}}}\Vert^2\right]$ with the oracle prediction risk
$\inf_{\alpha>0}\mathbb E\left[\Vert Tf - T\hat f_{\alpha}\Vert^2\right]$. From
this oracle inequality we are then able to conclude that the investigated
parameter choice rule is of optimal order.
Finally we also present numerical simulations, which support the order
optimality of the method and the quality of the parameter choice in finite
sample situations.
| 0 | 0 | 1 | 1 | 0 | 0 |
Disturbance-to-State Stabilization and Quantized Control for Linear Hyperbolic Systems | We consider a system of linear hyperbolic PDEs where the state at one of the
boundary points is controlled using the measurements of another boundary point.
Because of the disturbances in the measurement, the problem of designing
dynamic controllers is considered so that the closed-loop system is robust with
respect to measurement errors. Assuming that the disturbance is a locally
essentially bounded measurable function of time, we derive a
disturbance-to-state estimate which provides an upper bound on the maximum norm
of the state (with respect to the spatial variable) at each time in terms of
$\mathcal{L}^\infty$-norm of the disturbance up to that time. The analysis is
based on constructing a Lyapunov function for the closed-loop system, which
leads to controller synthesis and the conditions on system dynamics required
for stability. As an application of this stability notion, the problem of
quantized control for hyperbolic PDEs is considered where the measurements sent
to the controller are communicated using a quantizer of finite length. The
presence of the quantizer yields only practical stability, and the ultimate bounds
on the norm of the state trajectory are also derived.
| 1 | 0 | 1 | 0 | 0 | 0 |
Energy dependent stereodynamics of the Ne($^3$P$_2$)+Ar reaction | The stereodynamics of the Ne($^3$P$_2$)+Ar Penning and Associative ionization
reactions have been studied using a crossed molecular beam apparatus. The
experiment uses a curved magnetic hexapole to polarise the Ne($^3$P$_2$) which
is then oriented with a shaped magnetic field in the region where it intersects
with a beam of Ar($^1$S). The ratios of Penning to associative ionization were
recorded over a range of collision energies from 320 cm$^{-1}$ to 500 cm$^{-1}$
and the data were used to obtain $\Omega$-state-dependent reactivities for the
two reaction channels. These reactivities were found to compare favourably to
those predicted in the theoretical work of Brumer et al.
| 0 | 1 | 0 | 0 | 0 | 0 |
Qubit dynamics at tunneling Fermi-edge singularity in $\it{a.c.}$ response | We consider tunneling of spinless electrons from a single-channel emitter
into an empty collector through an interacting resonant level of the quantum
dot. When all Coulomb screening of sudden charge variations of the dot during
the tunneling is realized by the emitter channel, the system is described with
an exactly solvable model of a dissipative qubit. To study manifestations of
the coherent qubit dynamics in the collector $\it{a.c.}$ response we derive
the solution of the corresponding Bloch equation for the model quantum evolution in
the presence of an oscillating voltage of frequency $\omega$ and calculate
the $\it{a.c.}$ response perturbatively in the voltage amplitude. We show
that in a wide range of the model parameters the coherent qubit dynamics
results in nonzero-frequency resonances in the amplitude dependence of
the $\it{a.c.}$ harmonics and in the jumps of the harmonics phase shifts across
the resonances. In the first order the $\it{a.c.}$ response is directly related
to the spectral decomposition of the corresponding transient current and
contains only the first $\omega$ harmonic, whose amplitude exhibits resonance
at $\omega =\omega_I $, where $\omega_I$ is the qubit oscillation frequency. In
the second order we have obtained the $2 \omega$ harmonic of the $\it{a.c.}$
response with resonances in the frequency dependence of its amplitude at
$\omega_I$, $\omega_I/2$ and zero frequency and also have found the frequency
dependent shift of the average steady current.
| 0 | 1 | 0 | 0 | 0 | 0 |