title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Kafnets: kernel-based non-parametric activation functions for neural networks | Neural networks are generally built by interleaving (adaptable) linear layers
with (fixed) nonlinear activation functions. To increase their flexibility,
several authors have proposed methods for adapting the activation functions
themselves, endowing them with varying degrees of flexibility. None of these
approaches, however, has gained wide acceptance in practice, and research on
this topic remains open. In this paper, we introduce a novel family of flexible
activation functions based on an inexpensive kernel expansion at every
neuron. Leveraging several properties of kernel-based models, we propose
multiple variations for designing and initializing these kernel activation
functions (KAFs), including a multidimensional scheme that nonlinearly
combines information from different paths in the network. The resulting KAFs can
approximate any mapping defined over a subset of the real line, either convex
or nonconvex. Furthermore, they are smooth over their entire domain, linear in
their parameters, and they can be regularized using any known scheme, including
the use of $\ell_1$ penalties to enforce sparseness. To the best of our
knowledge, no other known model satisfies all these properties simultaneously.
In addition, we provide a relatively complete overview of alternative
techniques for adapting the activation functions, which is currently lacking in
the literature. A large set of experiments validates our proposal.
| 1 | 0 | 0 | 1 | 0 | 0 |
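The per-neuron kernel expansion behind the KAFs in the abstract above can be illustrated with a short sketch. The Gaussian kernel, the fixed dictionary placement on a uniform grid, and the random initialization here are assumptions for illustration only, not the paper's exact design choices:

```python
import numpy as np

def kaf(s, alpha, dictionary, gamma):
    """Kernel activation function (sketch): a kernel expansion over a fixed
    dictionary of points, linear in the mixing coefficients `alpha`."""
    # Gaussian kernel between each pre-activation s_j and dictionary point d_i
    K = np.exp(-gamma * (s[:, None] - dictionary[None, :]) ** 2)
    return K @ alpha  # smooth in s, linear in the trainable alpha

# illustrative use: a 20-element dictionary sampled uniformly on [-2, 2]
rng = np.random.default_rng(0)
dictionary = np.linspace(-2.0, 2.0, 20)
alpha = rng.normal(size=20)          # trainable coefficients (random init)
y = kaf(np.array([-1.0, 0.0, 1.0]), alpha, dictionary, gamma=1.0)
print(y.shape)  # (3,)
```

Because the output is linear in `alpha`, any standard regularizer (including an $\ell_1$ penalty for sparseness) applies directly to the coefficients.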
Unsteady Propulsion by an Intermittent Swimming Gait | Inviscid computational results are presented on a self-propelled swimmer
modeled as a virtual body combined with a two-dimensional hydrofoil pitching
intermittently about its leading edge. Lighthill (1971) originally proposed
that this burst-and-coast behavior can save fish energy during swimming by
taking advantage of the viscous Bone-Lighthill boundary layer thinning
mechanism. Here, an additional inviscid Garrick mechanism is discovered that
allows swimmers to control the ratio of their added mass thrust-producing
forces to their circulatory drag-inducing forces by decreasing their duty
cycle, DC, of locomotion. This mechanism can save intermittent swimmers as much
as 60% of the energy it takes to swim continuously at the same speed. The
inviscid energy savings are shown to increase with increasing amplitude of
motion, increase with decreasing Lighthill number, Li, and switch to an
energetic cost above continuous swimming for sufficiently low DC. Intermittent
swimmers are observed to shed four vortices per cycle that form into groups
that are self-similar with the DC. In addition, previous thrust and power
scaling laws of continuous self-propelled swimming are further generalized to
include intermittent swimming. The key is that by averaging the thrust and
power coefficients over only the bursting period, the intermittent problem
can be transformed into a continuous one. Furthermore, the intermittent thrust
and power scaling relations are extended to predict the mean speed and cost of
transport of swimmers. By tuning a few coefficients with a handful of
simulations, these self-propelled relations can become predictive. In the
current study, the mean speed and cost of transport are predicted to within 3%
and 18% of their full-scale values by using these relations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling rooted in-trees by finite p-groups | The aim of this chapter is to provide an adequate graph theoretic framework
for the description of periodic bifurcations which have recently been
discovered in descendant trees of finite p-groups. The graph theoretic concepts
of rooted in-trees with weighted vertices and edges perfectly admit an abstract
formulation of the group theoretic notions of successive extensions, nuclear
rank, multifurcation, and step size. Since all graphs in this chapter are
infinite and dense, we use methods of pattern recognition and independent
component analysis to reduce the complex structure to periodically repeating
finite patterns. The method of group cohomology yields subgraph isomorphisms
required for proving the periodicity of branches along mainlines. Finally, the
mainlines are glued together with the aid of infinite limit groups whose finite
quotients form the vertices of mainlines. The skeleton of the infinite graph is
a countable union of infinite mainlines, connected by periodic bifurcations.
Each mainline is the backbone of a minimal subtree consisting of a periodically
repeating finite pattern of branches with bounded depth. A second periodicity
is caused by isomorphisms between all minimal subtrees which make up the
complete infinite graph. Only the members of the first minimal tree are
metabelian, and the bifurcations, which were unknown up to now, open the
long-desired door to non-metabelian extensions whose second derived quotients are
isomorphic to the metabelian groups. An application of this key result to
algebraic number theory solves the problem of p-class field towers of exact
length three.
| 0 | 0 | 1 | 0 | 0 | 0 |
Towards Algorithmic Typing for DOT | The Dependent Object Types (DOT) calculus formalizes key features of Scala.
The D$_{<:}$ calculus is the core of DOT. To date, presentations of D$_{<:}$
have used declarative typing and subtyping rules, as opposed to algorithmic.
Unfortunately, algorithmic typing for full D$_{<:}$ is known to be an
undecidable problem.
We explore the design space for a restricted version of D$_{<:}$ that has
decidable typechecking. Even in this simplified D$_{<:}$, algorithmic typing
and subtyping are tricky, due to the "bad bounds" problem. The Scala compiler
bypasses bad bounds at the cost of a loss in expressiveness in its type system.
Based on the approach taken in the Scala compiler, we present the Step Typing
and Step Subtyping relations for D$_{<:}$. We prove these relations sound and
decidable. They are not complete with respect to the original D$_{<: }$ rules.
| 1 | 0 | 0 | 0 | 0 | 0 |
Really? Well. Apparently Bootstrapping Improves the Performance of Sarcasm and Nastiness Classifiers for Online Dialogue | More and more of the information on the web is dialogic, from Facebook
newsfeeds, to forum conversations, to comment threads on news articles. In
contrast to traditional, monologic Natural Language Processing resources such
as news, highly social dialogue is frequent in social media, making it a
challenging context for NLP. This paper tests a bootstrapping method,
originally proposed in a monologic domain, to train classifiers to identify two
different types of subjective language in dialogue: sarcasm and nastiness. We
explore two methods of developing linguistic indicators to be used in a first
level classifier aimed at maximizing precision at the expense of recall. The
best performing classifier for the first phase achieves 54% precision and 38%
recall for sarcastic utterances. We then use general syntactic patterns from
previous work to create more general sarcasm indicators, improving precision to
62% and recall to 52%. To further test the generality of the method, we then
apply it to bootstrapping a classifier for nastiness dialogic acts. Our first
phase, using crowdsourced nasty indicators, achieves 58% precision and 49%
recall, which increases to 75% precision and 62% recall when we bootstrap over
the first level with generalized syntactic patterns.
| 1 | 0 | 0 | 0 | 0 | 0 |
A new statistical method for characterizing the atmospheres of extrasolar planets | By detecting light from extrasolar planets,we can measure their compositions
and bulk physical properties. The technologies used to make these measurements
are still in their infancy, and a lack of self-consistency suggests that
previous observations have underestimated their systemic errors.We demonstrate
a statistical method, newly applied to exoplanet characterization, which uses a
Bayesian formalism to account for underestimated errorbars. We use this method
to compare photometry of a substellar companion, GJ 758b, with custom
atmospheric models. Our method produces a probability distribution of
atmospheric model parameters including temperature, gravity, cloud model
(fsed), and chemical abundance for GJ 758b. This distribution is less sensitive
to highly variant data, and appropriately reflects a greater uncertainty on
parameter fits.
| 0 | 1 | 0 | 0 | 0 | 0 |
Prioritizing network communities | Uncovering modular structure in networks is fundamental for systems in
biology, physics, and engineering. Community detection identifies candidate
modules as hypotheses, which then need to be validated through experiments,
such as mutagenesis in a biological laboratory. Only a few communities can
typically be validated, and it is thus important to prioritize which
communities to select for downstream experimentation. Here we develop CRank, a
mathematically principled approach for prioritizing network communities. CRank
efficiently evaluates robustness and magnitude of structural features of each
community and then combines these features into the community prioritization.
CRank can be used with any community detection method. It needs only
information provided by the network structure and does not require any
additional metadata or labels. However, when available, CRank can incorporate
domain-specific information to further boost performance. Experiments on many
large networks show that CRank effectively prioritizes communities, yielding a
nearly 50-fold improvement in community prioritization.
| 0 | 0 | 0 | 1 | 1 | 0 |
New ALMA constraints on the star-forming ISM at low metallicity: A 50 pc view of the blue compact dwarf galaxy SBS0335-052 | Properties of the cold interstellar medium of low-metallicity galaxies are
not well-known due to the faintness and extremely small scale on which emission
is expected. We present deep ALMA band 6 (230GHz) observations of the nearby,
low-metallicity (12 + log(O/H) = 7.25) blue compact dwarf galaxy SBS0335-052 at
an unprecedented resolution of 0.2 arcsec (52 pc). The 12CO J=2-1 line is not
detected and we report a 3-sigma upper limit of LCO(2-1) = 3.6x10^4 K km/s
pc^2. Assuming that molecular gas is converted into stars with a given
depletion time, ranging from 0.02 to 2 Gyr, we find lower limits on the
CO-to-H2 conversion factor alpha_CO in the range 10^2-10^4 Msun pc^-2 (K
km/s)^-1. The continuum emission is detected and resolved over the two main
super star clusters. Re-analysis of the IR-radio spectral energy distribution
suggests that the mm-fluxes are not only free-free emission but are most likely
also associated with a cold dust component coincident with the position of the
brightest cluster. With standard dust properties, we estimate its mass to be as
large as 10^5 Msun. Both line and continuum results suggest the presence of a
large cold gas reservoir unseen in CO even with ALMA.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Structured Prediction Theory with Calibrated Convex Surrogate Losses | We provide novel theoretical insights on structured prediction in the context
of efficient convex surrogate loss minimization with consistency guarantees.
For any task loss, we construct a convex surrogate that can be optimized via
stochastic gradient descent and we prove tight bounds on the so-called
"calibration function" relating the excess surrogate risk to the actual risk.
In contrast to prior related work, we carefully monitor the effect of the
exponential number of classes in the learning guarantees as well as on the
optimization complexity. As an interesting consequence, we formalize the
intuition that some task losses make learning harder than others, and that the
classical 0-1 loss is ill-suited for general structured prediction.
| 1 | 0 | 0 | 1 | 0 | 0 |
Improving Sharir and Welzl's bound on crossing-free matchings through solving a stronger recurrence | Sharir and Welzl [1] derived a bound on crossing-free matchings primarily
based on solving a recurrence based on the size of the matchings. We show that
the recurrence given in Lemma 2.3 in Sharir and Welzl can be improve to
$(2n-6s)\textbf{Ma}_{m}(P)\leq\frac{68}{3}(s+2)\textbf{Ma}_{m-1}(P)$ and
$(3n-7s)\textbf{Ma}_{m}(P)\leq44.5(s+2)\textbf{Ma}_{m-1}(P)$, thereby improving
the upper bound for crossing-free matchings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computing eigenfunctions and eigenvalues of boundary value problems with the orthogonal spectral renormalization method | The spectral renormalization method was introduced in 2005 as an effective
way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii
type equations. In this paper, we introduce an orthogonal spectral
renormalization (OSR) method to compute ground and excited states (and their
respective eigenvalues) of linear and nonlinear eigenvalue problems. The
implementation of the algorithm follows four simple steps: (i) reformulate the
underlying eigenvalue problem as a fixed point equation, (ii) introduce a
renormalization factor that controls the convergence properties of the
iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to
prevent the iteration from converging to an unwanted mode; and (iv) compute the
solution sought using a fixed-point iteration. The advantages of the OSR scheme
over other known methods (such as Newton's and self-consistency) are: (i) it
allows the flexibility to choose a large variety of initial guesses without
diverging, (ii) it is easy to implement, especially in higher dimensions, and
(iii) it can easily handle problems with complex and random potentials. The OSR method
is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems
as well as linear and nonlinear non-Hermitian $\mathcal{PT}$-symmetric models.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Discontinuity Adjustment for Subdistribution Function Confidence Bands Applied to Right-Censored Competing Risks Data | The wild bootstrap is the resampling method of choice in survival analytic
applications. Theoretical justifications rely on the assumption of existing
intensity functions which is equivalent to an exclusion of ties among the event
times. However, such ties are omnipresent in practical studies. It turns out
that the wild bootstrap should only be applied in a modified manner that
corrects for altered limit variances and emerging dependencies. This again
ensures the asymptotic exactness of inferential procedures. An analogous
necessity is the use of the Greenwood-type variance estimator for Nelson-Aalen
estimators, which is particularly preferred in tied data regimes. All theoretical
arguments are transferred to bootstrapping Aalen-Johansen estimators for
cumulative incidence functions in competing risks. An extensive simulation
study as well as an application to real competing risks data of male intensive
care unit patients suffering from pneumonia illustrate the practicability of
the proposed technique.
| 0 | 0 | 1 | 1 | 0 | 0 |
Estimation of block sparsity in compressive sensing | In this paper, we consider a soft measure of block sparsity,
$k_\alpha(\mathbf{x})=\left(\lVert\mathbf{x}\rVert_{2,\alpha}/\lVert\mathbf{x}\rVert_{2,1}\right)^{\frac{\alpha}{1-\alpha}},\alpha\in[0,\infty]$
and propose a procedure to estimate it by using multivariate isotropic
symmetric $\alpha$-stable random projections without sparsity or block sparsity
assumptions. The limiting distribution of the estimator is given. Some
simulations are conducted to illustrate our theoretical results.
| 1 | 0 | 0 | 1 | 0 | 0 |
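The soft block-sparsity measure $k_\alpha$ defined in the abstract above is straightforward to evaluate directly. The sketch below assumes contiguous, equal-size blocks and is only a numerical illustration of the measure itself, not the paper's random-projection estimator:

```python
import numpy as np

def block_sparsity(x, block_size, alpha):
    """k_alpha(x) = (||x||_{2,alpha} / ||x||_{2,1})^(alpha/(1-alpha)),
    where ||x||_{2,alpha} = (sum_i ||x_i||_2^alpha)^(1/alpha) over the
    (contiguous, equal-size) blocks x_i of x."""
    norms = np.linalg.norm(x.reshape(-1, block_size), axis=1)  # per-block l2
    n_2a = np.sum(norms ** alpha) ** (1.0 / alpha)   # mixed l2,alpha norm
    n_21 = np.sum(norms)                             # mixed l2,1 norm
    return (n_2a / n_21) ** (alpha / (1.0 - alpha))

# a vector of 4 blocks of size 2, exactly 2 of them nonzero; as alpha -> 0
# the soft measure approaches the number of nonzero blocks
x = np.array([3.0, 4.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
print(round(block_sparsity(x, 2, 0.01), 2))  # 2.0
```

For $\alpha = 2$ the exponent $\alpha/(1-\alpha) = -2$, so the measure reduces to the familiar soft sparsity $(\lVert x\rVert_{2,1}/\lVert x\rVert_{2,2})^2$.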
Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP | A fundamental question in reinforcement learning is whether model-free
algorithms are sample efficient. Recently, Jin et al. \cite{jin2018q} proposed
a Q-learning algorithm with a UCB exploration policy, and proved that it has a
nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt
Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted
rewards \emph{without} accessing a generative model. We show that the
\textit{sample complexity of exploration} of our algorithm is bounded by
$\tilde{O}({\frac{SA}{\epsilon^2(1-\gamma)^7}})$. This improves the previously
best known result of $\tilde{O}({\frac{SA}{\epsilon^4(1-\gamma)^8}})$ in this
setting achieved by delayed Q-learning \cite{strehl2006pac}, and matches the
lower bound in terms of $\epsilon$ as well as $S$ and $A$ except for
logarithmic factors.
| 1 | 0 | 0 | 1 | 0 | 0 |
You Cannot Fix What You Cannot Find! An Investigation of Fault Localization Bias in Benchmarking Automated Program Repair Systems | Properly benchmarking Automated Program Repair (APR) systems should
contribute to the development and adoption of the research outputs by
practitioners. To that end, the research community must ensure that it reaches
significant milestones by reliably comparing state-of-the-art tools for a
better understanding of their strengths and weaknesses. In this work, we
identify and investigate a practical bias caused by the fault localization (FL)
step in a repair pipeline. We propose to highlight the different fault
localization configurations used in the literature, and their impact on APR
systems when applied to the Defects4J benchmark. Then, we explore the
performance variations that can be achieved by `tweaking' the FL step.
Ultimately, we aim to create new momentum for (1) full disclosure of APR
experimental procedures with respect to FL, (2) realistic expectations of
repairing bugs in Defects4J, as well as (3) reliable performance comparison
among the state-of-the-art APR systems, and against the baseline performance
results of our thoroughly assessed kPAR repair tool. Our main findings include:
(a) only a subset of Defects4J bugs can be currently localized by commonly-used
FL techniques; (b) the current practice of comparing state-of-the-art APR systems
(i.e., counting the number of fixed bugs) is potentially misleading due to the
bias of FL configurations; and (c) APR authors do not properly qualify their
performance achievement with respect to the different tuning parameters
implemented in APR systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates | Spatial understanding is a fundamental problem with wide-reaching real-world
applications. The representation of spatial knowledge is often modeled with
spatial templates, i.e., regions of acceptability of two objects under an
explicit spatial relationship (e.g., "on", "below", etc.). In contrast with
prior work that restricts spatial templates to explicit spatial prepositions
(e.g., "glass on table"), here we extend this concept to implicit spatial
language, i.e., those relationships (generally actions) for which the spatial
arrangement of the objects is only implied (e.g., "man riding
horse"). In contrast with explicit relationships, predicting spatial
arrangements from implicit spatial language requires significant common sense
spatial understanding. Here, we introduce the task of predicting spatial
templates for two objects under a relationship, which can be seen as a spatial
question-answering task with a (2D) continuous output ("where is the man w.r.t.
a horse when the man is walking the horse?"). We present two simple
neural-based models that leverage annotated images and structured text to learn
this task. The good performance of these models reveals that spatial locations
are to a large extent predictable from implicit spatial language. Crucially,
the models attain similar performance in a challenging generalized setting,
where the object-relation-object combinations (e.g., "man walking dog") have
never been seen before. Next, we go one step further by presenting the models
with unseen objects (e.g., "dog"). In this scenario, we show that leveraging
word embeddings enables the models to output accurate spatial predictions,
proving that the models acquire solid common sense spatial knowledge allowing
for such generalization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Design of Capacity Approaching Ensembles of LDPC Codes for Correlated Sources using EXIT Charts | This paper is concerned with the design of capacity approaching ensembles of
Low-Density Parity-Check (LDPC) codes for correlated sources. We consider
correlated binary sources where the data is encoded independently at each
source through a systematic LDPC encoder and sent over two independent
channels. At the receiver, an iterative joint decoder consisting of two
component LDPC decoders is considered where the encoded bits at the output of
each component decoder are used at the other decoder as the a priori
information. We first provide asymptotic performance analysis using the concept
of extrinsic information transfer (EXIT) charts. Compared to the conventional
EXIT charts devised to analyze LDPC codes for point-to-point communication, the
proposed EXIT charts have been completely modified to be able to accommodate the
systematic nature of the codes as well as the iterative behavior between the
two component decoders. Then the developed modified EXIT charts are deployed to
design ensembles for different levels of correlation. Our results show that as
the average degree of the designed ensembles grows, the thresholds corresponding
to the designed ensembles approach the capacity. In particular, for ensembles
with average degree of around 9, the gap to capacity is reduced to about 0.2dB.
Finite block length performance evaluation is also provided for the designed
ensembles to verify the asymptotic results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Evolution of Morphological and Physical Properties of Laboratory Interstellar Organic Residues with Ultraviolet Irradiation | Refractory organic compounds formed in molecular clouds are among the
building blocks of the solar system objects and could be the precursors of
organic matter found in primitive meteorites and cometary materials. However,
little is known about the evolutionary pathways of molecular cloud organics
from dense molecular clouds to planetary systems. In this study, we focus on
the evolution of the morphological and viscoelastic properties of molecular
cloud refractory organic matter. We found that the organic residue,
experimentally synthesized at about 10 K from UV-irradiated H2O-CH3OH-NH3 ice,
changed significantly in terms of its nanometer- to micrometer-scale morphology
and viscoelastic properties after UV irradiation at room temperature. The dose
of this irradiation was equivalent to that experienced after short residence in
diffuse clouds (10,000 years or less) or irradiation in outer
protoplanetary disks. The irradiated organic residues became highly porous and
more rigid and formed amorphous nanospherules. These nanospherules are
morphologically similar to organic nanoglobules observed in the least-altered
chondrites, chondritic porous interplanetary dust particles, and cometary
samples, suggesting that irradiation of refractory organics could be a possible
formation pathway for such nanoglobules. The storage modulus (elasticity) of
photo-irradiated organic residues is about 100 MPa irrespective of vibrational
frequency, a value that is lower than the storage moduli of minerals and ice.
Dust grains coated with such irradiated organics would therefore stick together
efficiently, but growth to larger grains might be suppressed due to an increase
in aggregate brittleness caused by the strong connections between grains.
| 0 | 1 | 0 | 0 | 0 | 0 |
Evaporating pure, binary and ternary droplets: thermal effects and axial symmetry breaking | The Greek aperitif Ouzo is not only famous for its specific anise-flavored
taste, but also for its ability to turn from a transparent miscible liquid to a
milky-white colored emulsion when water is added. Recently, it has been shown
that this so-called Ouzo effect, i.e. the spontaneous emulsification of oil
microdroplets, can also be triggered by the preferential evaporation of ethanol
in an evaporating sessile Ouzo drop, leading to an amazingly rich drying
process with multiple phase transitions [H. Tan et al., Proc. Natl. Acad. Sci.
USA 113(31) (2016) 8642]. Due to the enhanced evaporation near the contact
line, the nucleation of oil droplets starts at the rim which results in an oil
ring encircling the drop. Furthermore, the oil droplets are advected through
the Ouzo drop by a fast solutal Marangoni flow. In this article, we investigate
the evaporation of mixture droplets in more detail, by successively increasing
the mixture complexity from pure water over a binary water-ethanol mixture to
the ternary Ouzo mixture (water, ethanol and anise oil). In particular,
axisymmetric and full three-dimensional finite element method simulations have
been performed on these droplets to discuss thermal effects and the complicated
flow in the droplet driven by an interplay of preferential evaporation,
evaporative cooling and solutal and thermal Marangoni flow. By using image
analysis techniques and micro-PIV measurements, we are able to compare the
numerically predicted volume evolutions and velocity fields with experimental
data. The Ouzo droplet is furthermore investigated by confocal microscopy. It
is shown that the oil ring predominantly emerges due to coalescence.
| 0 | 1 | 0 | 0 | 0 | 0 |
A semianalytical approach for determining the nonclassical mechanical properties of materials | In this article, a semianalytical approach for demonstrating elastic waves
propagation in nanostructures has been presented based on the modified
couple-stress theory including acceleration gradients. Using the experimental
results and atomic simulations, the static and dynamic length scales were
calculated for several materials, zinc oxide (ZnO), silicon (Si), silicon
carbide (SiC), indium antimonide (InSb), and diamond. To evaluate the predicted
static and dynamic length scales as well as the presented model, the natural
frequencies of a beam in addition to the phase velocity and group velocity of
Si were studied and compared with the available static length scales, estimated
using strain-gradient theory without considering acceleration gradients. These
three criteria, natural frequency, phase velocity, and group velocity, show
that the presented model is dynamically stable even for larger wavevector
values. Furthermore, it is explained why the previous works, which all are
based on the strain-gradient theory without acceleration gradients, predicted
very small values for the static length scale in the longitudinal direction
rather than the static length scale in the transverse directions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gaussian Processes for Demand Unconstraining | One of the key challenges in revenue management is unconstraining demand
data. Existing state-of-the-art single-class unconstraining methods make
restrictive assumptions about the form of the underlying demand and can perform
poorly when applied to data which breaks these assumptions. In this paper, we
propose a novel unconstraining method that uses Gaussian process (GP)
regression. We develop a novel GP model by constructing and implementing a new
non-stationary covariance function for the GP which enables it to learn and
extrapolate the underlying demand trend. We show that this method can cope with
important features of realistic demand data, including nonlinear demand trends,
variations in total demand, lengthy periods of constraining, non-exponential
inter-arrival times, and discontinuities/changepoints in demand data. In all
such circumstances, our results indicate that GPs outperform existing
single-class unconstraining methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Generalization of two Bonnet's Theorems to the relative Differential Geometry of the 3-dimensional Euclidean space | This paper is devoted to the 3-dimensional relative differential geometry of
surfaces. In the Euclidean space $\mathbb{E}^3$ we consider a surface
$\varPhi \colon \vect{x} = \vect{x}(u^1,u^2)$ with position vector field $\vect{x}$,
which is relatively normalized by a relative normalization $\vect{y}(u^1,u^2)$.
A surface $\varPhi^* \colon \vect{x}^* = \vect{x}^*(u^1,u^2)$ with
position vector field $\vect{x}^* = \vect{x} + \mu \, \vect{y}$, where $\mu$ is
a real constant, is called a relatively parallel surface to $\varPhi$. Then
$\vect{y}$ is also a relative normalization of $\varPhi^*$. The aim of this
paper is to formulate and prove the relative analogues of two well-known
theorems of O.~Bonnet which concern parallel surfaces (see~\cite{oB1853}).
| 0 | 0 | 1 | 0 | 0 | 0 |
Some Aspects of Uniqueness Theory of Entire and Meromorphic Functions (Ph.D. thesis) | The subject of our thesis is the uniqueness theory of meromorphic functions
and is devoted to problems concerning the Bruck conjecture, set sharing, and
related topics. The tool we use in our discussions is the classical Nevanlinna
theory of meromorphic functions. In 1996, in order to find the relation between
an entire function and its derivative when the two share one value CM, R. Bruck
proposed a famous conjecture. Since then, the conjecture and its analogues have
been investigated by many researchers with continuous effort. In our thesis, we
obtain conclusions similar to Bruck's for two differential polynomials, which
in turn improve several existing results under different sharing environments.
A number of examples are exhibited to justify the necessity or sharpness of
some of the conditions and hypotheses used in the thesis. As a variation of
value sharing, F. Gross first introduced the idea of set sharing by proposing a
problem which later became popular as the Gross Problem. Inspired by the Gross
Problem, the study of set sharing began and later shifted towards the
characterization of the polynomial backbone of different unique range sets. In
our study, we introduce some new types of unique range sets, further explore
the anatomy of the polynomials generating these unique range sets, and connect
the Bruck conjecture with the Gross Problem.
Transfer Regression via Pairwise Similarity Regularization | Transfer learning methods address the situation where little labeled training
data from the "target" problem exists, but much training data from a related
"source" domain is available. However, the overwhelming majority of transfer
learning methods are designed for simple settings where the source and target
predictive functions are almost identical, limiting the applicability of
transfer learning methods to real world data. We propose a novel, weaker,
property of the source domain that can be transferred even when the source and
target predictive functions diverge. Our method assumes the source and target
functions share a Pairwise Similarity property, where if the source function
makes similar predictions on a pair of instances, then so will the target
function. We propose Pairwise Similarity Regularization Transfer, a flexible
graph-based regularization framework which can incorporate this modeling
assumption into standard supervised learning algorithms. We show how users can
encode domain knowledge into our regularizer in the form of spatial continuity
and pairwise "similarity constraints", and how our method can be scaled to large
data sets using the Nyström approximation. Finally, we present positive and
negative results on real and synthetic data sets and discuss when our Pairwise
Similarity transfer assumption seems to hold in practice.
| 1 | 0 | 0 | 0 | 0 | 0 |
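The Pairwise Similarity assumption described above lends itself to a graph-Laplacian penalty: pairs the source function predicts similarly are pushed to receive similar target predictions. The Gaussian weighting and function names below are illustrative assumptions, not the paper's exact regularizer:

```python
import numpy as np

def pairwise_similarity_penalty(f_target, f_source, sigma=1.0):
    """Graph-regularization sketch: weight each pair by how similarly the
    source function predicts it, then penalize target disagreement on
    heavily weighted pairs via the graph Laplacian quadratic form."""
    # similarity weights from source predictions (illustrative Gaussian kernel)
    W = np.exp(-((f_source[:, None] - f_source[None, :]) ** 2) / (2 * sigma**2))
    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian of the similarity graph
    # f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2
    return f_target @ L @ f_target

# source predicts the first two instances almost identically, so the penalty
# is dominated by the target's disagreement on that pair
f_source = np.array([0.0, 0.1, 5.0])
f_target = np.array([1.0, 3.0, -2.0])
print(round(pairwise_similarity_penalty(f_target, f_source), 2))  # 3.98
```

Adding this quadratic penalty to any standard supervised loss keeps the objective convex in the target predictions, which is what makes the framework easy to combine with existing learning algorithms.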
Star formation in a galactic outflow | Recent observations have revealed massive galactic molecular outflows that
may have physical conditions (high gas densities) required to form stars.
Indeed, several recent models predict that such massive galactic outflows may
ignite star formation within the outflow itself. This star-formation mode, in
which stars form with high radial velocities, could contribute to the
morphological evolution of galaxies, to the evolution in size and velocity
dispersion of the spheroidal component of galaxies, and would contribute to the
population of high-velocity stars, which could even escape the galaxy. Such
star formation could provide in-situ chemical enrichment of the circumgalactic
and intergalactic medium (through supernova explosions of young stars on large
orbits), and some models also predict that it may contribute substantially to
the global star formation rate observed in distant galaxies. Although there
exists observational evidence for star formation triggered by outflows or jets
into their host galaxy, as a consequence of gas compression, evidence for star
formation occurring within galactic outflows is still missing. Here we report
new spectroscopic observations that unambiguously reveal star formation
occurring in a galactic outflow at a redshift of 0.0448. The inferred star
formation rate in the outflow is larger than 15 Msun/yr. Star formation may
also be occurring in other galactic outflows, but may have been missed by
previous observations owing to the lack of adequate diagnostics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Artificial intelligence in peer review: How can evolutionary computation support journal editors? | With the volume of manuscripts submitted for publication growing every year,
the deficiencies of peer review (e.g. long review times) are becoming more
apparent. Editorial strategies, sets of guidelines designed to speed up the
process and reduce editors' workloads, are treated as trade secrets by
publishing houses and are not shared publicly. To improve the effectiveness of
their strategies, editors in small publishing groups are faced with undertaking
an iterative trial-and-error approach. We show that Cartesian Genetic
Programming, a nature-inspired evolutionary algorithm, can dramatically improve
editorial strategies. The artificially evolved strategy reduced the duration of
the peer review process by 30%, without increasing the pool of reviewers (in
comparison to a typical human-developed strategy). Evolutionary computation has
typically been used in technological processes or biological ecosystems. Our
results demonstrate that genetic programs can improve real-world social systems
that are usually much harder to understand and control than physical systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Invariant Bianchi type I models in $f\left(R,T\right)$ Gravity | In this paper, we search the existence of invariant solutions of Bianchi type
I space-time in the context of $f\left(R,T\right)$ gravity. The exact solution
of the Einstein's field equations are derived by using Lie point symmetry
analysis method that yield two models of invariant universe for symmetries
$X^{(1)}$ and $X^{(3)}$. The model with symmetry $X^{(1)}$ begins with a big
bang singularity, while the model with symmetry $X^{(3)}$ does not favour the
big bang singularity. Under this specification, we find a set of singular
and non-singular solutions of the Bianchi type I model which present several
other physically valid features within the framework of $f\left(R,T\right)$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Representation theoretic realization of non-symmetric Macdonald polynomials at infinity | We study the nonsymmetric Macdonald polynomials specialized at infinity from
various points of view. First, we define a family of modules of the Iwahori
algebra whose characters are equal to the nonsymmetric Macdonald polynomials
specialized at infinity. Second, we show that these modules are isomorphic to
the dual spaces of sections of certain sheaves on the semi-infinite Schubert
varieties. Third, we prove that the global versions of these modules are
homologically dual to the level one affine Demazure modules.
| 0 | 0 | 1 | 0 | 0 | 0 |
Complex spectrogram enhancement by convolutional neural network with multi-metrics learning | This paper aims to address two issues existing in the current speech
enhancement methods: 1) the difficulty of phase estimation; 2) that a single
objective function cannot consider multiple metrics simultaneously. To solve
the first problem, we propose a novel convolutional neural network (CNN) model
for complex spectrogram enhancement, namely estimating clean real and imaginary
(RI) spectrograms from noisy ones. The reconstructed RI spectrograms are
directly used to synthesize enhanced speech waveforms. In addition, since
log-power spectrogram (LPS) can be represented as a function of RI
spectrograms, its reconstruction is also considered as another target. Thus a
unified objective function, which combines these two targets (reconstruction of
RI spectrograms and LPS), is equivalent to simultaneously optimizing two
commonly used objective metrics: segmental signal-to-noise ratio (SSNR) and
log-spectral distortion (LSD). Therefore, the learning process is called
multi-metrics learning (MML). Experimental results confirm the effectiveness of
the proposed CNN with RI spectrograms and MML in terms of improved standardized
evaluation metrics on a speech enhancement task.
| 1 | 0 | 0 | 1 | 0 | 0 |
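The abstract above notes that the log-power spectrogram is a function of the real/imaginary (RI) spectrograms; a minimal sketch of that relation for one time-frequency bin (toy values, not from the paper):

```python
# The log-power spectrogram (LPS) is a function of the RI spectrograms:
# LPS = log(R^2 + I^2), so reconstructing RI also determines LPS.
import math

r, i = 0.6, -0.8            # toy RI values for one time-frequency bin
lps = math.log(r ** 2 + i ** 2)
print(lps)  # log(0.36 + 0.64) = log(1.0) = 0.0
```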
Optimal Threshold Design for Quanta Image Sensor | Quanta Image Sensor (QIS) is a binary imaging device envisioned to be the
next generation image sensor after CCD and CMOS. Equipped with a massive number
of single photon detectors, the sensor has a threshold $q$ above which the
number of arriving photons will trigger a binary response "1", or "0"
otherwise. Existing methods in the device literature typically assume that
$q=1$ uniformly. We argue that a spatially varying threshold can significantly
improve the signal-to-noise ratio of the reconstructed image. In this paper, we
present an optimal threshold design framework. We make two contributions.
First, we derive a set of oracle results to theoretically inform the maximally
achievable performance. We show that the oracle threshold should match exactly
with the underlying pixel intensity. Second, we show that around the oracle
threshold there exists a set of thresholds that give asymptotically unbiased
reconstructions. The asymptotic unbiasedness has a phase transition behavior
which allows us to develop a practical threshold update scheme using a
bisection method. Experimentally, the new threshold design method achieves
better rate of convergence than existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
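A minimal numerical sketch of the thresholding described in the QIS abstract above. The Poisson photon-arrival model with mean equal to the pixel intensity is our assumption here (standard for photon-counting sensors), not a detail quoted from the paper:

```python
# P(binary response = 1) for a single-photon detector with threshold q,
# assuming the photon count is Poisson with mean equal to the pixel intensity.
import math

def p_one(intensity: float, q: int) -> float:
    """P(Poisson(intensity) >= q): probability the pixel fires a "1"."""
    p_less = sum(math.exp(-intensity) * intensity ** k / math.factorial(k)
                 for k in range(q))
    return 1.0 - p_less

# With q = 1 a bright pixel saturates toward all-ones; a threshold matched
# to the intensity (q near the mean count) keeps the response informative.
print(p_one(5.0, 1), p_one(5.0, 5))
```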
Statistics of $K$-groups modulo $p$ for the ring of integers of a varying quadratic number field | For each odd prime $p$, we conjecture the distribution of the $p$-torsion
subgroup of $K_{2n}(\mathcal{O}_F)$ as $F$ ranges over real quadratic fields,
or over imaginary quadratic fields. We then prove that the average size of the
$3$-torsion subgroup of $K_{2n}(\mathcal{O}_F)$ is as predicted by this
conjecture.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning Models for Shared Control of Human-Machine Systems with Unknown Dynamics | We present a novel approach to shared control of human-machine systems. Our
method assumes no a priori knowledge of the system dynamics. Instead, we learn
both the dynamics and information about the user's interaction from observation
through the use of the Koopman operator. Using the learned model, we define an
optimization problem to compute the optimal policy for a given task, and
compare the user input to the optimal input. We demonstrate the efficacy of our
approach with a user study. We also analyze the individual nature of the
learned models by comparing the effectiveness of our approach when the
demonstration data comes from a user's own interactions, from the interactions
of a group of users and from a domain expert. Positive results include
statistically significant improvements on task metrics when comparing a
user-only control paradigm with our shared control paradigm. Surprising results
include findings that suggest that individualizing the model based on a user's
own data does not affect the ability to learn a useful dynamic system. We
explore this tension as it relates to developing human-in-the-loop systems
further in the discussion.
| 1 | 0 | 0 | 0 | 0 | 0 |
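The abstract above learns dynamics from observation via the Koopman operator; a minimal least-squares sketch in that spirit, using identity observables, scalar dynamics, and toy data (the paper's actual observables and optimization are not specified here):

```python
# Fit x_{t+1} ≈ a * x_t from snapshot pairs by least squares, the simplest
# instance of a data-driven (Koopman/DMD-style) linear model of dynamics.
a_true = 0.9
xs = [1.0]
for _ in range(20):
    xs.append(a_true * xs[-1])          # noise-free trajectory

pairs = list(zip(xs[:-1], xs[1:]))
a_hat = sum(x0 * x1 for x0, x1 in pairs) / sum(x0 ** 2 for x0, _ in pairs)
print(a_hat)  # recovers 0.9 on noise-free data
```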
A Feature Embedding Strategy for High-level CNN representations from Multiple ConvNets | Following the rapid growth in digital image usage, automatic image
categorization has become a preeminent research area. It has broadened to
adopt many algorithms over time, whereby multi-feature (generally,
hand-engineered features) based image characterization comes in handy to
improve accuracy. Recently, in machine learning, it has been shown that
features extracted through pre-trained deep convolutional neural networks
(DCNNs or ConvNets) can improve classification accuracy. Hence, in this
paper, we further investigate a feature embedding strategy to exploit cues
from multiple DCNNs.
We derive a generalized feature space by embedding three different DCNN
bottleneck features, weighted with respect to their Softmax cross-entropy loss.
Test outcomes on six different object classification data-sets and an action
classification data-set show that regardless of variation in image statistics
and tasks the proposed multi-DCNN bottleneck feature fusion is well suited to
image classification tasks and an effective complement to DCNNs. The
comparisons to existing fusion-based image classification approaches show that
the proposed method surpasses the state-of-the-art methods and produces
competitive results with fully trained DCNNs as well.
| 1 | 0 | 0 | 0 | 0 | 0 |
Look-Ahead in the Two-Sided Reduction to Compact Band Forms for Symmetric Eigenvalue Problems and the SVD | We address the reduction to compact band forms, via unitary similarity
transformations, for the solution of symmetric eigenvalue problems and the
computation of the singular value decomposition (SVD). Concretely, in the first
case we revisit the reduction to symmetric band form while, for the second
case, we propose a similar alternative, which transforms the original matrix to
(unsymmetric) band form, replacing the conventional reduction method that
produces a triangular--band output. In both cases, we describe algorithmic
variants of the standard Level-3 BLAS-based procedures, enhanced with
look-ahead, to overcome the performance bottleneck imposed by the panel
factorization. Furthermore, our solutions employ an algorithmic block size that
differs from the target bandwidth, illustrating the important performance
benefits of this decision. Finally, we show that our alternative compact band
form for the SVD is key to introduce an effective look-ahead strategy into the
corresponding reduction procedure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-existence of a Wente's $L^\infty$ estimate for the Neumann problem | We provide a counterexample of Wente's inequality in the context of Neumann
boundary conditions. We will also show that Wente's estimates fails for general
boundary conditions of Robin type.
| 0 | 0 | 1 | 0 | 0 | 0 |
Regular characters of classical groups over complete discrete valuation rings | Let $\mathfrak{o}$ be a complete discrete valuation ring with finite residue
field $\mathsf{k}$ of odd characteristic, and let $\mathbf{G}$ be a symplectic
or special orthogonal group scheme over $\mathfrak{o}$. For any
$\ell\in\mathbb{N}$ let $G^\ell$ denote the $\ell$-th principal congruence
subgroup of $\mathbf{G}(\mathfrak{o})$. An irreducible character of the group
$\mathbf{G}(\mathfrak{o})$ is said to be regular if it is trivial on a subgroup
$G^{\ell+1}$ for some $\ell$, and if its restriction to
$G^\ell/G^{\ell+1}\simeq \mathrm{Lie}(\mathbf{G})(\mathsf{k})$ consists of
characters of minimal $\mathbf{G}(\mathsf{k}^{\rm alg})$ stabilizer dimension.
In the present paper we consider the regular characters of such classical
groups over $\mathfrak{o}$, and construct and enumerate all regular characters
of $\mathbf{G}(\mathfrak{o})$, when the characteristic of $\mathsf{k}$ is
greater than two. As a result, we compute the regular part of their
representation zeta function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantitative analysis of nonadiabatic effects in dense H$_3$S and PH$_3$ superconductors | A comparative study of the high-pressure superconducting states of the recently
synthesized H$_3$S and PH$_3$ compounds is conducted within the framework of
strong-coupling theory. By generalizing the standard Eliashberg
equations to include the lowest-order vertex correction, we have investigated
the influence of the nonadiabatic effects on the Coulomb pseudopotential,
electron effective mass, energy gap function and on the $2\Delta(0)/T_C$ ratio.
We found that, for a fixed value of critical temperature ($178$ K for H$_3$S
and $81$ K for PH$_3$), the nonadiabatic corrections reduce the Coulomb
pseudopotential for H$_3$S from $0.204$ to $0.185$ and for PH$_3$ from $0.088$
to $0.083$; however, the electron effective mass and the ratio $2\Delta(0)/T_C$
remain unaffected. Independently of the assumed method of analysis, the
thermodynamic parameters of superconducting H$_3$S and PH$_3$ strongly deviate
from the prediction of BCS theory due to the strong-coupling and retardation
effects.
| 0 | 1 | 0 | 0 | 0 | 0 |
Radio-flaring Ultracool Dwarf Population Synthesis | Over a dozen ultracool dwarfs (UCDs), low-mass objects of spectral types
$\geq$M7, are known to be sources of radio flares. These typically
several-minutes-long radio bursts can be up to 100\% circularly polarized and
have high brightness temperatures, consistent with coherent emission via the
electron cyclotron maser operating in $\sim$kG magnetic fields. Recently, the
statistical properties of the bulk physical parameters that describe these UCDs
have become adequately described to permit synthesis of the population of
radio-flaring objects. For the first time, I construct a Monte Carlo simulator
to model the population of these radio-flaring UCDs. This simulator is powered
by Intel Secure Key (ISK), a new processor technology that uses a local
entropy source to improve random number generation and that has heretofore
been used mainly for cryptography. The results from this simulator indicate that only
$\sim$5% of radio-flaring UCDs within the local interstellar neighborhood
($<$25 pc away) have been discovered. I discuss a number of scenarios which may
explain this radio-flaring fraction, and suggest that the observed behavior is
likely a result of several factors. The performance of ISK as compared to other
pseudorandom number generators is also evaluated, and its potential utility for
other astrophysical codes briefly described.
| 0 | 1 | 0 | 0 | 0 | 0 |
Janus: An Uncertain Cache Architecture to Cope with Side Channel Attacks | Side channel attacks are a major class of attacks to crypto-systems.
Attackers collect and analyze timing behavior, I/O data, or power consumption
in these systems to undermine their effectiveness in protecting sensitive
information. In this work, we propose a new cache architecture, called Janus,
to enable crypto-systems to introduce randomization and uncertainty in their
runtime timing behavior and power utilization profile. In the proposed cache
architecture, each data block is equipped with an on-off flag to enable/disable
the data block. The Janus architecture has two special instructions in its
instruction set to support the on-off flag. Besides the analytical evaluation of
the proposed cache architecture, we deploy it in an ARM-7 processor core to
study its feasibility and practicality. Results show a significant variation in
the timing behavior across all the benchmarks. The new secure processor
architecture has minimal hardware overhead and significant improvement in
protecting against power analysis and timing behavior attacks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Predicting Adversarial Examples with High Confidence | It has been suggested that adversarial examples cause deep learning models to
make incorrect predictions with high confidence. In this work, we take the
opposite stance: an overly confident model is more likely to be vulnerable to
adversarial examples. This work is one of the most proactive approaches taken
to date, as we link robustness with non-calibrated model confidence on noisy
images, providing a data-augmentation-free path forward. The adversarial
examples phenomenon is most easily explained by the trend of increasing
non-regularized model capacity, while the diversity and number of samples in
common datasets have remained flat. Test accuracy has incorrectly been
associated with true generalization performance, ignoring that training and
test splits are often extremely similar in terms of the overall representation
space. The transferability property of adversarial examples was previously used
as evidence against overfitting arguments, a perceived random effect, but
overfitting is not always random.
| 0 | 0 | 0 | 1 | 0 | 0 |
A New Taxonomy for Symbiotic EM Sensors | It is clear that the EM spectrum is now rapidly reaching saturation,
especially for frequencies below 10~GHz. Governments, which influence the
regulatory authorities around the world, have resorted to auctioning the use of
spectrum, in a sense to gauge the importance of a particular user. Billions of
USD are being paid for modest bandwidths.
The earth observation, astronomy and similar science driven communities
cannot compete financially with such a pressure system, so this is where
governments have to step in and assess/regulate the situation.
It has been a pleasure to see a situation where the communications and
broadcast communities have come together to formulate sharing of an important
part of the spectrum (roughly, 50 MHz to 800 MHz) in an IEEE standard,
IEEE802.22. This standard (known as the "TV White Space Network" (built on
lower level standards) shows a way that fixed and mobile users can collaborate
in geographically widespread regions, using cognitive radio and geographic
databases of users. This White Space (WS) standard is well described in the
literature and is not the major topic of this short paper.
We wish to extend the idea of the WS concept to include the idea of EM
sensors (such as Radar) adopting this approach to spectrum sharing, providing a
quantum leap in access to spectrum. We postulate that networks of sensors,
using the tools developed by the WS community, can replace and enhance our
present set of EM sensors.
We first define what Networks of Sensors entail (with some history), and then
go on to define, based on a Taxonomy of Symbiosis defined by de
Bary\cite{symb}, how these sensors and other users (especially communications)
can co-exist. This new taxonomy is important for understanding, and should
replace somewhat outdated terminologies from the radar world.
| 1 | 0 | 0 | 0 | 0 | 0 |
Local-ring network automata and the impact of hyperbolic geometry in complex network link-prediction | Topological link-prediction can exploit the entire network topology (global
methods) or only the neighbourhood (local methods) of the link to predict.
Global methods are believed to be the best. Is this common belief well-founded?
The Stochastic Block Model (SBM) is a global method believed to be one of the best
link-predictors; therefore it is considered a reference for comparison. But,
our results suggest that SBM, whose computational time is high, cannot in
general overcome the Cannistraci-Hebb (CH) network automaton model, a
simple local learning rule of topological self-organization proven to be the
current best local, parameter-free deterministic rule for
link-prediction. To elucidate the reasons of this unexpected result, we
formally introduce the notion of local-ring network automata models and their
relation with the nature of common-neighbours' definition in complex network
theory. After extensive tests, we recommend Structural-Perturbation-Method
(SPM) as the new best global method baseline. However, even SPM overall does
not outperform CH and in several evaluation frameworks we astonishingly found
the opposite. In particular, CH was the best predictor for synthetic networks
generated by the Popularity-Similarity-Optimization (PSO) model, and its
performance in PSO networks with community structure was even better than using
the original internode-hyperbolic-distance as link-predictor. Interestingly,
when tested on non-hyperbolic synthetic networks the performance of CH
significantly dropped, indicating that this rule of network
self-organization could be strongly associated with the rise of hyperbolic
geometry in complex networks. The superiority of global methods seems a
"misleading belief" caused by a latent geometry bias of the few small networks
used as benchmark in previous studies. We propose to found a latent geometry
theory of link-prediction in complex networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Emission of Circularly Polarized Terahertz Wave from Inhomogeneous Intrinsic Josephson Junctions | We have theoretically demonstrated the emission of circularly-polarized
terahertz (THz) waves from intrinsic Josephson junctions (IJJs) that are
locally heated by an external heat source such as laser irradiation. We
focus on a mesa-structured IJJ whose geometry deviates slightly from a square
and find that the local heating makes it possible to emit circularly-polarized
THz waves. In this mesa, the inhomogeneity of critical current density induced
by the local heating excites the electromagnetic cavity modes TM (1,0) and TM
(0,1), whose polarizations are orthogonal to each other. The mixture of these
modes results in the generation of circularly-polarized THz waves. We also show
that the circular polarization dramatically changes with the applied voltage.
The emitter based on IJJs can emit circularly-polarized and continuum THz waves
by local heating, and will be useful for various technological applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Asymmetry-Induced Synchronization in Oscillator Networks | A scenario has recently been reported in which in order to stabilize complete
synchronization of an oscillator network---a symmetric state---the symmetry of
the system itself has to be broken by making the oscillators nonidentical. But
how often does such behavior---which we term asymmetry-induced synchronization
(AISync)---occur in oscillator networks? Here we present the first general
scheme for constructing AISync systems and demonstrate that this behavior is
the norm rather than the exception in a wide class of physical systems that can
be seen as multilayer networks. Since a symmetric network in complete synchrony
is the basic building block of cluster synchronization in more general
networks, AISync should be common also in facilitating cluster synchronization
by breaking the symmetry of the cluster subnetworks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Opportunistic Downlink Interference Alignment for Multi-Cell MIMO Networks | In this paper, we propose an opportunistic downlink interference alignment
(ODIA) for interference-limited cellular downlink, which intelligently combines
user scheduling and downlink IA techniques. The proposed ODIA not only
efficiently reduces the effect of inter-cell interference from other-cell base
stations (BSs) but also eliminates intra-cell interference among spatial
streams in the same cell. We show that the minimum number of users required to
achieve a target degrees-of-freedom (DoF) can be fundamentally reduced, i.e.,
the fundamental user scaling law can be improved by using the ODIA, compared
with the existing downlink IA schemes. In addition, we adopt a limited feedback
strategy in the ODIA framework, and then analyze the number of feedback bits
required for the system with limited feedback to achieve the same user scaling
law of the ODIA as the system with perfect CSI. We also modify the original
ODIA in order to further improve sum-rate, which achieves the optimal multiuser
diversity gain, i.e., $\log\log N$, per spatial stream even in the presence of
downlink inter-cell interference, where $N$ denotes the number of users in a
cell. Simulation results show that the ODIA significantly outperforms existing
interference management techniques in terms of sum-rate in realistic cellular
environments. Note that the ODIA operates in a non-collaborative and decoupled
manner, i.e., it requires no information exchange among BSs and no iterative
beamformer optimization between BSs and users, thus leading to an easier
implementation.
| 1 | 0 | 1 | 0 | 0 | 0 |
The Repeated Divisor Function and Possible Correlation with Highly Composite Numbers | Let $n$ be a positive integer and let $d(n)$ denote the number of positive
divisors of $n$, called the divisor function. Of course, $d(n) \leq n$, and $d(n) =
1$ if and only if $n = 1$. For $n > 2$ we have $d(n) \geq 2$, and in this paper
we try to find the smallest $k$ such that $d(d(...d(n)...)) = 2$ where the
divisor function is applied $k$ times. At the end of the paper we make a
conjecture based on some observations.
| 0 | 0 | 1 | 0 | 0 | 0 |
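A minimal sketch (illustrative, not from the paper) of the iteration described in the abstract above: apply the divisor function repeatedly until the value 2 is reached, counting the steps.

```python
def d(n: int) -> int:
    """Number of positive divisors of n (the divisor function)."""
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i == n // i else 2
        i += 1
    return count

def smallest_k(n: int) -> int:
    """Smallest k with d(d(...d(n)...)) = 2, d applied k times (n > 2)."""
    k = 0
    while n != 2:
        n = d(n)
        k += 1
    return k

# d(12)=6, d(6)=4, d(4)=3, d(3)=2, so k = 4:
print(smallest_k(12))
```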
A Language Hierarchy and Kitchens-Type Theorem for Self-Similar Groups | We generalize the notion of self-similar groups of infinite tree
automorphisms to allow for groups which are defined on a tree but do not act
faithfully on it. The elements of such a group correspond to labeled trees
which may be recognized by a tree automaton (e.g. Rabin, Büchi, etc.), or
considered as elements of a tree shift (e.g. of finite type, sofic) as in
symbolic dynamics. We give examples to show that the various classes of
self-similar groups defined in this way do not coincide. As the main result,
extending the classical result of Kitchens on one-dimensional group shifts, we
provide a sufficient condition for a self-similar group whose elements form a
sofic tree shift to be a tree shift of finite type. As an application, we show
that the closures of certain self-similar groups of tree automorphisms are not
Rabin-recognizable.
| 0 | 0 | 1 | 0 | 0 | 0 |
Distance Covariance in Metric Spaces: Non-Parametric Independence Testing in Metric Spaces (Master's thesis) | The aim of this thesis is to find a solution to the non-parametric
independence problem in separable metric spaces. Suppose we are given a finite
collection of samples from an i.i.d. sequence of paired random elements, where
each marginal has values in some separable metric space. The non-parametric
independence problem raises the question of how one can use these samples to
reasonably draw inference on whether the marginal random elements are
independent or not. We will try to answer this question by utilizing the
so-called distance covariance functional in metric spaces developed by Russell
Lyons. We show that, if the marginal spaces are so-called metric spaces of
strong negative type (e.g. separable Hilbert spaces), then the distance
covariance functional becomes a direct indicator of independence. That is, one
can directly determine whether the marginals are independent or not based
solely on the value of this functional. As the functional formally takes the
simultaneous distribution as argument, its value is not known in the posed
non-parametric independence problem. Hence, we construct estimators of the
distance covariance functional, and show that they exhibit asymptotic
properties which can be used to construct asymptotically consistent statistical
tests of independence. Finally, as the rejection thresholds of these
statistical tests are not analytically tractable, we argue that they can be reasonably
bootstrapped.
| 0 | 0 | 1 | 1 | 0 | 0 |
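A pure-Python sketch of the estimator discussed above: the biased (V-statistic) sample distance covariance of Székely, Rizzo and Bakirov, which Lyons generalized to metric spaces. Here the metric is simply Euclidean distance on the real line, and the toy data is ours, not from the thesis.

```python
import random

def dcov(x, y):
    """Biased (V-statistic) sample distance covariance."""
    n = len(x)
    def centered(z):
        # double-centered pairwise distance matrix
        d = [[abs(zi - zj) for zj in z] for zi in z]
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]
    a, b = centered(x), centered(y)
    v2 = sum(a[i][j] * b[i][j] for i in range(n) for j in range(n)) / n ** 2
    return max(v2, 0.0) ** 0.5

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100)]
z = [random.gauss(0, 1) for _ in range(100)]
# Dependence (y = 2x) yields a much larger value than independent noise:
print(dcov(x, [2 * v for v in x]), dcov(x, z))
```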
Sum-Product-Quotient Networks | We present a novel tractable generative model that extends Sum-Product
Networks (SPNs) and significantly boosts their power. We call it
Sum-Product-Quotient Networks (SPQNs), whose core concept is to incorporate
conditional distributions into the model by direct computation using quotient
nodes, e.g. $P(A|B) = \frac{P(A,B)}{P(B)}$. We provide sufficient conditions
for the tractability of SPQNs that generalize and relax the decomposable and
complete tractability conditions of SPNs. These relaxed conditions give rise to
an exponential boost to the expressive efficiency of our model, i.e. we prove
that there are distributions which SPQNs can compute efficiently but require
SPNs to be of exponential size. Thus, we narrow the gap in expressivity between
tractable graphical models and other Neural Network-based generative models.
| 1 | 0 | 0 | 1 | 0 | 0 |
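A toy illustration of the quotient-node idea from the SPQN abstract above: a conditional is computed directly as a ratio of two tractable evaluations, $P(A|B) = \frac{P(A,B)}{P(B)}$. The small joint table is ours, purely for illustration.

```python
# Quotient node: P(A=a | B=b) = P(A=a, B=b) / P(B=b),
# with both numerator and denominator evaluated tractably.
joint = {  # toy joint distribution over two binary variables
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def p_b(b: int) -> float:
    """Marginal P(B=b): a small sum over A."""
    return sum(p for (_, bb), p in joint.items() if bb == b)

def quotient_node(a: int, b: int) -> float:
    return joint[(a, b)] / p_b(b)

print(quotient_node(1, 1))  # 0.4 / 0.6
```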
When Simpler Data Does Not Imply Less Information: A Study of User Profiling Scenarios with Constrained View of Mobile HTTP(S) Traffic | The exponential growth in smartphone adoption is contributing to the
availability of vast amounts of human behavioral data. This data enables the
development of increasingly accurate data-driven user models that facilitate
the delivery of personalized services which are often free in exchange for the
use of their customers' data. Although such usage conventions have raised many
privacy concerns, the increasing value of personal data is motivating diverse
entities to aggressively collect and exploit the data. In this paper, we unfold
profiling scenarios around mobile HTTP(S) traffic, focusing on those that have
limited but meaningful segments of the data. The capability of the scenarios to
profile personal information is examined with real user data, collected
in-the-wild from 61 mobile phone users for a minimum of 30 days. Our study
attempts to model heterogeneous user traits and interests, including
personality, boredom proneness, demographics, and shopping interests. Based on
our modeling results, we discuss various implications to personalization,
privacy, and personal data rights.
| 1 | 0 | 0 | 0 | 0 | 0 |
Algebraic Description of Shape Invariance Revisited | We revisit the algebraic description of shape invariance method in
one-dimensional quantum mechanics. In this note we focus on four particular
examples: the Kepler problem in flat space, the Kepler problem in spherical
space, the Kepler problem in hyperbolic space, and the Rosen-Morse potential
problem. Following the prescription given by Gangopadhyaya et al., we first
introduce certain nonlinear algebraic systems. We then show that, if the model
parameters are appropriately quantized, the bound-state problems can be solved
solely by means of representation theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Joint Smoothing, Tracking, and Forecasting Based on Continuous-Time Target Trajectory Fitting | We present a continuous time state estimation framework that unifies
traditionally individual tasks of smoothing, tracking, and forecasting (STF),
for a class of targets subject to smooth motion processes, e.g., the target
moves with nearly constant acceleration or is affected by insignificant noise.
Fundamentally different from the conventional Markov transition formulation,
the state process is modeled by a continuous trajectory function of time (FoT)
and the STF problem is formulated as an online data fitting problem with the
goal of finding the trajectory FoT that best fits the observations in a sliding
time-window. Then, the state of the target, whether the past (namely,
smoothing), the current (filtering) or the near-future (forecasting), can be
inferred from the FoT. Our framework relaxes the stringent statistical modeling of
the target motion in real time, and is applicable to a broad range of real
world targets of significance such as passenger aircraft and ships which move
on scheduled, (segmented) smooth paths but little statistical knowledge is
given about their real time movement and even about the sensors. In addition,
the proposed STF framework inherits the advantages of data fitting for
accommodating arbitrary sensor revisit time, target maneuvering and missed
detection. The proposed method is compared with state-of-the-art estimators in
scenarios of either maneuvering or non-maneuvering target.
| 1 | 0 | 0 | 1 | 0 | 0 |
Parkinson's Disease Digital Biomarker Discovery with Optimized Transitions and Inferred Markov Emissions | We search for digital biomarkers from Parkinson's Disease by observing
approximate repetitive patterns matching hypothesized step and stride periodic
cycles. These observations were modeled as a cycle of hidden states with
randomness allowing deviation from a canonical pattern of transitions and
emissions, under the hypothesis that the averaged features of hidden states
would serve to informatively characterize classes of patients/controls. We
propose a Hidden Semi-Markov Model (HSMM), a latent-state model, emitting
3D-acceleration vectors. Transitions and emissions are inferred from data. We
fit separate models per unique device and training label. Hidden Markov Models
(HMMs) force a geometric distribution of the duration spent in each state
before transitioning to a new state. Instead, our HSMM allows us to specify the
distribution of state durations. This modified version is more effective because
we are interested more in each state's duration than in the sequence of distinct
states, allowing inclusion of these durations in the feature vector.
| 1 | 0 | 0 | 1 | 0 | 0 |
Bifurcation to locked fronts in two component reaction-diffusion systems | We study invasion fronts and spreading speeds in two component
reaction-diffusion systems. Using a variation of Lin's method, we construct
traveling front solutions and show the existence of a bifurcation to locked
fronts where both components invade at the same speed. Expansions of the wave
speed as a function of the diffusion constant of one species are obtained. The
bifurcation can be sub- or super-critical depending on whether the locked fronts
exist for parameter values above or below the bifurcation value. Interestingly,
in the sub-critical case numerical simulations reveal that the spreading speed
of the PDE system does not depend continuously on the coefficient of diffusion.
| 0 | 1 | 1 | 0 | 0 | 0 |
Contrastive Hebbian Learning with Random Feedback Weights | Neural networks are commonly trained to make predictions through learning
algorithms. Contrastive Hebbian learning, which is a powerful rule inspired by
gradient backpropagation, is based on Hebb's rule and the contrastive
divergence algorithm. It operates in two phases, the forward (or free) phase,
where the data are fed to the network, and a backward (or clamped) phase, where
the target signals are clamped to the output layer of the network and the
feedback signals are transformed through the transpose synaptic weight
matrices. This implies symmetries at the synaptic level, for which there is no
evidence in the brain. In this work, we propose a new variant of the algorithm,
called random contrastive Hebbian learning, which does not rely on any synaptic
weight symmetry. Instead, it uses random matrices to transform the feedback
signals during the clamped phase, and the neural dynamics are described by
first order non-linear differential equations. The algorithm is experimentally
verified by solving a Boolean logic task, classification tasks (handwritten
digits and letters), and an autoencoding task. This article also shows how the
parameters affect learning, especially the random matrices. We use
pseudospectra analysis to investigate further how the random matrices impact the
learning process. Finally, we discuss the biological plausibility of the
proposed algorithm, and how it can give rise to better computational models for
learning.
| 0 | 0 | 0 | 1 | 1 | 0 |
A model of electrical impedance tomography on peripheral nerves for a neural-prosthetic control interface | Objective: A model is presented to evaluate the viability of using electrical
impedance tomography (EIT) with a nerve cuff to record neural activity in
peripheral nerves. Approach: Established modelling approaches in neural-EIT are
expanded on to be used, for the first time, on myelinated fibres which are
abundant in mammalian peripheral nerves and transmit motor commands. Main
results: Fibre impedance models indicate activity in unmyelinated fibres can be
screened out using operating frequencies above 100 Hz. At 1 kHz and 10 mm
electrode spacing, the impedance magnitude of inactive intra-fascicle tissue
and its fractional change during neural activity are estimated to be 1,142
{\Omega}.cm and -8.8x10^-4, respectively, with a transverse current, and 328
{\Omega}.cm and -0.30, respectively, with a longitudinal current. We show that a
novel EIT drive and measurement electrode pattern which utilises longitudinal
current and longitudinal differential boundary voltage measurements could
distinguish activity in different fascicles of a three-fascicle mammalian nerve
using pseudo-experimental data synthesised to replicate real operating
conditions. Significance: The results of this study provide an estimate of the
transient change in impedance of intra-fascicle tissue during neural activity
in mammalian nerve, and present a viable EIT electrode pattern, both of which
are critical steps towards implementing EIT in a nerve cuff for
neural-prosthetic interfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Above threshold scattering about a Feshbach resonance for ultracold atoms in an optical collider | Ultracold atomic gases have realised numerous paradigms of condensed matter
physics where control over interactions has crucially been afforded by tunable
Feshbach resonances. So far, the characterisation of these Feshbach resonances
has almost exclusively relied on experiments in the threshold regime near zero
energy. Here we use a laser-based collider to probe a narrow magnetic Feshbach
resonance of rubidium above threshold. By measuring the overall atomic loss
from colliding clouds as a function of magnetic field, we track the
energy-dependent resonance position. At higher energy, our collider scheme
broadens the loss feature, making the identification of the narrow resonance
challenging. However, we observe that the collisions give rise to shifts in the
centre-of-mass positions of outgoing clouds. The shifts cross zero at the
resonance and this allows us to accurately determine its location well above
threshold. Our inferred resonance positions are in excellent agreement with
theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-iid Data | Identifying anomalous patterns in real-world data is essential for
understanding where, when, and how systems deviate from their expected
dynamics. Yet methods that separately consider the anomalousness of each
individual data point have low detection power for subtle, emerging
irregularities. Additionally, recent detection techniques based on subset
scanning make strong independence assumptions and suffer degraded performance
in correlated data. We introduce methods for identifying anomalous patterns in
non-iid data by combining Gaussian processes with a novel log-likelihood ratio
statistic and subset scanning techniques. Our approaches are powerful,
interpretable, and can integrate information across multiple data streams. We
illustrate their performance on numeric simulations and three open source
spatiotemporal datasets of opioid overdose deaths, 311 calls, and storm
reports.
| 0 | 0 | 0 | 1 | 0 | 0 |
Isolated resonances and nonlinear damping | We analyze isolated resonance curves (IRCs) in a single-degree-of-freedom
system with nonlinear damping. The adopted procedure exploits singularity
theory in conjunction with the harmonic balance method. The analysis unveils a
geometrical connection between the topology of the damping force and IRCs.
Specifically, we demonstrate that extremas and zeros of the damping force
correspond to the appearance and merging of IRCs.
| 0 | 1 | 0 | 0 | 0 | 0 |
UAV Aided Aerial-Ground IoT for Air Quality Sensing in Smart City: Architecture, Technologies and Implementation | As air pollution is becoming the largest environmental health risk, the
monitoring of air quality has drawn much attention in both theoretical studies
and practical implementations. In this article, we present a real-time,
fine-grained and power-efficient air quality monitoring system based on aerial
and ground sensing. The architecture of this system consists of four layers:
the sensing layer to collect data, the transmission layer to enable
bidirectional communications, the processing layer to analyze and process the
data, and the presentation layer to provide a graphical interface for users.
Three major techniques are investigated in our implementation: data processing,
the deployment strategy, and power control. For data processing, spatial
fitting and short-term prediction are performed to eliminate the
influences of the incomplete measurement and the latency of data uploading. The
deployment strategies of ground sensing and aerial sensing are investigated to
improve the quality of the collected data. The power control is further
considered to balance between power consumption and data accuracy. Our
implementation has been deployed in Peking University and Xidian University
since February 2018, and has collected about 100 thousand effective data
samples by June 2018.
| 1 | 0 | 0 | 0 | 0 | 0 |
Photoinduced vibronic coupling in two-level dissipative systems | The interaction of an electron system with a strong electromagnetic wave leads
to a rearrangement of both the electron and vibrational energy spectra of a
dissipative system. For instance, the optically coupled electron levels become
split under the conditions of the ac Stark effect, which gives rise to the
appearance of a nonadiabatic coupling between the electron and vibrational motions. The
nonadiabatic coupling exerts a substantial impact on the electron and phonon
dynamics and must be taken into account to determine the system wave functions.
In this paper, the vibronic coupling induced by the ac Stark effect is
considered. It is shown that the interaction between the electron states
dressed by an electromagnetic field and the forced vibrations of reservoir
oscillators, driven by the rapid change of the electron density at the
Rabi frequency, is responsible for the establishment of the photoinduced vibronic
coupling. However, if the resonance conditions for the optical phonon frequency
and the transition frequency of electrons in the dressed state basis are
satisfied, the vibronic coupling is due to the electron-phonon interaction.
Additionally, the photoinduced vibronic coupling results in the appearance of the
doubly dressed states which are formed by both the electron-photon and
electron-vibrational interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Crowd Science: Measurements, Models, and Methods | The increasing practice of engaging crowds, where organizations use IT to
connect with dispersed individuals for explicit resource creation purposes, has
precipitated the need to measure the precise processes and benefits of these
activities over myriad different implementations. In this work, we seek to
address these salient and non-trivial considerations by laying a foundation of
theory, measures, and research methods that allow us to test crowd-engagement
efficacy across organizations, industries, technologies, and geographies. To do
so, we anchor ourselves in the Theory of Crowd Capital, a generalizable
framework for studying IT-mediated crowd-engagement phenomena, and put forth an
empirical apparatus of testable measures and generalizable methods to begin to
unify the field of crowd science.
| 1 | 0 | 0 | 0 | 0 | 0 |
Procedural Content Generation via Machine Learning (PCGML) | This survey explores Procedural Content Generation via Machine Learning
(PCGML), defined as the generation of game content using machine learning
models trained on existing content. As the importance of PCG for game
development increases, researchers explore new avenues for generating
high-quality content with or without human involvement; this paper addresses
the relatively new paradigm of using machine learning (in contrast with
search-based, solver-based, and constructive methods). We focus on what is most
often considered functional game content such as platformer levels, game maps,
interactive fiction stories, and cards in collectible card games, as opposed to
cosmetic content such as sprites and sound effects. In addition to using PCG
for autonomous generation, co-creativity, mixed-initiative design, and
compression, PCGML is suited for repair, critique, and content analysis because
of its focus on modeling existing content. We discuss various data sources and
representations that affect the resulting generated content. Multiple PCGML
methods are covered, including neural networks, long short-term memory (LSTM)
networks, autoencoders, and deep convolutional networks; Markov models,
$n$-grams, and multi-dimensional Markov chains; clustering; and matrix
factorization. Finally, we discuss open problems in the application of PCGML,
including learning from small datasets, lack of training data, multi-layered
learning, style-transfer, parameter tuning, and PCG as a game mechanic.
| 1 | 0 | 0 | 0 | 0 | 0 |
Automated Vulnerability Detection in Source Code Using Deep Representation Learning | Increasing numbers of software vulnerabilities are discovered every year
whether they are reported publicly or discovered internally in proprietary
code. These vulnerabilities can pose a serious risk of exploitation and result in
system compromise, information leaks, or denial of service. We leveraged the
wealth of C and C++ open-source code available to develop a large-scale
function-level vulnerability detection system using machine learning. To
supplement existing labeled vulnerability datasets, we compiled a vast dataset
of millions of open-source functions and labeled it with carefully-selected
findings from three different static analyzers that indicate potential
exploits. The labeled dataset is available at: this https URL. Using
these datasets, we developed a fast and scalable vulnerability detection tool
based on deep feature representation learning that directly interprets lexed
source code. We evaluated our tool on code from both real software packages and
the NIST SATE IV benchmark dataset. Our results demonstrate that deep feature
representation learning on source code is a promising approach for automated
software vulnerability detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
X-Shooter study of accretion in Chamaeleon I: II. A steeper increase of accretion with stellar mass for very low mass stars? | The dependence of the mass accretion rate on the stellar properties is a key
constraint for star formation and disk evolution studies. Here we present a
study of a sample of stars in the Chamaeleon I star forming region carried out
using the VLT/X-Shooter spectrograph. The sample is nearly complete down to
M~0.1Msun for the young stars still harboring a disk in this region. We derive
the stellar and accretion parameters using a self-consistent method to fit the
broad-band flux-calibrated medium-resolution spectrum. The correlations of
the accretion luminosity with the stellar luminosity, and of the mass accretion
rate with the stellar mass, in the logarithmic plane yield slopes of 1.9 and 2.3,
respectively. These slopes and the accretion rates are consistent with previous
results in various star forming regions and with different theoretical
frameworks. However, we find that a broken power-law fit, with a steeper slope
for stellar luminosities smaller than ~0.45 Lsun and for stellar masses smaller
than ~ 0.3 Msun, is slightly preferred according to different statistical
tests, but the single power-law model is not excluded. The steeper relation for
lower mass stars can be interpreted as a faster evolution in the past for
accretion in disks around these objects, or as different accretion regimes in
different stellar mass ranges. Finally, we find two regions in the mass
accretion versus stellar mass plane that are empty of objects: one at high mass
accretion rates and low stellar masses, which is related to the steeper
dependence of the two parameters we derived. The second one is just above the
observational limits imposed by chromospheric emission. This empty region is
located at M~0.3-0.4Msun, typical masses where photoevaporation is known to be
effective, and at mass accretion rates ~10^-10 Msun/yr, a value compatible with
the one expected for photoevaporation to rapidly dissipate the inner disk.
| 0 | 1 | 0 | 0 | 0 | 0 |
A General Model for Robust Tensor Factorization with Unknown Noise | Because of the limitations of matrix factorization, such as losing spatial
structure information, the concept of low-rank tensor factorization (LRTF) has
been applied for the recovery of a low dimensional subspace from high
dimensional visual data. The low-rank tensor recovery is generally achieved by
minimizing the loss function between the observed data and the factorization
representation. The loss function is designed in various forms under different
noise distribution assumptions, like $L_1$ norm for Laplacian distribution and
$L_2$ norm for Gaussian distribution. However, they often fail to tackle real
data corrupted by noise with an unknown distribution. In this
paper, we propose a generalized weighted low-rank tensor factorization method
(GWLRTF) integrated with the idea of noise modelling. This procedure treats the
target data as high-order tensor directly and models the noise by a Mixture of
Gaussians, which is called MoG GWLRTF. The parameters in the model are
estimated under the EM framework through a newly developed algorithm of
weighted low-rank tensor factorization. We provide two versions of the
algorithm with different tensor factorization operations, i.e., CP
factorization and Tucker factorization. Extensive experiments indicate the
respective advantages of these two versions in different applications and also
demonstrate the effectiveness of MoG GWLRTF compared with other competing
methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Phase locking the spin precession in a storage ring | This letter reports the successful use of feedback from a spin polarization
measurement to the revolution frequency of a 0.97 GeV/$c$ bunched and polarized
deuteron beam in the Cooler Synchrotron (COSY) storage ring in order to control
both the precession rate ($\approx 121$ kHz) and the phase of the horizontal
polarization component. Real time synchronization with a radio frequency (rf)
solenoid made possible the rotation of the polarization out of the horizontal
plane, yielding a demonstration of the feedback method to manipulate the
polarization. In particular, the rotation rate is a sinusoidal function of
the horizontal polarization phase (relative to the rf solenoid), which was
controlled to within a one standard deviation range of $\sigma = 0.21$ rad. The
minimum possible adjustment was 3.7 mHz out of a revolution frequency of 753
kHz, which changes the precession rate by 26 mrad/s. Such a capability meets a
requirement for the use of storage rings to look for an intrinsic electric
dipole moment of charged particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Vertex Representations for Bipartite Networks | Recent years have witnessed a widespread increase of interest in network
representation learning (NRL). By far most research efforts have focused on NRL
for homogeneous networks like social networks where vertices are of the same
type, or heterogeneous networks like knowledge graphs where vertices (and/or
edges) are of different types. There has been relatively little research
dedicated to NRL for bipartite networks. Arguably, generic network embedding
methods like node2vec and LINE can also be applied to learn vertex embeddings
for bipartite networks by ignoring the vertex type information. However, these
methods are suboptimal in doing so, since real-world bipartite networks concern
the relationship between two types of entities, which usually exhibit different
properties and patterns from other types of network data. For example,
E-Commerce recommender systems need to capture the collaborative filtering
patterns between customers and products, and search engines need to consider
the matching signals between queries and webpages. This work addresses the
research gap of learning vertex representations for bipartite networks. We
present a new solution named BiNE, short for Bipartite Network Embedding, which
accounts for two special properties of bipartite networks: long-tail
distribution of vertex degrees and implicit connectivity relations between
vertices of the same type. Technically speaking, we make three contributions:
(1) We design a biased random walk generator to generate vertex sequences that
preserve the long-tail distribution of vertices; (2) We propose a new
optimization framework by simultaneously modeling the explicit relations (i.e.,
observed links) and implicit relations (i.e., unobserved but transitive links);
(3) We explore the theoretical foundations of BiNE to shed light on how it
works, proving that BiNE can be interpreted as factorizing multiple matrices.
| 1 | 0 | 0 | 1 | 0 | 0 |
Liouville integrability of conservative peakons for a modified CH equation | The modified Camassa-Holm equation (also called FORQ) is one of numerous
$cousins$ of the Camassa-Holm equation possessing non-smooth solitons
($peakons$) as special solutions. The peakon sector of solutions is not
uniquely defined: in one peakon sector (dissipative) the Sobolev $H^1$ norm is
not preserved, while in the other sector (conservative), introduced in [2], the time
evolution of peakons leaves the $H^1$ norm invariant. In this Letter, it is
shown that the conservative peakon equations of the modified Camassa-Holm
equation can be given an appropriate Poisson structure relative to which the
equations are Hamiltonian and, in fact, Liouville integrable. The latter is
proved directly by exploiting inverse spectral techniques, especially the
asymptotic analysis of solutions, developed elsewhere (in [3]).
| 0 | 1 | 1 | 0 | 0 | 0 |
State Representation Learning for Control: An Overview | Representation learning algorithms are designed to learn abstract features
that characterize data. State representation learning (SRL) focuses on a
particular kind of representation learning where learned features are in low
dimension, evolve through time, and are influenced by actions of an agent. The
representation is learned to capture the variation in the environment generated
by the agent's actions; this kind of representation is particularly suitable
for robotics and control scenarios. In particular, the low dimension
characteristic of the representation helps to overcome the curse of
dimensionality, provides easier interpretation and utilization by humans and
can help improve performance and speed in policy learning algorithms such as
reinforcement learning.
This survey aims at covering the state-of-the-art on state representation
learning in the most recent years. It reviews different SRL methods that
involve interaction with the environment, their implementations and their
applications in robotics control tasks (simulated or real). In particular, it
highlights how generic learning objectives are differently exploited in the
reviewed algorithms. Finally, it discusses evaluation methods to assess the
representation learned and summarizes current and future lines of research.
| 0 | 0 | 0 | 1 | 0 | 0 |
Tracking Emerges by Colorizing Videos | We use large amounts of unlabeled video to learn models for visual tracking
without manual human supervision. We leverage the natural temporal coherency of
color to create a model that learns to colorize gray-scale videos by copying
colors from a reference frame. Quantitative and qualitative experiments suggest
that this task causes the model to automatically learn to track visual regions.
Although the model is trained without any ground-truth labels, our method
learns to track well enough to outperform the latest methods based on optical
flow. Moreover, our results suggest that failures to track are correlated with
failures to colorize, indicating that advancing video colorization may further
improve self-supervised visual tracking.
| 1 | 0 | 0 | 0 | 0 | 0 |
Observation of topological valley transport of sound in sonic crystals | Valley pseudospin, labeling quantum states of energy extrema in momentum
space, is attracting tremendous attention [1-13] because of its potential in
constructing a new carrier of information. Compared with the non-topological bulk
valley transport realized soon after predictions [1-5], the topological valley
transport in domain walls [6-13] is extremely challenging owing to the
inter-valley scattering inevitably induced by atomic-scale imperfections, until
the recent electronic signature observed in bilayer graphene [12,13]. Here we
report the first experimental observation of topological valley transport of
sound in sonic crystals. The macroscopic nature of sonic crystals permits the
flexible and accurate design of domain walls. In addition to a direct
visualization of the valley-selective edge modes through spatial scanning of
sound field, reflection immunity is observed in sharply curved interfaces. The
topologically protected interface transport of sound, strikingly different from
that in traditional sound waveguides [14,15], may serve as the basis for designing
devices with unconventional functions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Equilibrium distributions and discrete Schur-constant models | This paper introduces Schur-constant equilibrium distribution models of
dimension n for arithmetic non-negative random variables. Such a model is
defined through the (several orders) equilibrium distributions of a univariate
survival function. First, the bivariate case is considered and analyzed in
depth, stressing the main characteristics of the Poisson case. The analysis is
then extended to the multivariate case. Several properties are derived,
including the implicit correlation and the distribution of the sum.
| 0 | 0 | 0 | 1 | 0 | 0 |
Tempered homogeneous spaces | Let $G$ be a semisimple real Lie group with finite center and $H$ a connected
closed subgroup.
We establish a geometric criterion which detects whether the representation
of $G$ in $L^2(G/H)$ is tempered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Consistency of Dirichlet Partitions | A Dirichlet $k$-partition of a domain $U \subseteq \mathbb{R}^d$ is a
collection of $k$ pairwise disjoint open subsets such that the sum of their
first Laplace-Dirichlet eigenvalues is minimal. A discrete version of Dirichlet
partitions has been posed on graphs with applications in data analysis. Both
versions admit variational formulations: solutions are characterized by
minimizers of the Dirichlet energy of mappings from $U$ into a singular space
$\Sigma_k \subseteq \mathbb{R}^k$. In this paper, we extend results of N.\
García Trillos and D.\ Slepčev to show that there exist solutions of the
continuum problem arising as limits to solutions of a sequence of discrete
problems. Specifically, a sequence of points $\{x_i\}_{i \in \mathbb{N}}$ from
$U$ is sampled i.i.d.\ with respect to a given probability measure $\nu$ on $U$
and for all $n \in \mathbb{N}$, a geometric graph $G_n$ is constructed from the
first $n$ points $x_1, x_2, \ldots, x_n$ and the pairwise distances between the
points. With probability one with respect to the choice of points $\{x_i\}_{i
\in \mathbb{N}}$, we show that as $n \to \infty$ the discrete Dirichlet
energies for functions $G_n \to \Sigma_k$ $\Gamma$-converge to (a scalar
multiple of) the continuum Dirichlet energy for functions $U \to \Sigma_k$ with
respect to a metric coming from the theory of optimal transport. This, along
with a compactness property for the aforementioned energies that we prove,
implies the convergence of minimizers. When $\nu$ is the uniform distribution,
our results also imply the statistical consistency statement that Dirichlet
partitions of geometric graphs converge to partitions of the sampled space in
the Hausdorff sense.
| 0 | 0 | 1 | 1 | 0 | 0 |
Experimental investigation of the wake behind a rotating sphere | The wake behind a sphere, rotating about an axis aligned with the streamwise
direction, has been experimentally investigated in a low-velocity water tunnel
using LIF visualizations and PIV measurements. The measurements focused on the
evolution of the flow regimes that appear depending on two control parameters,
namely the Reynolds number $Re$ and the dimensionless rotation or swirl rate
$\Omega$, which is the ratio of the maximum azimuthal velocity of the body to
the free stream velocity. In the present investigation, we cover the range of
$Re$ smaller than 400 and $\Omega$ from 0 to 4. Different wake regimes, such
as an axisymmetric flow, a low helical state, and a high helical mode, are
represented in the ($Re$, $\Omega$) parameter plane.
| 0 | 1 | 0 | 0 | 0 | 0 |
Test results of a prototype device to calibrate the Large Size Telescope camera proposed for the Cherenkov Telescope Array | A Large Size air Cherenkov Telescope (LST) prototype, proposed for the
Cherenkov Telescope Array (CTA), is under construction at the Canary Island of
La Palma (Spain) this year. The LST camera, which comprises an array of about
500 photomultipliers (PMTs), requires a precise and regular calibration over a
large dynamic range, up to $10^3$ photo-electrons (pe's), for each PMT. We
present a system built to provide the optical calibration of the camera
consisting of a pulsed laser (355 nm wavelength, 400 ps pulse width), a set of
filters to guarantee a large dynamic range of photons on the sensors, and a
diffusing sphere to uniformly spread the laser light, with flat fielding within
3%, over the camera focal plane 28 m away. The prototype of the system
developed at INFN is hermetically closed and filled with dry air to make the
system completely isolated from the external environment. In the paper we
present the results of the tests for the evaluation of the photon density at
the camera plane, the system isolation from the environment, and the shape of
the signal as detected by the PMTs. The description of the communication of the
system with the rest of the detector is also given.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extended superalgebras from twistor and Killing spinors | The basic first-order differential operators of spin geometry, namely the Dirac
operator and the twistor operator, are considered. Special types of spinors defined
from these operators such as twistor spinors and Killing spinors are discussed.
Symmetry operators of massless and massive Dirac equations are introduced and
relevant symmetry operators of twistor spinors and Killing spinors are
constructed from Killing-Yano (KY) and conformal Killing-Yano (CKY) forms in
constant curvature and Einstein manifolds. The squaring map of spinors gives KY
and CKY forms for Killing and twistor spinors respectively. They constitute a
graded Lie algebra structure in some special cases. By using the graded Lie
algebra structure of KY and CKY forms, extended Killing and conformal
superalgebras are constructed in constant curvature and Einstein manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Physics of Eccentric Binary Black Hole Mergers. A Numerical Relativity Perspective | Gravitational wave observations of eccentric binary black hole mergers will
provide unequivocal evidence for the formation of these systems through
dynamical assembly in dense stellar environments. The study of these
astrophysically motivated sources is timely in view of electromagnetic
observations, consistent with the existence of stellar mass black holes in the
globular cluster M22 and in the Galactic center, and the proven detection
capabilities of ground-based gravitational wave detectors. In order to get
insights into the physics of these objects in the dynamical, strong-field
gravity regime, we present a catalog of 89 numerical relativity waveforms that
describe binary systems of non-spinning black holes with mass-ratios $1\leq q
\leq 10$, and initial eccentricities as high as $e_0=0.18$ fifteen cycles
before merger. We use this catalog to provide landmark results regarding the
loss of energy through gravitational radiation, both for quadrupole and
higher-order waveform multipoles, and the astrophysical properties, final mass
and spin, of the post-merger black hole as a function of eccentricity and
mass-ratio. We discuss the implications of these results for gravitational wave
source modeling, and the design of algorithms to search for and identify the
complex signatures of these events in realistic detection scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Joint Computation and Communication Cooperation for Mobile Edge Computing | This paper proposes a novel joint computation and communication cooperation
approach in mobile edge computing (MEC) systems, which enables user cooperation
in both computation and communication for improving the MEC performance. In
particular, we consider a basic three-node MEC system that consists of a user
node, a helper node, and an access point (AP) node attached with an MEC server.
We focus on the user's latency-constrained computation over a finite block, and
develop a four-slot protocol for implementing the joint computation and
communication cooperation. Under this setup, we jointly optimize the
computation and communication resource allocation at both the user and the
helper, so as to minimize their total energy consumption subject to the user's
computation latency constraint. We provide the optimal solution to this
problem. Numerical results show that the proposed joint cooperation approach
significantly improves the computation capacity and the energy efficiency at
the user and helper nodes, as compared to other benchmark schemes without such
a joint design.
| 1 | 0 | 0 | 0 | 0 | 0 |
Propagating wave correlations in complex systems | We describe a novel approach for computing wave correlation functions inside
finite spatial domains driven by complex and statistical sources. By exploiting
semiclassical approximations, we provide explicit algorithms to calculate the
local mean of these correlation functions in terms of the underlying classical
dynamics. By defining appropriate ensemble averages, we show that fluctuations
about the mean can be characterised in terms of classical correlations. We give
in particular an explicit expression relating fluctuations of diagonal
contributions to those of the full wave correlation function. The methods have
a wide range of applications both in quantum mechanics and for classical wave
problems such as in vibro-acoustics and electromagnetism. We apply the methods
here to simple quantum systems, so-called quantum maps, which model the
behaviour of generic problems on Poincaré sections. Although low-dimensional,
these models exhibit a chaotic classical limit and share common characteristics
with wave propagation in complex structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cost-Effective Cache Deployment in Mobile Heterogeneous Networks | This paper investigates one of the fundamental issues in cache-enabled
heterogeneous networks (HetNets): how many cache instances should be deployed
at different base stations, in order to provide guaranteed service in a
cost-effective manner. Specifically, we consider two-tier HetNets with
hierarchical caching, where the most popular files are cached at small cell
base stations (SBSs) while the less popular ones are cached at macro base
stations (MBSs). For a given network cache deployment budget, the cache sizes
for MBSs and SBSs are optimized to maximize network capacity while satisfying
the file transmission rate requirements. As cache sizes of MBSs and SBSs affect
the traffic load distribution, inter-tier traffic steering is also employed for
load balancing. Based on stochastic geometry analysis, the optimal cache sizes
for MBSs and SBSs are obtained, which are threshold-based with respect to cache
budget in the networks constrained by SBS backhauls. Simulation results are
provided to evaluate the proposed schemes and demonstrate the applications in
cost-effective network deployment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Experimental demonstration of a Josephson magnetic memory cell with a programmable π-junction | We experimentally demonstrate the operation of a Josephson magnetic random
access memory unit cell, built with a Ni$_{80}$Fe$_{20}$/Cu/Ni pseudo spin-valve
Josephson junction with Nb electrodes and an integrated readout SQUID in a
fully planarized Nb fabrication process. We show that the parallel and
anti-parallel memory states of the spin-valve can be mapped onto a junction
equilibrium phase of either zero or pi by appropriate choice of the ferromagnet
thicknesses, and that the magnetic Josephson junction can be written to either
a zero-junction or pi-junction state by application of write fields of
approximately 5 mT. This work represents a first step towards a scalable,
dense, and power-efficient cryogenic memory for superconducting
high-performance digital computing.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Dynamics of Supermassive Black Holes in Gas-Rich, Star-Forming Galaxies: the Case for Nuclear Star Cluster Coevolution | We introduce a new model for the formation and evolution of supermassive
black holes (SMBHs) in the RAMSES code using sink particles, improving over
previous work the treatment of gas accretion and dynamical evolution. This new
model is tested against a suite of high-resolution simulations of an isolated,
gas-rich, cooling halo. We study the effect of various feedback models on the
SMBH growth and its dynamics within the galaxy.
In runs without any feedback, the SMBH is trapped within a massive bulge and
is therefore able to grow quickly, but only if the seed mass is chosen larger
than the minimum Jeans mass resolved by the simulation. We demonstrate that, in
the absence of supernovae (SN) feedback, the maximum SMBH mass is reached when
Active Galactic Nucleus (AGN) heating balances gas cooling in the nuclear
region.
When our efficient SN feedback is included, it completely prevents bulge
formation, so that massive gas clumps can perturb the SMBH orbit, and reduce
the accretion rate significantly. To overcome this issue, we propose an
observationally motivated model for the joint evolution of the SMBH and a
parent nuclear star cluster (NSC), which allows the SMBH to remain in the
nuclear region, grow fast and resist external perturbations. In this scenario,
however, SN feedback controls the gas supply and the maximum SMBH mass now
depends on the balance between AGN heating and gravity. We conclude that
SMBH/NSC co-evolution is crucial for the growth of SMBHs in high-z galaxies,
the progenitors of today's massive ellipticals.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Concave Optimization Algorithm for Matching Partially Overlapping Point Sets | Point matching refers to the process of finding spatial transformation and
correspondences between two sets of points. In this paper, we focus on the case
that there is only partial overlap between two point sets. Following the
approach of the robust point matching method, we model point matching as a
mixed linear assignment-least square problem and show that after eliminating
the transformation variable, the resulting problem of minimization with respect
to point correspondence is a concave optimization problem. Furthermore, this
problem has the property that the objective function can be converted into a
form with few nonlinear terms via a linear transformation. Based on these
properties, we employ the branch-and-bound (BnB) algorithm to optimize the
resulting problem where the dimension of the search space is small. To further
improve efficiency of the BnB algorithm where computation of the lower bound is
the bottleneck, we propose a new lower bounding scheme which has a
k-cardinality linear assignment formulation and can be efficiently solved.
Experimental results show that the proposed algorithm outperforms
state-of-the-art methods in terms of robustness to disturbances and point
matching accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Adaptive Mesh Refinement in Analog Mesh Computers | The call for efficient computer architectures has introduced a variety of
application-specific compute engines to the heterogeneous computing landscape.
One particular engine, the analog mesh computer, has been well received due to
its ability to efficiently solve partial differential equations by eliminating
the iterative stages common to numerical solvers. This article introduces an
implementation of refinement for analog mesh computers.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the local view of atmospheric available potential energy | The possibility of constructing Lorenz's concept of available potential
energy (APE) from a local principle has been known for some time, but has
received very little attention so far. Yet, the local APE framework offers the
advantage of providing a positive definite local form of potential energy,
which like kinetic energy can be transported, converted, and created/dissipated
locally. In contrast to Lorenz's definition, which relies on the exact from of
potential energy, the local APE theory uses the particular form of potential
energy appropriate to the approximations considered. In this paper, this idea
is illustrated for the dry hydrostatic primitive equations, whose relevant form
of potential energy is the specific enthalpy. The local APE density is
non-quadratic in general, but can nevertheless be partitioned exactly into mean
and eddy components regardless of the Reynolds averaging operator used.
This paper introduces a new form of the local APE that is easily computable
from atmospheric datasets. The advantages of using the local APE over the
classical Lorenz APE are highlighted. The paper also presents the first
calculation of the three-dimensional local APE in observation-based atmospheric
data. Finally, it illustrates how the eddy and mean components of the local APE
can be used to study regional and temporal variability in the large-scale
circulation. It is revealed that advection from high latitudes is necessary to
supply APE into the storm track regions, and that Greenland and the Ross Sea,
which have suffered rapid land-ice and sea-ice loss in recent decades, are
particularly susceptible to APE variability.
| 0 | 1 | 0 | 0 | 0 | 0 |
GdRh$_2$Si$_2$: An exemplary tetragonal system for antiferromagnetic order with weak in-plane anisotropy | The anisotropy of magnetic properties commonly is introduced in textbooks
using the case of an antiferromagnetic system with Ising type anisotropy. This
model presents a strongly anisotropic magnetization and a pronounced
metamagnetic transition, and is well known and well documented both in
experiment and in theory. In contrast, the case of an antiferromagnetic
$X$-$Y$ system with weak
in-plane anisotropy is only poorly documented. We studied the anisotropic
magnetization of the compound GdRh$_2$Si$_2$ and found that it is a perfect
model system for such a weak-anisotropy setting because the Gd$^{3+}$ ions in
GdRh$_2$Si$_2$ have a pure spin moment of S=7/2 which orders in a simple AFM
structure with ${\bf Q} = (001)$. We observed experimentally in $M(B)$ a
continuous spin-flop transition and domain effects for field applied along the
$[100]$- and the $[110]$-direction, respectively. We applied a mean field model
for the free energy to describe our data and combined it with an Ising chain
model to account for domain effects. Our calculations reproduce the
experimental data very well. In addition, we performed magnetic X-ray
scattering and X-ray magnetic circular dichroism measurements, which confirm
the AFM propagation vector to be ${\bf Q} = (001)$ and indicate the absence of
polarization on the rhodium atoms.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Kullback-Leibler Divergence-based Distributionally Robust Optimization Model for Heat Pump Day-ahead Operational Schedule in Distribution Networks | Owing to its high coefficient of performance and zero local emissions, the heat
pump (HP) has recently become popular in North Europe and China. However, the
integration of HPs may aggravate the daily peak-valley gap in distribution
networks significantly.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning | Developing a safe and efficient collision avoidance policy for multiple
robots is challenging in decentralized scenarios where each robot generates
its paths without observing other robots' states and intents. While other
distributed multi-robot collision avoidance systems exist, they often require
extracting agent-level features to plan a local collision-free action, which
can be computationally prohibitive and not robust. More importantly, in
practice the performance of these methods is much lower than that of their centralized
counterparts.
We present a decentralized sensor-level collision avoidance policy for
multi-robot systems, which directly maps raw sensor measurements to an agent's
steering commands in terms of movement velocity. As a first step toward
reducing the performance gap between decentralized and centralized methods, we
present a multi-scenario multi-stage training framework to find an optimal
policy which is trained over a large number of robots on rich, complex
environments simultaneously using a policy gradient based reinforcement
learning algorithm. We validate the learned sensor-level collision avoidance
policy in a variety of simulated scenarios with thorough performance
evaluations and show that the final learned policy is able to find time
efficient, collision-free paths for a large-scale robot system. We also
demonstrate that the learned policy can be well generalized to new scenarios
that do not appear in the entire training period, including navigating a
heterogeneous group of robots and a large-scale scenario with 100 robots.
Videos are available at this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Metropolis-Hastings Algorithms for Estimating Betweenness Centrality in Large Networks | Betweenness centrality is an important index widely used in different domains
such as social networks, traffic networks and the world wide web. However, even
for mid-size networks that have only a few hundred thousand vertices, it is
computationally expensive to compute exact betweenness scores. Therefore in
recent years, several approximate algorithms have been developed. In this
paper, first given a network $G$ and a vertex $r \in V(G)$, we propose a
Metropolis-Hastings MCMC algorithm that samples from the space $V(G)$ and
estimates betweenness score of $r$. The stationary distribution of our MCMC
sampler is the optimal sampling proposed for betweenness centrality estimation.
We show that our MCMC sampler provides an $(\epsilon,\delta)$-approximation,
where the number of required samples depends on the position of $r$ in $G$ and
in many cases, it is a constant. Then, given a network $G$ and a set $R \subset
V(G)$, we present a Metropolis-Hastings MCMC sampler that samples from the
joint space $R$ and $V(G)$ and estimates relative betweenness scores of the
vertices in $R$. We show that for any pair $r_i, r_j \in R$, the ratio of the
expected values of the estimated relative betweenness scores of $r_i$ and $r_j$
with respect to each other is equal to the ratio of their betweenness scores. We
also show that our joint-space MCMC sampler provides an
$(\epsilon,\delta)$-approximation of the relative betweenness score of $r_i$
with respect to $r_j$, where the number of required samples depends on the position
of $r_j$ in $G$ and in many cases, it is a constant.
| 1 | 0 | 0 | 0 | 0 | 0 |
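The abstract above builds on pair-sampling estimators of betweenness. As an illustrative sketch of that underlying idea (a plain uniform-pair Monte Carlo estimator, not the paper's Metropolis-Hastings sampler; the graph encoding and function names are assumptions for this example):

```python
import random
from collections import deque

def bfs_counts(adj, s):
    """Single-source BFS: shortest-path distances and path counts from s."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
    return dist, sigma

def pair_dependency(adj, s, t, r):
    """Fraction of shortest s-t paths that pass through r."""
    dist_s, sig_s = bfs_counts(adj, s)
    dist_r, sig_r = bfs_counts(adj, r)
    if t not in dist_s or r not in dist_s or t not in dist_r:
        return 0.0
    if dist_s[r] + dist_r[t] != dist_s[t]:  # r is not on any shortest s-t path
        return 0.0
    return sig_s[r] * sig_r[t] / sig_s[t]

def estimate_betweenness(adj, r, n_samples, seed=0):
    """Monte Carlo betweenness estimate for r from uniformly sampled pairs."""
    rng = random.Random(seed)
    others = [v for v in adj if v != r]
    pairs = [(s, t) for i, s in enumerate(others) for t in others[i + 1:]]
    total = sum(pair_dependency(adj, *rng.choice(pairs), r) for _ in range(n_samples))
    return total / n_samples * len(pairs)
```

The paper's contribution is, roughly, to replace the uniform pair choice here with an MCMC walk whose stationary distribution concentrates samples where they are most informative.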
Learning Flexible and Reusable Locomotion Primitives for a Microrobot | The design of gaits for robot locomotion can be a daunting process which
requires significant expert knowledge and engineering. This process is even
more challenging for robots that do not have an accurate physical model, such
as compliant or micro-scale robots. Data-driven gait optimization provides an
automated alternative to analytical gait design. In this paper, we propose a
novel approach to efficiently learn a wide range of locomotion tasks with
walking robots. This approach formalizes locomotion as a contextual policy
search task to collect data, and subsequently uses that data to learn
multi-objective locomotion primitives that can be used for planning. As a
proof-of-concept we consider a simulated hexapod modeled after a recently
developed microrobot, and we thoroughly evaluate the performance of this
microrobot on different tasks and gaits. Our results validate the proposed
controller and learning scheme on single and multi-objective locomotion tasks.
Moreover, the experimental simulations show that without any prior knowledge
about the robot used (e.g., dynamics model), our approach is capable of
learning locomotion primitives within 250 trials and subsequently using them to
successfully navigate through a maze.
| 1 | 0 | 0 | 1 | 0 | 0 |
Statistical comparison of (brain) networks | The study of random networks in a neuroscientific context has developed
extensively over the last couple of decades. By contrast, techniques for the
statistical analysis of these networks are less developed. In this paper, we
focus on the statistical comparison of brain networks in a nonparametric
framework and discuss the associated detection and identification problems. We
tested network differences between groups with an analysis of variance (ANOVA)
test we developed specifically for networks. We also propose and analyse the
behaviour of a new statistical procedure designed to identify different
subnetworks. As an example, we show the application of this tool in
resting-state fMRI data obtained from the Human Connectome Project. Finally, we
discuss the potential bias in neuroimaging findings that is generated by some
behavioural and brain structure variables. Our method can also be applied to
other kinds of networks, such as protein interaction networks, gene networks, or
social networks.
| 0 | 1 | 0 | 1 | 0 | 0 |
Sensitivity analysis using perturbed-law based indices for quantiles and application to an industrial case | In this paper, we present perturbed law-based sensitivity indices and how to
adapt them for quantile-oriented sensitivity analysis. We exhibit a simple way
to compute these indices in practice using an importance sampling estimator for
quantiles. Some useful asymptotic results about this estimator are also
provided. Finally, we apply this method to the study of a numerical model which
simulates the behaviour of a component in a hydraulic system in case of severe
transient solicitations. The sensitivity analysis is used to assess the impact
of epistemic uncertainties about some physical parameters on the output of the
model.
| 0 | 0 | 1 | 1 | 0 | 0 |
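The abstract above rests on an importance-sampling estimator for quantiles under a perturbed input law. A generic sketch of that construction (not the authors' exact estimator; the Gaussian mean-shift perturbation and all names are assumptions for this example):

```python
import math
import random

def norm_pdf(x, mu=0.0, sd=1.0):
    """Density of N(mu, sd^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def is_quantile(model, alpha, n, mu_pert, seed=0):
    """Estimate the alpha-quantile of model(X) under a mean-shifted input law
    N(mu_pert, 1), reusing samples drawn from the nominal N(0, 1) law via
    importance-sampling weights (likelihood ratios)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [model(x) for x in xs]
    ws = [norm_pdf(x, mu_pert) / norm_pdf(x) for x in xs]
    order = sorted(range(n), key=lambda i: ys[i])
    wsum = sum(ws)
    acc = 0.0
    for i in order:  # weighted empirical quantile: walk the sorted outputs
        acc += ws[i] / wsum
        if acc >= alpha:
            return ys[i]
    return ys[order[-1]]
```

The key property exploited by perturbed-law indices is that no new model runs are needed: the same sample of outputs is reweighted for each candidate perturbation of the input distribution.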
The Robustness of LWPP and WPP, with an Application to Graph Reconstruction | We show that the counting class LWPP [FFK94] remains unchanged even if one
allows a polynomial number of gap values rather than one. On the other hand, we
show that it is impossible to improve this from polynomially many gap values to
a superpolynomial number of gap values by relativizable proof techniques.
The first of these results implies that the Legitimate Deck Problem (from the
study of graph reconstruction) is in LWPP (and thus low for PP, i.e., $\rm
PP^{\mbox{Legitimate Deck}} = PP$) if the weakened version of the
Reconstruction Conjecture holds in which the number of nonisomorphic preimages
is assumed merely to be polynomially bounded. This strengthens the 1992 result
of Köbler, Schöning, and Torán [KST92] that the Legitimate Deck
Problem is in LWPP if the Reconstruction Conjecture holds, and provides
strengthened evidence that the Legitimate Deck Problem is not NP-hard.
We additionally show on the one hand that our main LWPP robustness result
also holds for WPP, and also holds even when one allows both the rejection- and
acceptance- gap-value targets to simultaneously be polynomial-sized lists; yet
on the other hand, we show that the #P-based analog of LWPP behaves quite
differently in that, in some relativized worlds, even two target values
already yield a richer class than one value does. Despite that nonrobustness
result for a #P-based class, we show that the #P-based "exact counting" class
$\rm C_{=}P$ remains unchanged even if one allows a polynomial number of target
values for the number of accepting paths of the machine.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time- and spatially-resolved magnetization dynamics driven by spin-orbit torques | Current-induced spin-orbit torques (SOTs) represent one of the most effective
ways to manipulate the magnetization in spintronic devices. The orthogonal
torque-magnetization geometry, the strong damping, and the large domain wall
velocities inherent to materials with strong spin-orbit coupling make SOTs
especially appealing for fast switching applications in nonvolatile memory and
logic units. So far, however, the timescale and evolution of the magnetization
during the switching process have remained undetected. Here, we report the
direct observation of SOT-driven magnetization dynamics in Pt/Co/AlO$_x$ dots
during current pulse injection. Time-resolved x-ray images with 25 nm spatial
and 100 ps temporal resolution reveal that switching is achieved within the
duration of a sub-ns current pulse by the fast nucleation of an inverted domain
at the edge of the dot and propagation of a tilted domain wall across the dot.
The nucleation point is deterministic and alternates between the four dot
quadrants depending on the sign of the magnetization, current, and external
field. Our measurements reveal how the magnetic symmetry is broken by the
concerted action of both damping-like and field-like SOT and show that
reproducible switching events can be obtained for over $10^{12}$ reversal
cycles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Giant Planets Can Act As Stabilizing Agents on Debris Disks | We have explored the evolution of a cold debris disk under the gravitational
influence of dwarf planet sized objects (DPs), both in the presence and absence
of an interior giant planet. Through detailed long-term numerical simulations,
we demonstrate that, when the giant planet is not present, DPs can stir the
eccentricities and inclinations of disk particles, in linear proportion to the
total mass of the DPs; on the other hand, when the giant planet is included in
the simulations, the stirring is approximately proportional to the mass
squared. This creates two regimes: below a disk mass threshold (defined by the
total mass of DPs), the giant planet acts as a stabilizing agent of the orbits
of cometary nuclei, diminishing the effect of the scatterers; above the
threshold, the giant contributes to the dispersion of the particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Definition of geometric space around analytic fractal trees using derivative coordinate functions | The concept of derivative coordinate functions proved useful in the
formulation of analytic fractal functions to represent smooth symmetric binary
fractal trees [1]. In this paper we introduce a new geometry that defines the
fractal space around these fractal trees. We present the canonical and
degenerate forms of this fractal space and extend the fractal geometric space
to $R^3$ specifically, and to $R^n$ by a recurrence relation. We also discuss
uses of such a fractal geometry.
| 1 | 0 | 0 | 0 | 0 | 0 |
To Pool or Not To Pool? Revisiting an Old Pattern | We revisit the well-known object-pool design pattern in Java. In the last
decade, the pattern has attracted a lot of criticism regarding its validity
when used for light-weight objects that are only meant to hold memory rather
than any other resources (database connections, sockets etc.) and in fact,
common opinion holds that it is an anti-pattern in such cases. Nevertheless, we
show through several experiments in different systems that the use of this
pattern for extremely short-lived and light-weight memory objects can in fact
significantly reduce the response time of high-performance multi-threaded
applications, especially in memory-constrained environments. In certain
multi-threaded applications where high performance is a requirement and/or
memory constraints exist, we recommend therefore that the object pool pattern
be given consideration and tested for possible run-time as well as memory
footprint improvements.
| 1 | 0 | 0 | 0 | 0 | 0 |
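The pattern the abstract above revisits can be sketched in a few lines. This is a generic thread-safe pool in Python rather than the Java code benchmarked in the paper, and the class and parameter names are illustrative:

```python
import threading
from collections import deque

class ObjectPool:
    """Minimal thread-safe pool of reusable objects: acquire() reuses a freed
    object when one is available, release() returns it up to a bounded size."""

    def __init__(self, factory, max_size=64):
        self._factory = factory      # zero-arg callable creating a fresh object
        self._max_size = max_size    # cap on retained idle objects
        self._free = deque()
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            if self._free:
                return self._free.popleft()
        return self._factory()  # pool empty: allocate a fresh object

    def release(self, obj):
        with self._lock:
            if len(self._free) < self._max_size:
                self._free.append(obj)  # else drop obj for GC to reclaim
```

The bounded `max_size` reflects the paper's memory-constrained setting: the pool trades a small fixed footprint of idle objects for fewer allocations on the hot path.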
Quenching of supermassive black hole growth around the apparent maximum mass | Recent quasar surveys have revealed that supermassive black holes (SMBHs)
rarely exceed a mass of $M_{\rm BH} \sim {\rm a~few}\times10^{10}~M_{\odot}$
during the entire cosmic history. It has been argued that quenching of the BH
growth is caused by a transition of a nuclear accretion disk into an advection
dominated accretion flow, with which strong outflows and/or jets are likely to
be associated. We investigate a relation between the maximum mass of SMBHs and
the radio-loudness of quasars with a well-defined sample of $\sim 10^5$ quasars
at a redshift range of $0<z<2$, obtained from the Sloan Digital Sky Surveys DR7
catalog. We find that the number fraction of the radio-loud (RL) quasars
increases above a threshold of $M_{\rm BH} \simeq 10^{9.5}~M_{\odot}$,
independent of their redshifts. Moreover, the number fraction of RL quasars
with lower Eddington ratios (out of the whole RL quasars), indicating lower
accretion rates, increases above the critical BH mass. These observational
trends can be natural consequences of the proposed scenario of suppressing BH
growth around the apparent maximum mass of $\sim 10^{10}~M_{\odot}$. The
ongoing VLA Sky Survey in radio will allow us to estimate the number fraction
of RL quasars more precisely, giving further insight into the quenching
processes of BH growth.
| 0 | 1 | 0 | 0 | 0 | 0 |