| title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
|---|---|---|---|---|---|---|---|
Predicting signatures of anisotropic resonance energy transfer in dye-functionalized nanoparticles | Resonance energy transfer (RET) is an inherently anisotropic process. Even
the simplest, well-known Förster theory, based on the transition
dipole-dipole coupling, implicitly incorporates the anisotropic character of
RET. In this theoretical work, we study possible signatures of the fundamental
anisotropic character of RET in hybrid nanomaterials composed of a
semiconductor nanoparticle (NP) decorated with molecular dyes. In particular,
by means of a realistic kinetic model, we show that the analysis of the dye
photoluminescence difference for orthogonal input polarizations reveals the
anisotropic character of the dye-NP RET which arises from the intrinsic
anisotropy of the NP lattice. In a prototypical core/shell wurtzite CdSe/ZnS NP
functionalized with cyanine dyes (Cy3B), this difference is predicted to be as
large as 75\%, and its amplitude and sign depend strongly on the dye-NP
distance. We account for all the possible RET processes within the system,
together with competing decay pathways in the separate segments. In addition,
we show that the anisotropic signature of RET is persistent up to a large
number of dyes per NP.
| 0 | 1 | 0 | 0 | 0 | 0 |
Matrix Completion and Performance Guarantees for Single Individual Haplotyping | Single individual haplotyping is an NP-hard problem that emerges when
attempting to reconstruct an organism's inherited genetic variations using data
typically generated by high-throughput DNA sequencing platforms. Genomes of
diploid organisms, including humans, are organized into homologous pairs of
chromosomes that differ from each other in a relatively small number of variant
positions. Haplotypes are ordered sequences of the nucleotides in the variant
positions of the chromosomes in a homologous pair; for diploids, haplotypes
associated with a pair of chromosomes may be conveniently represented by means
of complementary binary sequences. In this paper, we consider a binary matrix
factorization formulation of the single individual haplotyping problem and
efficiently solve it by means of alternating minimization. We analyze the
convergence properties of the alternating minimization algorithm and establish
theoretical bounds for the achievable haplotype reconstruction error. The
proposed technique is shown to outperform existing methods when applied to
synthetic as well as real-world Fosmid-based HapMap NA12878 datasets.
| 0 | 0 | 0 | 1 | 1 | 0 |
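The factorization idea can be illustrated with a short sketch: a rank-one binary matrix factorization $R \approx s h^T$ solved by alternating minimization. The read-matrix encoding ({+1, -1}, with 0 for missing entries), the initialization, and the toy data are illustrative assumptions, not the paper's exact algorithm or error model.

```python
import numpy as np

def haplotype_altmin(R, n_iters=50, seed=0):
    """Rank-one binary factorization R ~ s h^T by alternating minimization.
    R holds reads: entries in {+1, -1}, with 0 for unobserved positions.
    s assigns each read to one chromosome of the pair; h is the haplotype.
    This is a simplified sketch, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    h = rng.choice([-1.0, 1.0], size=R.shape[1])   # random haplotype init
    for _ in range(n_iters):
        s = np.sign(R @ h)                         # best assignment given h
        s[s == 0] = 1.0
        h_new = np.sign(R.T @ s)                   # best haplotype given s
        h_new[h_new == 0] = 1.0
        if np.array_equal(h_new, h):               # fixed point reached
            return h, s
        h = h_new
    return h, s

# toy instance: 8 reads drawn from two complementary haplotypes, 30% missing
h_true = np.array([1, -1, -1, 1, 1, -1], dtype=float)
reads = np.vstack([h_true] * 4 + [-h_true] * 4)
reads[np.random.default_rng(1).random(reads.shape) < 0.3] = 0.0
h_est, _ = haplotype_altmin(reads)
print(h_est, h_true)   # h_est should match h_true up to a global sign flip
```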
Simulation and analysis of $γ$-Ni cellular growth during laser powder deposition of Ni-based superalloys | Cellular or dendritic microstructures that result as a function of additive
manufacturing solidification conditions in a Ni-based melt pool are simulated
in the present work using three-dimensional phase-field simulations. A
macroscopic thermal model is used to obtain the temperature gradient $G$ and
the solidification velocity $V$ which are provided as inputs to the phase-field
model. We extract the cell spacings, cell core compositions, and cell tip as
well as mushy zone temperatures from the simulated microstructures as a
function of $V$. Cell spacings are compared with different scaling laws that
relate them to the solidification conditions, approximated by $G^{-m}V^{-n}$.
Cell core compositions are compared with the analytical solutions of a dendrite
growth theory and found to be in good agreement. Through analysis of the mushy
zone, we extract a characteristic bridging plane, where the primary $\gamma$
phase coalesces across the intercellular liquid channels at a $\gamma$ fraction
between 0.6 and 0.7. The temperature and the $\gamma$ fraction in this plane
are found to decrease with increasing $V$. The simulated microstructural
features are significant as they can be used as inputs for the simulation of
subsequent heat treatment processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Localization of Extended Quantum Objects | A quantum system of particles can exist in a localized phase, exhibiting
ergodicity breaking and maintaining forever a local memory of its initial
conditions. We generalize this concept to a system of extended objects, such as
strings and membranes, arguing that such a system can also exhibit localization
in the presence of sufficiently strong disorder (randomness) in the
Hamiltonian. We show that localization of large extended objects can be mapped
to a lower-dimensional many-body localization problem. For example, motion of a
string involves propagation of point-like signals down its length to keep the
different segments in causal contact. For sufficiently strong disorder, all
such internal modes will exhibit many-body localization, resulting in the
localization of the entire string. The eigenstates of the system can then be
constructed perturbatively through a convergent 'string locator expansion.' We
propose a type of out-of-time-order string correlator as a diagnostic of such a
string localized phase. Localization of other higher-dimensional objects, such
as membranes, can also be studied through a hierarchical construction by
mapping onto localization of lower-dimensional objects. Our arguments are
'asymptotic' (i.e., valid up to rare regions) but they extend the notion of
localization (and localization protected order) to a host of settings where
such ideas previously did not apply. These include high-dimensional
ferromagnets with domain wall excitations, three-dimensional topological phases
with loop-like excitations, and three-dimensional type-II superconductors with
flux line excitations. In type-II superconductors, localization of flux lines
could stabilize superconductivity at energy densities where a normal state
would arise in thermal equilibrium.
| 0 | 1 | 0 | 0 | 0 | 0 |
The next-to-minimal weights of binary projective Reed-Muller codes | Projective Reed-Muller codes were introduced by Lachaud in 1988, and their
dimension and minimum distance were determined by Serre and S{\o}rensen in
1991. In coding theory one is also interested in the higher Hamming weights,
which are used to study the code performance. Yet, not many values of the
higher Hamming weights are known for these codes; not even the second lowest
weight (also known as the next-to-minimal weight) is completely determined.
In this paper we determine
all the values of the next-to-minimal weight for the binary projective
Reed-Muller codes, which we show to be equal to the next-to-minimal weight of
Reed-Muller codes in most, but not all, cases.
| 1 | 0 | 1 | 0 | 0 | 0 |
Big Data Classification Using Augmented Decision Trees | We present an algorithm for classification tasks on big data. Experiments
conducted as part of this study indicate that the algorithm can be as accurate
as ensemble methods such as random forests or gradient boosted trees. Unlike
ensemble methods, the models produced by the algorithm can be easily
interpreted. The algorithm is based on a divide-and-conquer strategy and
consists of two steps. In the first step, a decision tree segments the large
dataset. By construction, decision trees attempt to create homogeneous class
distributions in their leaf nodes; in practice, however, non-homogeneous leaf
nodes are usually produced. In the second step, a suitable classifier
determines the class labels for the non-homogeneous leaf nodes. The decision
tree provides a coarse segment profile, while the leaf-level classifier can
provide information about the attributes that affect the label within a
segment.
| 1 | 0 | 0 | 1 | 0 | 0 |
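A minimal sketch of the two-step strategy, assuming scikit-learn components: a shallow decision tree segments the data, and each insufficiently homogeneous leaf receives its own classifier. The depth, purity threshold, and the choice of logistic regression as the leaf-level classifier are illustrative stand-ins, not the paper's settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

class AugmentedTree:
    """Step 1: a shallow tree segments the data. Step 2: each leaf whose
    class distribution is not homogeneous enough gets its own classifier
    (logistic regression here, as an illustrative stand-in)."""

    def __init__(self, max_depth=3, purity_threshold=0.95):
        self.tree = DecisionTreeClassifier(max_depth=max_depth)
        self.purity_threshold = purity_threshold
        self.leaf_models = {}

    def fit(self, X, y):                       # y: integer-coded labels
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)            # leaf id of each sample
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            purity = np.bincount(y[idx]).max() / idx.sum()
            if purity < self.purity_threshold and np.unique(y[idx]).size > 1:
                self.leaf_models[leaf] = LogisticRegression(
                    max_iter=1000).fit(X[idx], y[idx])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        pred = self.tree.predict(X)            # default: leaf majority class
        for leaf, model in self.leaf_models.items():
            idx = leaves == leaf
            if idx.any():
                pred[idx] = model.predict(X[idx])
        return pred

# usage: AugmentedTree().fit(X_train, y_train).predict(X_test)
```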
Special tilting modules for algebras with positive dominant dimension | We study a set of uniquely determined tilting and cotilting modules for an
algebra with positive dominant dimension, with the property that they are
generated or cogenerated (and usually both) by projective-injectives. These
modules have various interesting properties, for example that their
endomorphism algebras always have global dimension at most that of the original
algebra. We characterise d-Auslander-Gorenstein algebras and d-Auslander
algebras via the property that the relevant tilting and cotilting modules
coincide. By the Morita-Tachikawa correspondence, any algebra of dominant
dimension at least 2 may be expressed (essentially uniquely) as the
endomorphism algebra of a generator-cogenerator for another algebra, and we
also study our special tilting and cotilting modules from this point of view,
via the theory of recollements and intermediate extension functors.
| 0 | 0 | 1 | 0 | 0 | 0 |
The dynamical structure of political corruption networks | Corruptive behaviour in politics limits economic growth, embezzles public
funds, and promotes socio-economic inequality in modern democracies. We analyse
well-documented political corruption scandals in Brazil over the past 27 years,
focusing on the dynamical structure of networks where two individuals are
connected if they were involved in the same scandal. Our research reveals that
corruption runs in small groups that rarely comprise more than eight people, in
networks that have hubs and a modular structure that encompasses more than one
corruption scandal. We observe abrupt changes in the size of the largest
connected component and in the degree distribution, which are due to the
coalescence of different modules when new scandals come to light or when
governments change. We show further that the dynamical structure of political
corruption networks can be used for successfully predicting partners in future
scandals. We discuss the important role of network science in detecting and
mitigating political corruption.
| 1 | 0 | 0 | 1 | 0 | 0 |
Maxent-Stress Optimization of 3D Biomolecular Models | Knowing a biomolecule's structure is inherently linked to and a prerequisite
for any detailed understanding of its function. Significant effort has gone
into developing technologies for structural characterization. These
technologies do not directly provide 3D structures; instead they typically
yield noisy and erroneous distance information between specific entities such
as atoms or residues, which have to be translated into consistent 3D models.
Here we present an approach for this translation process based on
maxent-stress optimization. Our new approach extends the original graph drawing
method for the new application's specifics by introducing additional
constraints and confidence values as well as algorithmic components. Extensive
experiments demonstrate that our approach infers structural models (i.e.,
sensible 3D coordinates for the molecule's atoms) that correspond well to the
distance information, can handle noisy and error-prone data, and is
considerably faster than established tools. Our results promise to enable
nearly-interactive structural modeling based on distance constraints for
domain scientists.
| 1 | 1 | 0 | 0 | 0 | 0 |
Enhanced Quantum Synchronization via Quantum Machine Learning | We study the quantum synchronization between a pair of two-level systems
inside two coupled cavities. By using a digital-analog decomposition of the
master equation that rules the system dynamics, we show that this approach
leads to quantum synchronization between both two-level systems. Moreover, we
can identify in this digital-analog block decomposition the fundamental
elements of a quantum machine learning protocol, in which the agent and the
environment (learning units) interact through a mediating system, namely, the
register. If we can additionally equip this algorithm with a classical feedback
mechanism, which consists of projective measurements in the register,
reinitialization of the register state and local conditional operations on the
agent and environment subspace, a powerful and flexible quantum machine
learning protocol emerges. Indeed, numerical simulations show that this
protocol enhances the synchronization process, even when every subsystem
experiences different loss/decoherence mechanisms, and gives us the flexibility
to choose the synchronization state. Finally, we propose an implementation
based on current technologies in superconducting circuits.
| 1 | 0 | 0 | 1 | 0 | 0 |
Engineering phonon leakage in nanomechanical resonators | We propose and experimentally demonstrate a technique for coupling phonons
out of an optomechanical crystal cavity. By designing a perturbation that
breaks a symmetry in the elastic structure, we selectively induce phonon
leakage without affecting the optical properties. It is shown experimentally
via cryogenic measurements that the proposed cavity perturbation causes loss of
phonons into mechanical waves on the surface of silicon, while leaving photon
lifetimes unaffected. This demonstrates that phonon leakage can be engineered
in on-chip optomechanical systems. We experimentally observe large fluctuations
in leakage rates that we attribute to fabrication disorder and verify this
using simulations. Our technique opens the way to engineering more complex
on-chip phonon networks utilizing guided mechanical waves to connect quantum
systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Entropy generation and momentum transfer in the superconductor-normal and normal-superconductor phase transformations and the consistency of the conventional theory of superconductivity | Since the discovery of the Meissner effect, the superconductor to normal (S-N)
phase transition in the presence of a magnetic field has been understood to be
first order phase transformation that is reversible under ideal conditions and
obeys the laws of thermodynamics. The reverse (N-S) transition is the Meissner
effect. This implies in particular that the kinetic energy of the supercurrent
is not dissipated as Joule heat in the process where the superconductor becomes
normal and the supercurrent stops. In this paper we analyze the entropy
generation and the momentum transfer between the supercurrent and the body in
the S-N transition and the N-S transition as described by the conventional
theory of superconductivity. We find that it is impossible to explain the
transition in a way that is consistent with the laws of thermodynamics unless
the momentum transfer between the supercurrent and the body occurs with zero
entropy generation, for which the conventional theory of superconductivity
provides no mechanism. Instead, we point out that the alternative theory of
hole superconductivity does not encounter such difficulties.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Local Faber-Krahn inequality and Applications to Schrödinger's Equation | We prove a local Faber-Krahn inequality for solutions $u$ to the Dirichlet
problem for $\Delta + V$ on an arbitrary domain $\Omega$ in $\mathbb{R}^n$.
Suppose a solution $u$ assumes a global maximum at some point $x_0 \in \Omega$
and $u(x_0)>0$. Let $T(x_0)$ be the smallest time at which a Brownian motion,
started at $x_0$, has exited the domain $\Omega$ with probability $\ge 1/2$.
For nice (e.g., convex) domains, $T(x_0) \asymp d(x_0,\partial\Omega)^2$ but we
make no assumption on the geometry of the domain. Our main result is that there
exists a ball $B$ of radius $\asymp T(x_0)^{1/2}$ such that $$ \| V
\|_{L^{\frac{n}{2}, 1}(\Omega \cap B)} \ge c_n > 0, $$ provided that $n \ge 3$.
In the case $n = 2$, the above estimate fails and we obtain a substitute
result. The Laplacian may be replaced by a uniformly elliptic operator in
divergence form. This result both unifies and strengthens a series of earlier
results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Étale groupoids and their $C^*$-algebras | These notes were written as supplementary material for a five-hour lecture
series presented at the Centre de Recerca Matemàtica at the Universitat
Autònoma de Barcelona from the 13th to the 17th of March 2017. The intention
of these notes is to give a brief overview of some key topics in the area of
$C^*$-algebras associated to étale groupoids. The scope has been deliberately
contained to the case of étale groupoids with the intention that much of the
representation-theoretic technology and measure-theoretic analysis required to
handle general groupoids can be suppressed in this simpler setting.
A published version of these notes will appear in the volume tentatively
titled "Operator algebras and dynamics: groupoids, crossed products and Rokhlin
dimension" by Gabor Szabo, Dana P. Williams and myself, and edited by Francesc
Perera, in the series "Advanced Courses in Mathematics. CRM Barcelona." The
pagination of this arXiv version is not identical to Birkhäuser's style, but
I have tried to make it close. The theorem numbering should be correct. I'm
grateful to the CRM and Birkhäuser for allowing me to post a version on
arXiv.
| 0 | 0 | 1 | 0 | 0 | 0 |
Semi-classical limit of the Levy-Lieb functional in Density Functional Theory | In a recent work, Bindini and De Pascale have introduced a regularization of
$N$-particle symmetric probabilities which preserves their one-particle
marginals. In this short note, we extend their construction to mixed quantum
fermionic states. This enables us to prove the convergence of the Levy-Lieb
functional in Density Functional Theory to the corresponding multi-marginal
optimal transport in the semi-classical limit. Our result holds for mixed
states of any particle number $N$, with or without spin.
| 0 | 1 | 1 | 0 | 0 | 0 |
A characterization of cellular motivic spectra | Let $ \alpha: \mathcal{C} \to \mathcal{D}$ be a symmetric monoidal functor
from a stable presentable symmetric monoidal $\infty$-category $\mathcal{C} $
compactly generated by the tensor unit to a stable presentable symmetric
monoidal $\infty$-category $ \mathcal{D} $ with compact tensor unit. Let $\beta:
\mathcal{D} \to \mathcal{C}$ be a right adjoint of $\alpha$ and $ \mathrm{X}:
\mathcal{B} \to \mathcal{D} $ a symmetric monoidal functor starting at a small
rigid symmetric monoidal $\infty$-category $ \mathcal{B}$. We construct a
symmetric monoidal equivalence between modules in the $\infty$-category of
functors $ \mathcal{B} \to \mathcal{C} $ over the $ \mathrm{E}_\infty$-algebra
$\beta \circ \mathrm{X} $ and the full subcategory of $\mathcal{D}$ compactly
generated by the essential image of $\mathrm{X}$. In particular, for every motivic
$ \mathrm{E}_\infty$-ring spectrum $\mathrm{A}$ we obtain a symmetric monoidal
equivalence between the $\infty$-category of cellular motivic
$\mathrm{A}$-module spectra and modules in the $\infty$-category of functors
from $\mathrm{QS}$ to spectra over some $\mathrm{E}_\infty$-algebra, where
$\mathrm{QS}$ denotes the 0th space of the sphere spectrum.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Impedance of Flat Metallic Plates with Small Corrugations | We summarize recent work on the wakefields and impedances of flat, metallic
plates with small corrugations.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Computer Science and Physics of Community Detection: Landscapes, Phase Transitions, and Hardness | Community detection in graphs is the problem of finding groups of vertices
which are more densely connected than they are to the rest of the graph. This
problem has a long history, but it is undergoing a resurgence of interest due
to the need to analyze social and biological networks. While there are many
ways to formalize it, one of the most popular is as an inference problem, where
there is a "ground truth" community structure built into the graph somehow. The
task is then to recover the ground truth knowing only the graph.
Recently it was discovered, first heuristically in physics and then
rigorously in probability and computer science, that this problem has a phase
transition at which it suddenly becomes impossible. Namely, if the graph is too
sparse, or the probabilistic process that generates it is too noisy, then no
algorithm can find a partition that is correlated with the planted one---or
even tell if there are communities, i.e., distinguish the graph from a purely
random one with high probability. Above this information-theoretic threshold,
there is a second threshold beyond which polynomial-time algorithms are known
to succeed; in between, there is a regime in which community detection is
possible, but conjectured to require exponential time.
For computer scientists, this field offers a wealth of new ideas and open
questions, with connections to probability and combinatorics, message-passing
algorithms, and random matrix theory. Perhaps more importantly, it provides a
window into the cultures of statistical physics and statistical inference, and
how those cultures think about distributions of instances, landscapes of
solutions, and hardness.
| 1 | 1 | 1 | 0 | 0 | 0 |
Characterizing the spread of exaggerated news content over social media | In this paper, we consider a dataset comprising press releases about health
research from different universities in the UK along with a corresponding set
of news articles. First, we do an exploratory analysis to understand how the
basic information published in scientific journals gets exaggerated as it is
reported in these press releases or news articles. This initial analysis
shows that some news agencies exaggerate almost 60\% of the articles they
publish in the health domain; more than 50\% of the press releases from certain
universities are exaggerated; articles in topics like lifestyle and childhood
are heavily exaggerated. Motivated by the above observations, we set the central
objective of this paper to investigate how exaggerated news spreads over an
online social network like Twitter. The LIWC analysis points to a remarkable
observation: the late tweets are essentially laden with words from the opinion
and realize categories, which indicates that, given sufficient time, the wisdom
of the crowd is actually able to tell apart the exaggerated news. As a second step
we study the characteristics of the users who never or rarely post exaggerated
news content and compare them with those who post exaggerated news content more
frequently. We observe that the latter class of users have less retweets or
mentions per tweet, have significantly more number of followers, use more slang
words, less hyperbolic words and less word contractions. We also observe that
the LIWC categories like bio, health, body and negative emotion are more
pronounced in the tweets posted by the users in the latter class. As a final
step we use these observations as features and automatically classify the two
groups achieving an F1 score of 0.83.
| 1 | 0 | 0 | 0 | 0 | 0 |
Aerodynamic noise from rigid trailing edges with finite porous extensions | This paper investigates the effects of finite flat porous extensions to
semi-infinite impermeable flat plates in an attempt to control trailing-edge
noise through bio-inspired adaptations. Specifically the problem of sound
generated by a gust convecting in uniform mean steady flow scattering off the
trailing edge and permeable-impermeable junction is considered. This setup
supposes that any realistic trailing-edge adaptation to a blade would be
sufficiently small so that the turbulent boundary layer encapsulates both the
porous edge and the permeable-impermeable junction, and therefore the
interaction of acoustics generated at these two discontinuous boundaries is
important. The acoustic problem is tackled analytically through use of the
Wiener-Hopf method. A two-dimensional matrix Wiener-Hopf problem arises due to
the two interaction points (the trailing edge and the permeable-impermeable
junction). This paper discusses a new iterative method for solving this matrix
Wiener-Hopf equation which extends to further two-dimensional problems in
particular those involving analytic terms that exponentially grow in the upper
or lower half planes. This method is an extension of the commonly used "pole
removal" technique and avoids the needs for full matrix factorisation.
Convergence of this iterative method to an exact solution is shown to be
particularly fast when terms neglected in the second step are formally smaller
than all other terms retained. The final acoustic solution highlights the
effects of the permeable-impermeable junction on the generated noise, in
particular how this junction affects the far-field noise generated by
high-frequency gusts by creating an interference to typical trailing-edge
scattering. This effect results in partially porous plates predicting a lower
noise reduction than fully porous plates when compared to fully impermeable
plates.
| 0 | 1 | 1 | 0 | 0 | 0 |
99% of Parallel Optimization is Inevitably a Waste of Time | It is well known that many optimization methods, including SGD, SAGA, and
Accelerated SGD for over-parameterized models, do not scale linearly in the
parallel setting. In this paper, we present a new version of block coordinate
descent that solves this issue for a number of methods. The core idea is to
make the sampling of coordinate blocks on each parallel unit independent of the
others. Surprisingly, we prove that the optimal number of blocks to be updated
by each of $n$ units in every iteration is equal to $m/n$, where $m$ is the
total number of blocks. As an illustration, this means that when $n=100$
parallel units are used, $99\%$ of work is a waste of time. We demonstrate that
with $m/n$ blocks used by each unit the iteration complexity often remains the
same. Among other applications which we mention, this fact can be exploited in
the setting of distributed optimization to break the communication bottleneck.
Our claims are justified by numerical experiments which demonstrate almost a
perfect match with our theory on a number of datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
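A toy, serial simulation of the independent sampling idea on a least-squares objective: each of the $n$ units draws its own $m/n$ coordinate blocks independently of the others, and the units' updates are accumulated. The problem instance and step size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Least-squares toy problem; each "block" is one coordinate for simplicity.
rng = np.random.default_rng(0)
m_blocks, n_units = 100, 10                 # paper's rule: m/n blocks per unit
A = rng.standard_normal((500, m_blocks))
b = rng.standard_normal(500)
x = np.zeros(m_blocks)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / L for f(x) = 0.5||Ax - b||^2

for it in range(2000):
    grad = A.T @ (A @ x - b)
    update = np.zeros_like(x)
    for _ in range(n_units):                # serial stand-in for parallel units
        # each unit samples its m/n blocks independently of the others;
        # collisions across units are allowed and their updates accumulate
        blocks = rng.choice(m_blocks, size=m_blocks // n_units, replace=False)
        update[blocks] -= step * grad[blocks]
    x += update

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # objective after optimization
```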
Tuning the effective spin-orbit coupling in molecular semiconductors | The control of spins and spin to charge conversion in organics requires
understanding the molecular spin-orbit coupling (SOC), and a means to tune its
strength. However, quantifying SOC strengths indirectly through spin relaxation
effects has proven difficult due to competing relaxation mechanisms. Here we
present a systematic study of the g-tensor shift in molecular semiconductors
and link it directly to the SOC strength in a series of high mobility molecular
semiconductors with strong potential for future devices. The results
demonstrate a rich variability of the molecular g-shifts with the effective
SOC, depending on subtle aspects of molecular composition and structure. We
correlate the above g-shifts to spin-lattice relaxation times over four orders
of magnitude, from 200 {\mu}s to 0.15 {\mu}s, for isolated molecules in
solution and relate our findings for isolated molecules in solution to the spin
relaxation mechanisms that are likely to be relevant in solid state systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
How Deep Are Deep Gaussian Processes? | Recent research has shown the potential utility of Deep Gaussian Processes.
These deep structures are probability distributions, designed through
hierarchical construction, which are conditionally Gaussian. In this paper, the
current published body of work is placed in a common framework and, through
recursion, several classes of deep Gaussian processes are defined. The
resulting samples generated from a deep Gaussian process have a Markovian
structure with respect to the depth parameter, and the effective depth of the
resulting process is interpreted in terms of the ergodicity, or non-ergodicity,
of the resulting Markov chain. For the classes of deep Gaussian processes
introduced, we provide results concerning their ergodicity and hence their
effective depth. We also demonstrate how these processes may be used for
inference; in particular we show how a Metropolis-within-Gibbs construction
across the levels of the hierarchy can be used to derive sampling tools which
are robust to the level of resolution used to represent the functions on a
computer. For illustration, we consider the effect of ergodicity in some simple
numerical examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
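A minimal 1-D sketch of the recursive construction: a draw from a depth-$d$ deep Gaussian process is obtained by evaluating a fresh GP prior at the output of the previous layer, $d$ times. The RBF kernel, lengthscale, and input grid are illustrative assumptions; deeper compositions often produce near-flat or step-like sample paths, the kind of behaviour the ergodicity analysis addresses.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def deep_gp_sample(x, depth, rng, jitter=1e-6):
    """One sample path of a depth-`depth` deep GP built by recursion:
    layer l is a GP prior evaluated at the output of layer l-1."""
    h = x
    for _ in range(depth):
        K = rbf(h, h) + jitter * np.eye(len(h))
        h = rng.multivariate_normal(np.zeros(len(h)), K)
    return h

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
for d in (1, 2, 5):                       # inspect a few draws per depth
    sample = deep_gp_sample(x, depth=d, rng=rng)
    print(d, sample.min(), sample.max())
```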
Sparse Randomized Kaczmarz for Support Recovery of Jointly Sparse Corrupted Multiple Measurement Vectors | While single measurement vector (SMV) models have been widely studied in
signal processing, there is a surging interest in addressing the multiple
measurement vectors (MMV) problem. In the MMV setting, more than one
measurement vector is available and the multiple signals to be recovered share
some commonalities such as a common support. Applications in which MMV is a
naturally occurring phenomenon include online streaming, medical imaging, and
video recovery. This work presents a stochastic iterative algorithm for the
support recovery of jointly sparse corrupted MMV. We present a variant of the
Sparse Randomized Kaczmarz algorithm for corrupted MMV and compare our proposed
method with an existing Kaczmarz type algorithm for MMV problems. We also
showcase the usefulness of our approach in the online (streaming) setting and
provide empirical evidence that suggests the robustness of the proposed method
to the distribution of the corruption and the number of corruptions occurring.
| 1 | 0 | 0 | 0 | 0 | 0 |
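A hedged sketch of a sparse randomized Kaczmarz iteration for the MMV model $AX = B$ with a shared $k$-sparse row support. The off-support damping used below is an illustrative heuristic, not the paper's exact variant, and the corruption handling is omitted.

```python
import numpy as np

def srk_mmv(A, B, k, n_iters=5000, seed=0):
    """Sketch of a sparse randomized Kaczmarz iteration for A X = B, where
    the columns of X share a common k-sparse row support. The off-support
    damping below is an illustrative heuristic, not the paper's variant."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = np.zeros((n, B.shape[1]))
    probs = np.linalg.norm(A, axis=1) ** 2
    probs /= probs.sum()
    support = np.arange(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)                   # row sampled by its norm
        a = A[i]
        X += np.outer(a, B[i] - a @ X) / (a @ a)     # Kaczmarz step, all columns
        # joint support estimate: k rows with the largest joint energy
        support = np.argsort(np.linalg.norm(X, axis=1))[-k:]
        damp = np.full(n, 0.5)
        damp[support] = 1.0
        X *= damp[:, None]                           # shrink off-support rows
    return X, np.sort(support)

# toy MMV instance with a shared 3-sparse row support
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
X_true = np.zeros((40, 5))
X_true[[3, 17, 29], :] = rng.standard_normal((3, 5))
X_hat, support = srk_mmv(A, A @ X_true, k=3)
print(support)                                       # ideally [ 3 17 29]
```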
Exploiting ITO colloidal nanocrystals for ultrafast pulse generation | Dynamical materials that are capable of responding to optical stimuli have long
been pursued for designing novel photonic devices and functionalities, of which
the response speed and amplitude as well as integration adaptability and energy
effectiveness are especially critical. Here we show ultrafast pulse generation
by exploiting the ultrafast and sensitive nonlinear dynamical processes in
tunably solution-processed colloidal epsilon-near-zero (ENZ) transparent
conducting oxide (TCO) nanocrystals (NCs), of which the potential response
speed is >2 THz and the modulation depth is ~23% when pumped at ~0.7 mJ/cm2,
benefiting from the highly confined geometry in addition to the ENZ enhancement
effect. These ENZ NCs may offer a scalable and printable material solution for
dynamic photonic and optoelectronic devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
A quantum dynamic belief model to explain the interference effects of categorization on decision making | Categorization is necessary for many decision making tasks. However, the
categorization process may interfere with the decision making result, and the
law of total probability can be violated in some situations. To predict the
interference effect of categorization, models based on quantum probability
have been proposed. In this paper, a new quantum dynamic belief (QDB) model is
proposed. Since a precise decision may not be made during the process, the
concept of uncertainty is introduced in our model to simulate the real human
thinking process. The interference effect of categorization can then be
predicted by handling the uncertain information. The proposed model is applied to a
categorization decision-making experiment to explain the interference effect of
categorization. Compared with other models, ours is more succinct, and the
results show its correctness and effectiveness.
| 1 | 0 | 0 | 0 | 0 | 0 |
Throughput Optimal Beam Alignment in Millimeter Wave Networks | Millimeter wave communications rely on narrow-beam transmissions to cope with
the strong signal attenuation at these frequencies, thus demanding precise beam
alignment between transmitter and receiver. The communication overhead incurred
to achieve beam alignment may become a severe impairment in mobile networks.
This paper addresses the problem of optimizing beam alignment acquisition, with
the goal of maximizing throughput. Specifically, the algorithm jointly
determines the portion of time devoted to beam alignment acquisition, as well
as, within this portion of time, the optimal beam search parameters, using the
framework of Markov decision processes. It is proved that a bisection search
algorithm is optimal, and that it outperforms exhaustive and iterative search
algorithms proposed in the literature. The duration of the beam alignment phase
is optimized so as to maximize the overall throughput. The numerical results
show that the throughput, optimized with respect to the duration of the beam
alignment phase, achievable under the exhaustive algorithm is 88.3% lower than
that achievable under the bisection algorithm. Similarly, the throughput
achievable by the iterative search algorithm for a division factor of 4 and 8
is, respectively, 12.8% and 36.4% lower than that achievable by the bisection
algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
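To illustrate the bisection idea (though not the Markov-decision formulation or the throughput optimization), here is a minimal sketch; `measure` is a hypothetical stand-in for received-power feedback over a probed angular sector.

```python
import numpy as np

def bisection_alignment(measure, lo=0.0, hi=2 * np.pi, n_rounds=10):
    """Each round splits the current angular sector in two, probes both
    halves, and keeps the half with the stronger measured signal."""
    for _ in range(n_rounds):
        mid = (lo + hi) / 2
        if measure(lo, mid) >= measure(mid, hi):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2            # refined beam direction estimate

# toy feedback: one dominant path at theta_star; power concentrates as the
# probing sector narrows (beamforming gain ~ 1 / sector width)
theta_star = 1.234
measure = lambda a, b: (a <= theta_star < b) / (b - a)
print(bisection_alignment(measure))  # approaches theta_star
```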
Behavioural Change Support Intelligent Transportation Applications | This workshop invites researchers and practitioners to participate in
exploring behavioral change support intelligent transportation applications. We
welcome submissions that explore intelligent transportation systems (ITS),
which interact with travelers in order to persuade them or nudge them towards
sustainable transportation behaviors and decisions. Emerging opportunities
including the use of data and information generated by ITS and users' mobile
devices in order to render personalized, contextualized and timely transport
behavioral change interventions are in our focus. We invite submissions and
ideas from domains of ITS including, but not limited to, multi-modal journey
planners, advanced traveler information systems and in-vehicle systems. The
expected outcome will be a deeper understanding of the challenges and future
research directions with respect to behavioral change support through ITS.
| 1 | 0 | 0 | 0 | 0 | 0 |
SU(2) Pfaffian systems and gauge theory | Motivated by the description of Nurowski's conformal structure for maximally
symmetric homogeneous examples of bracket-generating rank 2 distributions in
dimension 5, aka $(2,3,5)$-distributions, we consider a rank $3$ Pfaffian
system in dimension 5 with $SU(2)$ symmetry. We find the conditions for which
this Pfaffian system has the maximal symmetry group (in the real case this is
the split real form of $G_2$), and give the associated Nurowski's conformal
classes. We also present an $SU(2)$ gauge-theoretic interpretation of the
results obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
Correlation effects in superconducting quantum dot systems | We study the effect of electron correlations on a system consisting of a
single-level quantum dot with local Coulomb interaction attached to two
superconducting leads. We use the single-impurity Anderson model with BCS
superconducting baths to study the interplay between the proximity induced
electron pairing and the local Coulomb interaction. We show how to solve the
model using the continuous-time hybridization-expansion quantum Monte Carlo
method. The results obtained for experimentally relevant parameters are
compared with results of self-consistent second order perturbation theory as
well as with the numerical renormalization group method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning non-parametric Markov networks with mutual information | We propose a method for learning Markov network structures for continuous
data without invoking any assumptions about the distribution of the variables.
The method makes use of previous work on a non-parametric estimator for mutual
information which is used to create a non-parametric test for multivariate
conditional independence. This independence test is then combined with an
efficient constraint-based algorithm for learning the graph structure. The
performance of the method is evaluated on several synthetic data sets and it is
shown to learn considerably more accurate structures than competing methods
when the dependencies between the variables involve non-linearities.
| 1 | 0 | 0 | 1 | 0 | 0 |
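A minimal sketch of the test's ingredients, assuming scikit-learn's kNN-based mutual information estimator as a stand-in for the paper's estimator. The permutation test below checks unconditional independence; the paper's method extends this to conditional independence and plugs it into a constraint-based structure learner.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_independence_test(x, y, n_perm=200, alpha=0.05, seed=0):
    """Permutation test for independence based on a kNN mutual-information
    estimate: permuting y destroys any dependence, giving a null sample."""
    rng = np.random.default_rng(seed)
    mi_obs = mutual_info_regression(x.reshape(-1, 1), y, random_state=seed)[0]
    null = [mutual_info_regression(x.reshape(-1, 1), rng.permutation(y),
                                   random_state=seed)[0]
            for _ in range(n_perm)]
    p_value = (1 + sum(v >= mi_obs for v in null)) / (1 + n_perm)
    return p_value < alpha, p_value      # reject H0 -> variables dependent

# a non-linear dependence that a linear-correlation test would miss
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 500)
y = x ** 2 + 0.1 * rng.standard_normal(500)
print(mi_independence_test(x, y))        # (True, small p-value)
```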
Online Adaptive Machine Learning Based Algorithm for Implied Volatility Surface Modeling | In this work, we design a machine learning based method, online adaptive
primal support vector regression (SVR), to model the implied volatility surface
(IVS). The algorithm proposed is the first derivation and implementation of an
online primal kernel SVR. It features enhancements that allow efficient online
adaptive learning by embedding the idea of local fitness and budget maintenance
to dynamically update support vectors upon pattern drifts. For algorithm
acceleration, we implement its most computationally intensive parts in
Field-Programmable Gate Array (FPGA) hardware, where a 132x speedup over CPU is achieved
during online prediction. Using intraday tick data from the E-mini S&P 500
options market, we show that the Gaussian kernel outperforms the linear kernel
in regulating the size of support vectors, and that our empirical IVS algorithm
beats two competing online methods with regards to model complexity and
regression errors (the mean absolute percentage error of our algorithm is up to
13%). Best results are obtained at the center of the IVS grid due to its larger
number of adjacent support vectors than the edges of the grid. Sensitivity
analysis is also presented to demonstrate how hyperparameters affect the error
rates and model complexity.
| 1 | 0 | 0 | 1 | 0 | 0 |
Hybrid control strategy for a semi active suspension system using fuzzy logic and bio-inspired chaotic fruit fly algorithm | This study proposes a control strategy for the efficient semi active
suspension systems utilizing a novel hybrid PID-fuzzy logic control scheme. In
the control architecture, we employ the Chaotic Fruit Fly Algorithm for PID
tuning, since it can avoid local minima by chaotic search. A novel linguistic
rule-based fuzzy logic controller is developed to aid the PID. A quarter-car
model with a non-linear spring system is used to test the performance of the
proposed control approach. A road terrain is chosen where the comfort and
handling parameters are tested specifically in the regions of abrupt changes.
The results suggest that the suspension systems controlled by the hybrid
strategy have the potential to offer more comfort and handling, reducing the
peak acceleration and suspension distortion by 83.3% and 28.57% respectively
when compared to the active suspension systems. Also, compared to the
performance of similar suspension control strategies optimized by stochastic
algorithms such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO)
and Bacterial Foraging Optimization (BFO), reductions in peak acceleration and
suspension distortion are found to be 25%, 32.3%, 54.6% and 23.35%, 22.5%,
5.4% respectively. The details of the solution methodology are presented in
the paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Use of Default Parameter Settings in the Empirical Evaluation of Classification Algorithms | We demonstrate that, for a range of state-of-the-art machine learning
algorithms, the differences in generalisation performance obtained using
default parameter settings and using parameters tuned via cross-validation can
be similar in magnitude to the differences in performance observed between
state-of-the-art and uncompetitive learning systems. This means that fair and
rigorous evaluation of new learning algorithms requires performance comparison
against benchmark methods with best-practice model selection procedures, rather
than using default parameter settings. We investigate the sensitivity of three
key machine learning algorithms (support vector machine, random forest and
rotation forest) to their default parameter settings, and provide guidance on
determining sensible default parameter values for implementations of these
algorithms. We also conduct an experimental comparison of these three
algorithms on 121 classification problems and find that, perhaps surprisingly,
rotation forest is significantly more accurate on average than both random
forest and a support vector machine.
| 1 | 0 | 0 | 1 | 0 | 0 |
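A minimal sketch of the evaluation point, assuming scikit-learn: the same SVM is scored with default parameters and with parameters tuned by cross-validated grid search (nested cross-validation). The dataset and grid are illustrative, not those of the paper's 121-problem study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

default_svm = make_pipeline(StandardScaler(), SVC())   # default C and gamma
grid = {"svc__C": [0.1, 1, 10, 100],
        "svc__gamma": [1e-3, 1e-2, 1e-1, 1.0]}
tuned_svm = GridSearchCV(make_pipeline(StandardScaler(), SVC()), grid, cv=5)

# nested cross-validation: the inner grid search is refit on each outer fold
print("default:", cross_val_score(default_svm, X, y, cv=10).mean())
print("tuned:  ", cross_val_score(tuned_svm, X, y, cv=10).mean())
```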
Coaction functors, II | In further study of the application of crossed-product functors to the
Baum-Connes Conjecture, Buss, Echterhoff, and Willett introduced various other
properties that crossed-product functors may have. Here we introduce and study
analogues of these properties for coaction functors, making sure that the
properties are preserved when the coaction functors are composed with the full
crossed product to make a crossed-product functor. The new properties for
coaction functors studied here are functoriality for generalized homomorphisms
and the correspondence property. We particularly study the connections with the
ideal property. The study of functoriality for generalized homomorphisms
requires a detailed development of the Fischer construction of maximalization
of coactions with regard to possibly degenerate homomorphisms into multiplier
algebras. We verify that all "KLQ" functors arising from large ideals of the
Fourier-Stieltjes algebra $B(G)$ have all the properties we study, and at the
opposite extreme we give an example of a coaction functor having none of the
properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Classification of Casimirs in 2D hydrodynamics | We describe a complete list of Casimirs for 2D Euler hydrodynamics on a
surface without boundary: we define generalized enstrophies which, along with
circulations, form a complete set of invariants for coadjoint orbits of
area-preserving diffeomorphisms on a surface. We also outline a possible
extension of the main notions to the boundary case and formulate several open
questions in that setting.
| 0 | 1 | 1 | 0 | 0 | 0 |
Novel Compliant omnicrawler-wheel transforming module | This paper presents a novel design of a crawler robot which is capable of
transforming its chassis from an Omni crawler mode to a large-sized wheel mode
using a novel mechanism. The transformation occurs without any additional
actuators. Interestingly, the robot can transform into a large-diameter,
small-width wheel, which enhances its maneuverability through a small turning
radius and fast, efficient locomotion. This paper contributes to improving the
locomotion modes of the previously developed hybrid compliant omnicrawler robot
CObRaSO. In addition to its legged and tracked mechanisms, CObRaSO can now
operate in a large-wheel mode, which extends its locomotion capabilities. The
mechanical design of the robot is explained in detail, and the transformation
experiments and torque analysis are clearly presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bifurcation of solutions to Hamiltonian boundary value problems | A bifurcation is a qualitative change in a family of solutions to an equation
produced by varying parameters. In contrast to the local bifurcations of
dynamical systems that are often related to a change in the number or stability
of equilibria, bifurcations of boundary value problems are global in nature and
may not be related to any obvious change in dynamical behaviour. Catastrophe
theory is a well-developed framework which studies the bifurcations of critical
points of functions. In this paper we study the bifurcations of solutions of
boundary-value problems for symplectic maps, using the language of
(finite-dimensional) singularity theory. We associate certain such problems
with a geometric picture involving the intersection of Lagrangian submanifolds,
and hence with the critical points of a suitable generating function. Within
this framework, we then study the effect of three special cases: (i) some
common boundary conditions, such as Dirichlet boundary conditions for
second-order systems, restrict the possible types of bifurcations (for example,
in generic planar systems only the A-series beginning with folds and cusps can
occur); (ii) integrable systems, such as planar Hamiltonian systems, can
exhibit a novel periodic pitchfork bifurcation; and (iii) systems with
Hamiltonian symmetries or reversing symmetries can exhibit restricted
bifurcations associated with the symmetry. This approach offers an alternative
to the analysis of critical points in function spaces, typically used in the
study of bifurcation of variational problems, and opens the way to the
detection of more exotic bifurcations than the simple folds and cusps that are
often found in examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
Trespassing the Boundaries: Labeling Temporal Bounds for Object Interactions in Egocentric Video | Manual annotations of temporal bounds for object interactions (i.e. start and
end times) are typical training input to recognition, localization and
detection algorithms. For three publicly available egocentric datasets, we
uncover inconsistencies in ground truth temporal bounds within and across
annotators and datasets. We systematically assess the robustness of
state-of-the-art approaches to changes in labeled temporal bounds, for object
interaction recognition. As boundaries are trespassed, a drop of up to 10% is
observed for both Improved Dense Trajectories and Two-Stream Convolutional
Neural Network.
We demonstrate that such disagreement stems from a limited understanding of
the distinct phases of an action, and propose annotating based on the Rubicon
Boundaries, inspired by a similarly named cognitive model, for consistent
temporal bounds of object interactions. Evaluated on a public dataset, we
report a 4% increase in overall accuracy, and an increase in accuracy for 55%
of classes when Rubicon Boundaries are used for temporal annotations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamical transport measurement of the Luttinger parameter in helical edges states of 2D topological insulators | One-dimensional (1D) electron systems in the presence of Coulomb interaction
are described by Luttinger liquid theory. The strength of Coulomb interaction
in the Luttinger liquid, as parameterized by the Luttinger parameter K, is in
general difficult to measure. This is because K is usually hidden in power-law
dependencies of observables as a function of temperature or applied bias. We
propose a dynamical way to measure K on the basis of an electronic
time-of-flight experiment. We argue that the helical Luttinger liquid at the
edge of a 2D topological insulator constitutes a preeminently suited
realization of a 1D system to test our proposal. This is based on the
robustness of helical liquids against elastic backscattering in the presence of
time reversal symmetry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Controlling thermal emission of phonon by magnetic metasurfaces | Our experiment shows that the thermal emission of phonons can be controlled by
the magnetic resonance (MR) mode in a metasurface (MTS). By changing the
structural parameters of the metasurface, the MR wavelength can be tuned to the
phonon resonance wavelength. This introduces a strong coupling between the
phonon and the MR, which results in an anticrossing phonon-plasmon mode. In the
process, we can manipulate the polarization and angular radiation of the
thermal emission of phonons. Such a metasurface provides a new kind of thermal
emission structure for various thermal management applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Distributed Average Tracking of Heterogeneous Physical Second-order Agents With No Input Signals Constraint | This paper addresses distributed average tracking of physical second-order
agents with heterogeneous nonlinear dynamics, where there is no constraint on
input signals. The nonlinear terms in agents' dynamics are heterogeneous,
satisfying a Lipschitz-like condition that will be defined later and is more
general than the Lipschitz condition. In the proposed algorithm, a control
input and a filter are designed for each agent. Each agent's filter has two
outputs and the idea is that the first output estimates the average of the
input signals and the second output estimates the average of the input
velocities asymptotically. In parallel, each agent's position and velocity are
driven to track, respectively, the first and the second outputs. Having
heterogeneous nonlinear terms in agents' dynamics necessitates designing the
filters for agents. Since the nonlinear terms in agents' dynamics can be
unbounded and the input signals are arbitrary, novel state-dependent
time-varying gains are employed in agents' filters and control inputs to
overcome these unboundedness effects. Finally, the results are improved to
achieve the distributed average tracking for a group of double-integrator
agents, where there is no constraint on input signals and the filter is not
required anymore. Numerical simulations are also presented to illustrate the
theoretical results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bound states of the two-dimensional Dirac equation for an energy-dependent hyperbolic Scarf potential | We study the two-dimensional massless Dirac equation for a potential that is
allowed to depend on the energy and on one of the spatial variables. After
determining a modified orthogonality relation and norm for such systems, we
present an application involving an energy-dependent version of the hyperbolic
Scarf potential. We construct closed-form bound state solutions of the
associated Dirac equation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Beyond Planar Symmetry: Modeling human perception of reflection and rotation symmetries in the wild | Humans take advantage of real world symmetries for various tasks, yet
capturing their superb symmetry perception mechanism with a computational model
remains elusive. Motivated by a new study demonstrating the extremely high
inter-person accuracy of human perceived symmetries in the wild, we have
constructed the first deep-learning neural network for reflection and rotation
symmetry detection (Sym-NET), trained on photos from the MS-COCO (Microsoft
Common Objects in Context) dataset with nearly 11K consistent symmetry labels from more
than 400 human observers. We employ novel methods to convert discrete human
labels into symmetry heatmaps, capture symmetry densely in an image and
quantitatively evaluate Sym-NET against multiple existing computer vision
algorithms. On CVPR 2013 symmetry competition testsets and unseen MS-COCO
photos, Sym-NET significantly outperforms all other competitors. Beyond
mathematically well-defined symmetries on a plane, Sym-NET demonstrates
abilities to identify viewpoint-varied 3D symmetries, partially occluded
symmetrical objects, and symmetries at a semantic level.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sharp Threshold of Blow-up and Scattering for the fractional Hartree equation | We consider the fractional Hartree equation in the $L^2$-supercritical case,
and we find a sharp threshold of the scattering versus blow-up dichotomy for
radial data: If
$M[u_{0}]^{\frac{s-s_c}{s_c}}E[u_{0}]<M[Q]^{\frac{s-s_c}{s_c}}E[Q]$ and
$M[u_{0}]^{\frac{s-s_c}{s_c}}\|u_{0}\|^2_{\dot H^s}<M[Q]^{\frac{s-s_c}{s_c}}\|Q\|^2_{\dot H^s}$,
then the solution $u(t)$ is globally well-posed and scatters; if
$M[u_{0}]^{\frac{s-s_c}{s_c}}E[u_{0}]<M[Q]^{\frac{s-s_c}{s_c}}E[Q]$ and
$M[u_{0}]^{\frac{s-s_c}{s_c}}\|u_{0}\|^2_{\dot H^s}>M[Q]^{\frac{s-s_c}{s_c}}\|Q\|^2_{\dot H^s}$,
the solution $u(t)$ blows up in finite time. This condition
is sharp in the sense that the solitary wave solution $e^{it}Q(x)$ is global
but not scattering, which satisfies the equality in the above conditions. Here,
$Q$ is the ground-state solution for the fractional Hartree equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
GP-SUM. Gaussian Processes Filtering of non-Gaussian Beliefs | This work studies the problem of stochastic dynamic filtering and state
propagation with complex beliefs. The main contribution is GP-SUM, a filtering
algorithm tailored to dynamic systems and observation models expressed as
Gaussian Processes (GP), and to states represented as a weighted sum of
Gaussians. The key attribute of GP-SUM is that it does not rely on
linearizations of the dynamic or observation models, or on unimodal Gaussian
approximations of the belief, and hence enables tracking complex state
distributions. The algorithm can be seen as a combination of a sampling-based
filter with a probabilistic Bayes filter. On the one hand, GP-SUM operates by
sampling the state distribution and propagating each sample through the dynamic
system and observation models. On the other hand, it achieves effective
sampling and accurate probabilistic propagation by relying on the GP form of
the system, and the sum-of-Gaussian form of the belief. We show that GP-SUM
outperforms several GP-Bayes and Particle Filters on a standard benchmark. We
also demonstrate its use in a pushing task, predicting with experimental
accuracy the naturally occurring non-Gaussian distributions.
| 1 | 0 | 0 | 1 | 0 | 0 |
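A minimal sketch of the GP-SUM propagation step under strong simplifying assumptions: the belief is a 1-D Gaussian mixture and `gp_predict` is a hypothetical stand-in for the trained GP dynamics model's predictive mean and variance. Observation updates, which would reweight the mixture, are omitted.

```python
import numpy as np

def gp_sum_propagate(weights, means, variances, gp_predict, n_samples, rng):
    """One propagation step: sample the current Gaussian-mixture belief and
    push each sample through the GP dynamics; every sample contributes one
    Gaussian (the GP predictive at that sample) to the next belief. Weights
    are kept uniform here; conditioning on observations would reweight them."""
    comps = rng.choice(len(weights), size=n_samples, p=weights)
    samples = rng.normal(means[comps], np.sqrt(variances[comps]))
    new_means, new_vars = gp_predict(samples)
    return np.full(n_samples, 1.0 / n_samples), new_means, new_vars

# toy 1-D nonlinear map standing in for a trained GP's predictive output
def gp_predict(x):
    return np.sin(x) + 0.5 * x, 0.05 + 0.01 * x ** 2

rng = np.random.default_rng(0)
w, m, v = np.array([1.0]), np.array([0.0]), np.array([0.25])
for _ in range(3):                              # propagate the belief 3 steps
    w, m, v = gp_sum_propagate(w, m, v, gp_predict, 200, rng)
mix_mean = np.sum(w * m)
mix_var = np.sum(w * (v + m ** 2)) - mix_mean ** 2
print(mix_mean, mix_var)                        # moments of the final mixture
```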
A Dynamic Programming Principle for Distribution-Constrained Optimal Stopping | We consider an optimal stopping problem where a constraint is placed on the
distribution of the stopping time. Reformulating the problem in terms of
so-called measure-valued martingales allows us to transform the marginal
constraint into an initial condition and view the problem as a stochastic
control problem; we establish the corresponding dynamic programming principle.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spectral algebra models of unstable v_n-periodic homotopy theory | We give a survey of a generalization of Quillen-Sullivan rational homotopy
theory which gives spectral algebra models of unstable v_n-periodic homotopy
types. In addition to describing and contextualizing our original approach, we
sketch two other recent approaches which are of a more conceptual nature, due
to Arone-Ching and Heuts. In the process, we also survey many relevant concepts
which arise in the study of spectral algebra over operads, including
topological André-Quillen cohomology, Koszul duality, and Goodwillie
calculus.
| 0 | 0 | 1 | 0 | 0 | 0 |
The length of excitable knots | The FitzHugh-Nagumo equation provides a simple mathematical model of cardiac
tissue as an excitable medium hosting spiral wave vortices. Here we present
extensive numerical simulations studying long-term dynamics of knotted vortex
string solutions for all torus knots up to crossing number 11. We demonstrate
that FitzHugh-Nagumo evolution preserves the knot topology for all the examples
presented, thereby providing a novel field theory approach to the study of
knots. Furthermore, the evolution yields a well-defined minimal length for each
knot that is comparable to the ropelength of ideal knots. We highlight the role
of the medium boundary in stabilizing the length of the knot and discuss the
implications beyond torus knots. By applying Moffatt's test we are able to show
that there is not a unique attractor within a given knot topology.
| 0 | 1 | 1 | 0 | 0 | 0 |
The Bane of Low-Dimensionality Clustering | In this paper, we give a conditional lower bound of $n^{\Omega(k)}$ on
running time for the classic k-median and k-means clustering objectives (where
n is the size of the input), even in low-dimensional Euclidean space of
dimension four, assuming the Exponential Time Hypothesis (ETH). We also
consider k-median (and k-means) with penalties where each point need not be
assigned to a center, in which case it must pay a penalty, and extend our lower
bound to at least three-dimensional Euclidean space.
This stands in stark contrast to many other geometric problems such as the
traveling salesman problem, or computing an independent set of unit spheres.
While these problems benefit from the so-called (limited) blessing of
dimensionality, as they can be solved in time $n^{O(k^{1-1/d})}$ or
$2^{n^{1-1/d}}$ in d dimensions, our work shows that widely-used clustering
objectives have a lower bound of $n^{\Omega(k)}$, even in dimension four.
We complete the picture by considering the two-dimensional case: we show that
there is no algorithm that solves the penalized version in time less than
$n^{o(\sqrt{k})}$, and provide a matching upper bound of $n^{O(\sqrt{k})}$.
The main tool we use to establish these lower bounds is the placement of
points on the moment curve, which takes its inspiration from constructions of
point sets yielding Delaunay complexes of high complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Blue-detuned magneto-optical trap | We present the properties and advantages of a new magneto-optical trap (MOT)
where blue-detuned light drives `type-II' transitions that have dark ground
states. Using $^{87}$Rb, we reach a radiation-pressure-limited density
exceeding $10^{11}$cm$^{-3}$ and a temperature below 30$\mu$K. The phase-space
density is higher than in normal atomic MOTs, and a million times higher than
comparable red-detuned type-II MOTs, making it particularly attractive for
molecular MOTs which rely on type-II transitions. The loss of atoms from the
trap is dominated by ultracold collisions between Rb atoms. For typical
trapping conditions, we measure a loss rate of
$1.8(4)\times10^{-10}$cm$^{3}$s$^{-1}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Polishness of some topologies related to word or tree automata | We prove that the Büchi topology and the automatic topology are Polish. We
also show that this cannot be fully extended to the case of a space of infinite
labelled binary trees; in particular the Büchi and the Muller topologies are
not Polish in this case.
| 1 | 0 | 1 | 0 | 0 | 0 |
Emulating satellite drag from large simulation experiments | Obtaining accurate estimates of satellite drag coefficients in low Earth
orbit is a crucial component in positioning and collision avoidance. Simulators
can produce accurate estimates, but their computational expense is much too
large for real-time application. A pilot study showed that Gaussian process
(GP) surrogate models could accurately emulate simulations. However, cubic
runtime for training GPs means that they could only be applied to a narrow
range of input configurations to achieve the desired level of accuracy. In this
paper we show how extensions to the local approximate Gaussian Process (laGP)
method allow accurate full-scale emulation. The new methodological
contributions, which involve a multi-level global/local modeling approach, and
a set-wise approach to local subset selection, are shown to perform well in
benchmark and synthetic data settings. We conclude by demonstrating that our
method achieves the desired level of accuracy, besting simpler viable (i.e.,
computationally tractable) global and local modeling approaches, when trained
on seventy thousand core hours of drag simulations for two real-world
satellites: the Hubble Space Telescope (HST) and the Gravity Recovery and
Climate Experiment (GRACE).
| 0 | 0 | 0 | 1 | 0 | 0 |
Rescaling and other forms of unsupervised preprocessing introduce bias into cross-validation | Cross-validation of predictive models is the de-facto standard for model
selection and evaluation. In proper use, it provides an unbiased estimate of a
model's predictive performance. However, data sets often undergo a preliminary
data-dependent transformation, such as feature rescaling or dimensionality
reduction, prior to cross-validation. It is widely believed that such a
preprocessing stage, if done in an unsupervised manner that does not consider
the class labels or response values, has no effect on the validity of
cross-validation. In this paper, we show that this belief is not true.
Preliminary preprocessing can introduce either a positive or negative bias into
the estimates of model performance. Thus, it may lead to sub-optimal choices of
model parameters and invalid inference. In light of this, the scientific
community should re-examine the use of preliminary preprocessing prior to
cross-validation across the various application domains. By default, all data
transformations, including unsupervised preprocessing stages, should be learned
only from the training samples, and then merely applied to the validation and
testing samples.
| 1 | 0 | 0 | 1 | 0 | 0 |
Causal Inference on Discrete Data via Estimating Distance Correlations | In this paper, we deal with the problem of inferring causal directions when
the data lie in a discrete domain. By considering the distribution of the cause
$P(X)$ and the conditional distribution mapping cause to effect $P(Y|X)$ as
independent random variables, we propose to infer the causal direction via
comparing the distance correlation between $P(X)$ and $P(Y|X)$ with the
distance correlation between $P(Y)$ and $P(X|Y)$. We infer "$X$ causes $Y$" if
the dependence coefficient between $P(X)$ and $P(Y|X)$ is smaller. Experiments
demonstrate the performance of the proposed method.
| 0 | 0 | 0 | 1 | 0 | 0 |
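A minimal sketch of the decision rule described in the abstract above, assuming discrete data and a textbook (biased) sample distance correlation; all helper names and the toy data are mine, not the paper's.

```python
import numpy as np

def _centered(a):
    # double-centered pairwise Euclidean distance matrix
    a = np.asarray(a, float).reshape(len(a), -1)
    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
    return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

def dist_corr(u, v):
    # biased sample distance correlation (textbook estimator)
    A, B = _centered(u), _centered(v)
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max((A * B).mean(), 0.0) / denom) if denom > 0 else 0.0

def infer_direction(x, y):
    # decide "X->Y" if dCor(P(X), P(Y|X)) < dCor(P(Y), P(X|Y)), as in the abstract
    xs, ys = np.unique(x), np.unique(y)
    px = np.array([np.mean(x == a) for a in xs])
    py_x = np.array([[np.mean(y[x == a] == b) for b in ys] for a in xs])
    py = np.array([np.mean(y == b) for b in ys])
    px_y = np.array([[np.mean(x[y == b] == a) for a in xs] for b in ys])
    return "X->Y" if dist_corr(px, py_x) < dist_corr(py, px_y) else "Y->X"

# Toy usage: X uniform on {0,1,2}, Y a noisy function of X
rng = np.random.default_rng(0)
x = rng.integers(0, 3, 2000)
y = (x + rng.integers(0, 2, 2000)) % 3
print(infer_direction(x, y))
```

Here each support point of $X$ contributes one sample pair: the scalar $P(X=x)$ and the conditional row $P(Y\,|\,X=x)$; the exact weighting used in the paper may differ.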
A Class of Exponential Sequences with Shift-Invariant Discriminators | The discriminator of an integer sequence s = (s(i))_{i>=0}, introduced by
Arnold, Benkoski, and McCabe in 1985, is the function D_s(n) that sends n to
the least integer m such that the numbers s(0), s(1), ..., s(n-1) are pairwise
incongruent modulo m. In this note we present a class of exponential sequences
that have the special property that their discriminators are shift-invariant,
i.e., that the discriminator of the sequence is the same even if the sequence
is shifted by any positive constant.
| 1 | 0 | 1 | 0 | 0 | 0 |
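A worked illustration of the definition in the abstract above; the brute-force helper below is mine and follows the definition directly.

```python
from itertools import count

def discriminator(s, n):
    # least m such that s(0), ..., s(n-1) are pairwise incongruent mod m
    vals = [s(i) for i in range(n)]
    for m in count(1):
        if len({v % m for v in vals}) == n:
            return m

# Example with s(i) = 2**i (an illustrative choice, not taken from the paper)
print([discriminator(lambda i: 2 ** i, n) for n in range(1, 9)])
```

Shift-invariance, in these terms, means that `discriminator(lambda i: s(i + k), n)` returns the same value for every shift k >= 0.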
Contraction par Frobenius et modules de Steinberg | For a reductive group G defined over an algebraically closed field of
positive characteristic, we show that the Frobenius contraction functor of
G-modules is right adjoint to the Frobenius twist of the modules tensored with
the Steinberg module twice. It follows that the Frobenius contraction functor
preserves injectivity and good filtrations, but not semisimplicity.
| 0 | 0 | 1 | 0 | 0 | 0 |
SafeDrive: A Robust Lane Tracking System for Autonomous and Assisted Driving Under Limited Visibility | We present an approach towards robust lane tracking for assisted and
autonomous driving, particularly under poor visibility. Autonomous detection of
lane markers improves road safety, and purely visual tracking is desirable for
widespread vehicle compatibility and reducing sensor intrusion, cost, and
energy consumption. However, visual approaches are often ineffective because of
a number of factors, including but not limited to occlusion, poor weather
conditions, and paint wear-off. Our method, named SafeDrive, attempts to
improve visual lane detection approaches in drastically degraded visual
conditions without relying on additional active sensors. In scenarios where
visual lane detection algorithms are unable to detect lane markers, the
proposed approach uses location information of the vehicle to locate and access
alternate imagery of the road and attempts detection on this secondary image.
Subsequently, by using a combination of feature-based and pixel-based
alignment, an estimated location of the lane marker is found in the current
scene. We demonstrate the effectiveness of our system on actual driving data
from locations in the United States with Google Street View as the source of
alternate imagery.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the analysis of personalized medication response and classification of case vs control patients in mobile health studies: the mPower case study | In this work we provide two contributions to the analysis of
longitudinal data collected by smartphones in mobile health applications.
First, we propose a novel statistical approach to disentangle personalized
treatment and "time-of-the-day" effects in observational studies. Under the
assumption of no unmeasured confounders, we show how to use conditional
independence relations in the data in order to determine if a difference in
performance between activity tasks performed before and after the participant
has taken medication is potentially due to an effect of the medication or to
a "time-of-the-day" effect (or to both). Second, we show that smartphone
data collected from a given study participant can represent a "digital
fingerprint" of the participant, and that classifiers of case/control labels,
constructed using longitudinal data, can show artificially improved performance
when data from each participant is included in both training and test sets. We
illustrate our contributions using data collected during the first 6 months of
the mPower study.
| 0 | 0 | 0 | 1 | 0 | 0 |
Exponential Integrators in Time-Dependent Density Functional Calculations | The integrating factor and exponential time differencing methods are
implemented and tested for solving the time-dependent Kohn--Sham equations.
Popular time propagation methods used in physics, as well as other robust
numerical approaches, are compared to these exponential integrator methods in
order to judge the relative merit of the computational schemes. We determine an
improvement in accuracy of multiple orders of magnitude when describing
dynamics driven primarily by a nonlinear potential. For cases of dynamics
driven by a time-dependent external potential, the accuracy gain of the exponential
integrator methods is smaller, but they still match or outperform the best of
the conventional methods tested.
| 0 | 1 | 0 | 0 | 0 | 0 |
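An illustrative aside on the abstract above, in my notation: for a semilinear problem $\dot u = Lu + N(u)$ with stiff linear part $L$, the integrating-factor (IF) Euler step and the first-order exponential-time-differencing (ETD1) step take the standard forms
$$u_{n+1}^{\mathrm{IF}} = e^{Lh}\big(u_n + h\,N(u_n)\big), \qquad u_{n+1}^{\mathrm{ETD1}} = e^{Lh}u_n + L^{-1}\big(e^{Lh} - I\big)N(u_n),$$
so the linear part is propagated exactly and only the nonlinearity is approximated, which is what gives these schemes their edge when the dynamics are dominated by a nonlinear potential.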
A Distributed Scheduling Algorithm to Provide Quality-of-Service in Multihop Wireless Networks | Controlling multihop wireless networks in a distributed manner while meeting
end-to-end delay requirements for different flows is a challenging problem.
Using the notions of Draining Time and Discrete Review from the theory of fluid
limits of queues, we construct an algorithm that meets the delay requirements
of various flows in a network. The algorithm involves an optimization which is
implemented in a cyclic distributed manner across nodes by using the technique
of iterative gradient ascent, with minimal information exchange between nodes.
The algorithm uses time varying weights to give priority to flows. The
performance of the algorithm is studied in a network with interference modelled
by independent sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Multi-task Deep Learning Architecture for Maritime Surveillance using AIS Data Streams | In a world of global trading, maritime safety, security and efficiency are
crucial issues. We propose a multi-task deep learning framework for vessel
monitoring using Automatic Identification System (AIS) data streams. We combine
recurrent neural networks with latent variable modeling and an embedding of AIS
messages to a new representation space to jointly address key issues to be
dealt with when considering AIS data streams: massive amount of streaming data,
noisy data, and irregular time sampling. We demonstrate the relevance of the
proposed deep learning framework on real AIS datasets for a three-task setting,
namely trajectory reconstruction, anomaly detection and vessel type
identification.
| 0 | 0 | 0 | 1 | 0 | 0 |
The CCI30 Index | We describe the design of the CCI30 cryptocurrency index.
| 0 | 0 | 0 | 0 | 0 | 1 |
Training Probabilistic Spiking Neural Networks with First-to-spike Decoding | Third-generation neural networks, or Spiking Neural Networks (SNNs), aim at
harnessing the energy efficiency of spike-domain processing by building on
computing elements that operate on, and exchange, spikes. In this paper, the
problem of training a two-layer SNN is studied for the purpose of
classification, under a Generalized Linear Model (GLM) probabilistic neural
model that was previously considered within the computational neuroscience
literature. Conventional classification rules for SNNs operate offline based on
the number of output spikes at each output neuron. In contrast, a novel
training method is proposed here for a first-to-spike decoding rule, whereby
the SNN can perform an early classification decision once spike firing is
detected at an output neuron. Numerical results bring insights into the optimal
parameter selection for the GLM neuron and on the accuracy-complexity trade-off
performance of conventional and first-to-spike decoding.
| 1 | 0 | 0 | 1 | 0 | 0 |
An Effective Training Method For Deep Convolutional Neural Network | In this paper, we propose the nonlinearity generation method to speed up and
stabilize the training of deep convolutional neural networks. The proposed
method modifies a family of activation functions as nonlinearity generators
(NGs). NGs make the activation functions linear symmetric for their inputs to
lower model capacity, and automatically introduce nonlinearity to enhance the
capacity of the model during training. The proposed method can be considered an
unusual form of regularization: the model parameters are obtained by first
training, for only a few iterations, a relatively low-capacity model that is
easy to optimize, and these parameters are then reused to initialize a
higher-capacity model. We derive upper and lower bounds
on the variance of the weight variation, and show that the initial symmetric
structure of NGs helps stabilize training. We evaluate the proposed method on
different frameworks of convolutional neural networks over two object
recognition benchmark tasks (CIFAR-10 and CIFAR-100). Experimental results
showed that the proposed method allows us to (1) speed up the convergence of
training, (2) use less careful weight initialization, (3) improve or at
least maintain the performance of the model at negligible extra computational
cost, and (4) easily train a very deep model.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Multi-Layer K-means Approach for Multi-Sensor Data Pattern Recognition in Multi-Target Localization | Data-target association is an important step in multi-target localization for
the intelligent operation of unmanned systems in numerous applications such
as search and rescue, traffic management and surveillance. The objective of
this paper is to present an innovative data association learning approach named
multi-layer K-means (MLKM) based on leveraging the advantages of some existing
machine learning approaches, including K-means, K-means++, and deep neural
networks. To enable the accurate data association from different sensors for
efficient target localization, MLKM relies on the clustering capabilities of
K-means++ structured in a multi-layer framework with the error correction
feature that is motivated by the backpropagation algorithm well known in deep
learning research. To show the effectiveness of the MLKM method, numerous
simulation examples are conducted to compare its performance with K-means,
K-means++, and deep neural networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Investigation of Monaural Front-End Processing for Robust ASR without Retraining or Joint-Training | In recent years, monaural speech separation has been formulated as a
supervised learning problem; it has been systematically studied and shown to
deliver dramatic improvements in speech intelligibility and quality for human
listeners. However, it has not been well investigated whether the methods can
be employed as the front-end processing and directly improve the performance of
a machine listener, i.e., an automatic speech recognizer, without retraining or
joint-training the acoustic model. In this paper, we explore the effectiveness
of the independent front-end processing for the multi-conditional trained ASR
on the CHiME-3 challenge. We find that directly feeding the enhanced features
to ASR yields 36.40% and 11.78% relative WER reductions for the GMM-based and
DNN-based ASR, respectively. We also investigate the effect of the noisy phase
and the generalization ability under unmatched noise conditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Average treatment effects in the presence of unknown interference | We investigate large-sample properties of treatment effect estimators under
unknown interference in randomized experiments. The inferential target is a
generalization of the average treatment effect estimand that marginalizes over
potential spillover effects. We show that estimators commonly used to estimate
treatment effects under no-interference are consistent for the generalized
estimand for several common experimental designs under limited but otherwise
arbitrary and unknown interference. The rates of convergence depend on the rate
at which the amount of interference grows and the degree to which it aligns
with dependencies in treatment assignment. Importantly for practitioners, the
results imply that if one erroneously assumes that units do not interfere in a
setting with limited, or even moderate, interference, standard estimators are
nevertheless likely to be close to an average treatment effect if the sample is
sufficiently large.
| 0 | 0 | 1 | 1 | 0 | 0 |
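An illustrative aside, in my notation (the paper's estimand may be stated differently): writing $y_i(z; \mathbf{z}_{-i})$ for unit $i$'s potential outcome under its own assignment $z$ and the other units' assignments $\mathbf{z}_{-i}$, a treatment effect that marginalizes over spillovers is
$$\tau = \frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\big[\,y_i(1; \mathbf{Z}_{-i}) - y_i(0; \mathbf{Z}_{-i})\,\big],$$
where the expectation is over the design's assignment of all other units, so interference is averaged over rather than assumed absent.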
Spatial localization for nonlinear dynamical stochastic models for excitable media | Nonlinear dynamical stochastic models are ubiquitous in different areas.
Excitable media models are typical examples with large state dimensions. Their
statistical properties are often of great interest but are also very
challenging to compute. In this article, a theoretical framework to understand
the spatial localization for a large class of stochastically coupled nonlinear
systems in high dimensions is developed. Rigorous mathematical theories show
the covariance decay behavior due to both local and nonlocal effects, which
result from the diffusion and the mean field interaction, respectively. The
analysis is based on a comparison with an appropriate linear surrogate model,
of which the covariance propagation can be computed explicitly. Two important
applications of these theoretical results are discussed. They are the spatial
averaging strategy for efficiently sampling the covariance matrix and the
localization technique in data assimilation. Test examples of a surrogate
linear model and a stochastically coupled FitzHugh-Nagumo model for excitable
media are adopted to validate the theoretical results. The latter is also used
for a systematic study of the spatial averaging strategy in efficiently
sampling the covariance matrix in different dynamical regimes.
| 0 | 0 | 1 | 1 | 0 | 0 |
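For concreteness (my notation; the paper's exact scaling may differ), one common form of a stochastically coupled FitzHugh-Nagumo lattice model of the kind referred to above is
$$du_i = \Big(u_i - \tfrac{1}{3}u_i^3 - v_i + d\,(u_{i+1} - 2u_i + u_{i-1})\Big)\,dt + \sigma\,dW_i, \qquad dv_i = \varepsilon\,(u_i + a - b\,v_i)\,dt,$$
where the discrete Laplacian supplies the local (diffusive) coupling, the $W_i$ are independent Wiener processes, and a mean-field interaction would add a term proportional to $\bar u - u_i$.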
Nonlinear photoionization of transparent solids: a nonperturbative theory obeying selection rules | We provide a nonperturbative theory for photoionization of transparent
solids. By applying a particular steepest-descent method, we derive analytical
expressions for the photoionization rate within the two-band structure model,
which consistently account for the selection rules related to the parity of
the number of absorbed photons (odd or even). We demonstrate the crucial
role of the interference of the transition amplitudes (saddle-points), which in
the semi-classical limit, can be interpreted in terms of interfering quantum
trajectories. Keldysh's foundational work of laser physics [Sov. Phys. JETP 20,
1307 (1965)] disregarded this interference, resulting in the violation of
selection rules. We provide an improved Keldysh photoionization theory and
show its excellent agreement with measurements for the frequency dependence of
the two-photon absorption and nonlinear refractive index coefficients in
dielectrics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantifying the Model Risk Inherent in the Calibration and Recalibration of Option Pricing Models | We focus on two particular aspects of model risk: the inability of a chosen
model to fit observed market prices at a given point in time (calibration
error) and the model risk due to recalibration of model parameters (in
contradiction to the model assumptions). In this context, we follow the
approach of Glasserman and Xu (2014) and use relative entropy as a pre-metric
in order to quantify these two sources of model risk in a common framework, and
consider the trade-offs between them when choosing a model and the frequency
with which to recalibrate to the market. We illustrate this approach applied to
the models of Black and Scholes (1973) and Heston (1993), using option data for
Apple (AAPL) and Google (GOOG). We find that recalibrating a model more
frequently simply shifts model risk from one type to another, without any
substantial reduction of aggregate model risk. Furthermore, moving to a more
complicated stochastic model is seen to be counterproductive if one requires a
high degree of robustness, for example as quantified by a 99 percent quantile
of aggregate model risk.
| 0 | 0 | 0 | 0 | 0 | 1 |
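Background aside, in my notation: in the Glasserman and Xu framework cited above, model risk relative to a baseline model $P$ is measured by relative entropy; for an alternative model $Q$,
$$D(Q\,\|\,P) = \mathbb{E}_{Q}\!\left[\log \frac{dQ}{dP}\right],$$
and the worst-case expected loss over alternatives within an entropy budget, $\sup_{Q:\,D(Q\|P)\le\eta} \mathbb{E}_Q[X]$, quantifies exposure to model error, so calibration error and recalibration risk can be compared on this common scale.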
CN rings in full protoplanetary disks around young stars as probes of disk structure | Bright ring-like emission structures of the CN molecule have been observed in
protoplanetary disks. We investigate whether such structures are due to the
morphology of the disk itself or if they are instead an intrinsic feature of CN
emission. With the intention of using CN as a diagnostic, we also address to
which physical and chemical parameters CN is most sensitive. A set of disk
models were run for different stellar spectra, masses, and physical structures
via the 2D thermochemical code DALI. An updated chemical network that accounts
for the most relevant CN reactions was adopted. Ring-shaped emission is found
to be a common feature of all adopted models; the highest abundance is found in
the upper outer regions of the disk, and the column density peaks at 30-100 AU
for T Tauri stars with standard accretion rates. Higher mass disks generally
show brighter CN. Higher UV fields, such as those appropriate for T Tauri stars
with high accretion rates or for Herbig Ae stars or for higher disk flaring,
generally result in brighter and larger rings. These trends are due to the main
formation paths of CN, which all start with vibrationally excited H2*
molecules, which are produced through far-ultraviolet (FUV) pumping of H2. The
model results compare well with observed disk-integrated CN fluxes and the
observed location of the CN ring for the TW Hya disk. CN rings are produced
naturally in protoplanetary disks and do not require a specific underlying disk
structure such as a dust cavity or gap. The strong link between FUV flux and CN
emission can provide critical information regarding the vertical structure of
the disk and the distribution of dust grains which affects the UV penetration,
and could help to break some degeneracies in the SED fitting. In contrast with
C2H or c-C3H2, the CN flux is not very sensitive to carbon and oxygen
depletion.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Trio Neural Model for Dynamic Entity Relatedness Ranking | Measuring entity relatedness is a fundamental task for many natural language
processing and information retrieval applications. Prior work often studies
entity relatedness in static settings and an unsupervised manner. However,
entities in the real world are often involved in many different relationships;
consequently, entity relations are highly dynamic over time. In this work, we
propose a neural network-based approach for dynamic entity relatedness,
leveraging the collective attention as supervision. Our model is capable of
learning rich and different entity representations in a joint framework.
Through extensive experiments on large-scale datasets, we demonstrate that our
method achieves better results than competitive baselines.
| 0 | 0 | 0 | 1 | 0 | 0 |
Dynamic k-Struve Sumudu Solutions for Fractional Kinetic Equations | In the present study, we investigate solutions of fractional kinetic
equations involving k-Struve functions using the Sumudu transform. The methodology
and results can be considered and applied to various related fractional
problems in mathematical physics.
| 0 | 0 | 1 | 0 | 0 | 0 |
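For readers unfamiliar with the transform (my statement of the standard definition): the Sumudu transform of a suitable function $f$ is
$$S[f](u) = \int_0^{\infty} f(ut)\, e^{-t}\, dt,$$
for $u$ in an interval where the integral converges; it is a scale- and unit-preserving cousin of the Laplace transform, which is what makes it convenient for fractional kinetic equations.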
Does mitigating ML's impact disparity require treatment disparity? | Following related work in law and policy, two notions of disparity have come
to shape the study of fairness in algorithmic decision-making. Algorithms
exhibit treatment disparity if they formally treat members of protected
subgroups differently; algorithms exhibit impact disparity when outcomes differ
across subgroups, even if the correlation arises unintentionally. Naturally, we
can achieve impact parity through purposeful treatment disparity. In one thread
of technical work, papers aim to reconcile the two forms of parity by proposing
disparate learning processes (DLPs). Here, the learning algorithm can see group
membership during training but produce a classifier that is group-blind at test
time. In this paper, we show theoretically that: (i) When other features
correlate to group membership, DLPs will (indirectly) implement treatment
disparity, undermining the policy desiderata they are designed to address; (ii)
When group membership is partly revealed by other features, DLPs induce
within-class discrimination; and (iii) In general, DLPs provide a suboptimal
trade-off between accuracy and impact parity. Based on our technical analysis,
we argue that transparent treatment disparity is preferable to occluded methods
for achieving impact parity. Experimental results on several real-world
datasets highlight the practical consequences of applying DLPs vs. per-group
thresholds.
| 1 | 0 | 0 | 1 | 0 | 0 |
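A hedged sketch of the per-group threshold baseline mentioned at the end of the abstract above, assuming a scalar risk score and a discrete protected attribute; all names and the synthetic data are mine.

```python
import numpy as np

def per_group_thresholds(scores, groups, target_rate):
    """Choose one decision threshold per group so that each group's
    positive (accept) rate is approximately target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # accept the top target_rate fraction within each group
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def decide(scores, groups, thresholds):
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Toy usage with synthetic scores for two groups
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1, 500), rng.normal(0.5, 1, 500)])
groups = np.array([0] * 500 + [1] * 500)
th = per_group_thresholds(scores, groups, target_rate=0.3)
print(th, decide(scores, groups, th).mean())
```

This is transparent treatment disparity: group membership is used explicitly at decision time to achieve impact parity, in contrast to the group-blind DLP classifiers the paper critiques.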
Topological Terms and Phases of Sigma Models | We study boundary conditions of topological sigma models with the goal of
generalizing the concepts of anomalous symmetry and symmetry protected
topological order. We find a version of 't Hooft's anomaly matching conditions
on the renormalization group flow of boundaries of invertible topological sigma
models and discuss several examples of anomalous boundary theories. We also
comment on bulk topological transitions in dynamical sigma models and argue
that one can, with care, use topological data to draw sigma model phase
diagrams.
| 0 | 1 | 0 | 0 | 0 | 0 |
A commuting-vector-field approach to some dispersive estimates | We prove the pointwise decay of solutions to three linear equations: (i) the
transport equation in phase space generalizing the classical Vlasov equation,
(ii) the linear Schrödinger equation, (iii) the Airy (linear KdV) equation. The
usual proofs use explicit representation formulae, and either obtain
$L^1$--$L^\infty$ decay through directly estimating the fundamental solution
in physical space, or by studying oscillatory integrals coming from the
representation in Fourier space. Our proof instead combines "vector field"
commutators that capture the inherent symmetries of the relevant equations with
conservation laws for mass and energy to get space-time weighted energy
estimates. Combined with a simple version of Sobolev's inequality this gives
pointwise decay as desired. In the case of the Vlasov and Schrödinger equations
we can recover sharp pointwise decay; in the Schrödinger case we also show how
to obtain local energy decay as well as Strichartz-type estimates. For the Airy
equation we obtain a local energy decay that is almost sharp from the scaling
point of view, but nonetheless misses the classical estimates by a gap. This
work is inspired by the work of Klainerman on $L^2$--$L^\infty$ decay of wave
equations, as well as the recent work of Fajman, Joudioux, and Smulevici on
decay of mass distributions for the relativistic Vlasov equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
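A concrete instance of a commuting vector field of the kind the abstract above describes (a standard example, though the choice here is mine): for the free Schrödinger equation $i\partial_t u + \Delta u = 0$, the Galilean operators
$$\Gamma_j = x_j + 2it\,\partial_{x_j}, \qquad [\,\Gamma_j,\ i\partial_t + \Delta\,] = 0,$$
commute with the equation, so $\|\Gamma_j u(t)\|_{L^2}$ is conserved; weighted Sobolev-type inequalities then convert such conserved quantities into pointwise decay.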
The connected countable spaces of Bing and Ritter are topologically homogeneous | Answering a problem posed by the second author on MathOverflow, we prove that
the connected countable Hausdorff spaces constructed by Bing and Ritter are
topologically homogeneous.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic Graph Convolutional Networks | Many different classification tasks need to manage structured data, which are
usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that
the vertices/edges of each graph may change during time. Our goal is to jointly
exploit structured data and temporal information through the use of a neural
network model. To the best of our knowledge, this task has not been addressed
using these kinds of architectures. For this reason, we propose two novel
approaches, which combine Long Short-Term Memory networks and Graph
Convolutional Networks to learn long short-term dependencies together with
graph structure. The quality of our methods is confirmed by the promising
results achieved.
| 1 | 0 | 0 | 1 | 0 | 0 |
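A minimal sketch of one way to combine a GCN layer with an LSTM over time, assuming PyTorch; this illustrates the general architecture family described above, not the authors' exact model, and all names are mine.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        # a_hat: (n, n) normalized adjacency with self-loops; x: (n, in_dim)
        return torch.relu(self.lin(a_hat @ x))

class DynamicGCN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, adjs, feats):
        # adjs: (T, n, n), one graph per time step; feats: (T, n, in_dim)
        h = torch.stack([self.gcn(a, x) for a, x in zip(adjs, feats)])  # (T, n, d)
        h = h.permute(1, 0, 2)              # (n, T, d): one sequence per node
        out, _ = self.lstm(h)               # temporal dependencies across steps
        return out[:, -1, :]                # last hidden state per node

# Toy usage: 5 time steps, 10 nodes, 8 input features
T, n, f = 5, 10, 8
adjs = torch.eye(n).repeat(T, 1, 1)
feats = torch.randn(T, n, f)
print(DynamicGCN(f, 16)(adjs, feats).shape)  # torch.Size([10, 16])
```

The design choice illustrated is "structure first, time second": each snapshot is encoded by the GCN, and the LSTM then models how the node embeddings evolve.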
Robust and Efficient Parametric Spectral Estimation in Atomic Force Microscopy | An atomic force microscope (AFM) is capable of producing ultra-high
resolution measurements of nanoscopic objects and forces. It is an
indispensable tool for various scientific disciplines such as molecular
engineering, solid-state physics, and cell biology. Prior to a given
experiment, the AFM must be calibrated by fitting a spectral density model to
baseline recordings. However, since AFM experiments typically collect large
amounts of data, parameter estimation by maximum likelihood can be
prohibitively expensive. Thus, practitioners routinely employ a much faster
least-squares estimation method, at the cost of substantially reduced
statistical efficiency. Additionally, AFM data is often contaminated by
periodic electronic noise, to which parameter estimates are highly sensitive.
This article proposes a two-stage estimator to address these issues.
Preliminary parameter estimates are first obtained by a variance-stabilizing
procedure, by which the simplicity of least-squares combines with the
efficiency of maximum likelihood. A test for spectral periodicities then
eliminates high-impact outliers, considerably and robustly protecting the
second-stage estimator from the effects of electronic noise. Simulation and
experimental results indicate that a two- to ten-fold reduction in mean squared
error can be expected by applying our methodology.
| 0 | 0 | 0 | 1 | 0 | 0 |
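Background aside, in my notation: given periodogram ordinates $I(\omega_k)$ and a parametric spectral density $f_\theta$, maximum likelihood is typically approximated by the Whittle log-likelihood over the Fourier frequencies,
$$\ell_W(\theta) = -\sum_{k}\left(\log f_\theta(\omega_k) + \frac{I(\omega_k)}{f_\theta(\omega_k)}\right),$$
whereas the faster least-squares alternative fits $f_\theta$ to $I(\omega_k)$ (often after binning) by minimizing squared residuals, trading statistical efficiency for speed; the two-stage estimator in the abstract is designed to recover the best of both.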
A Robotic Auto-Focus System based on Deep Reinforcement Learning | Considering its advantages in dealing with high-dimensional visual input and
learning control policies in discrete domains, Deep Q Network (DQN) could be an
alternative to traditional auto-focus methods in the future. In this
paper, based on Deep Reinforcement Learning, we propose an end-to-end approach
that can learn auto-focus policies from visual input and finish at a clear spot
automatically. We demonstrate that our method (discretizing the action space
with coarse-to-fine steps and applying DQN) is not only a solution to auto-focus
but also a general approach towards vision-based control problems. Separate
phases of training in virtual and real environments are applied to obtain an
effective model. Virtual experiments, which are carried out after the virtual
training phase, indicate that our method can achieve 100% accuracy on a
given view over different focus ranges. Further training on real robots can
eliminate the deviation between the simulator and real scenario, leading to
reliable performances in real applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Wall modeling via function enrichment: extension to detached-eddy simulation | We extend the approach of wall modeling via function enrichment to
detached-eddy simulation. The wall model aims at using coarse cells in the
near-wall region by modeling the velocity profile in the viscous sublayer and
log-layer. However, unlike other wall models, the full Navier-Stokes equations
are still discretely fulfilled, including the pressure gradient and convective
term. This is achieved by enriching the elements of the high-order
discontinuous Galerkin method with the law-of-the-wall. As a result, the
Galerkin method can "choose" the optimal solution among the polynomial and
enrichment shape functions. The detached-eddy simulation methodology provides a
suitable turbulence model for the coarse near-wall cells. The approach is
applied to wall-modeled LES of turbulent channel flow in a wide range of
Reynolds numbers. Flow over periodic hills shows the superiority compared to an
equilibrium wall model under separated flow conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
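For context (my illustration; the paper's exact enrichment may differ): a typical law-of-the-wall profile used in such enrichments is Spalding's formula, which covers the viscous sublayer, buffer layer, and log-layer in one expression,
$$y^+ = u^+ + e^{-\kappa B}\left(e^{\kappa u^+} - 1 - \kappa u^+ - \frac{(\kappa u^+)^2}{2} - \frac{(\kappa u^+)^3}{6}\right), \qquad \kappa \approx 0.41,\; B \approx 5.2,$$
and the discontinuous Galerkin velocity space is then spanned by the polynomial shape functions plus a wall function built from this profile, so the solver can weight the two contributions as the flow dictates.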
Existence and uniqueness of periodic solution of nth-order Equations with delay in Banach space having Fourier type | The aim of this work is to study the existence of a periodic solutions of
nth-order differential equations with delay
$$\frac{d}{dt}x(t) + \frac{d^2}{dt^2}x(t) + \frac{d^3}{dt^3}x(t) + \cdots + \frac{d^n}{dt^n}x(t) = Ax(t) + L(x_t) + f(t).$$
Our approach is based on the M-boundedness of linear operators, Fourier type,
$B^s_{p,q}$-multipliers, and Besov spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A proof of Hilbert's theorem on ternary quartic forms with the ladder technique | This paper proposes a totally constructive approach for the proof of
Hilbert's theorem on ternary quartic forms. The main contribution is the ladder
technique, with which Hilbert's theorem is proved in an explicit, constructive manner.
| 1 | 0 | 1 | 0 | 0 | 0 |
Data-driven Job Search Engine Using Skills and Company Attribute Filters | According to a report online, more than 200 million unique users search for
jobs online every month. This incredibly large and fast growing demand has
enticed software giants such as Google and Facebook to enter this space, which
was previously dominated by companies such as LinkedIn, Indeed and
CareerBuilder. Recently, Google released their "AI-powered Jobs Search Engine",
"Google For Jobs" while Facebook released "Facebook Jobs" within their
platform. These current job search engines and platforms allow users to search
for jobs based on generic filters such as job title, date posted,
experience level, company and salary. However, they have severely limited
filters relating to skill sets such as C++, Python, and Java and company
related attributes such as employee size, revenue, technographics and
micro-industries. These specialized filters can help applicants and companies
connect at a very personalized, relevant and deeper level. In this paper we
present a framework that provides an end-to-end "Data-driven Jobs Search
Engine". In addition, users can also receive potential contacts of recruiters
and senior positions for connection and networking opportunities. The high
level implementation of the framework is described as follows: 1) Collect job
postings data in the United States, 2) Extract meaningful tokens from the
postings data using ETL pipelines, 3) Normalize the data set to link company
names to their specific company websites, 4) Extract and rank the skill
sets, 5) Link the company names and websites to their respective company level
attributes with the EVERSTRING Company API, 6) Run user-specific search queries
on the database to identify relevant job postings and 7) Rank the job search
results. This framework offers a highly customizable and highly targeted search
experience for end users.
| 1 | 0 | 0 | 0 | 0 | 0 |
The universal DAHA of type $(C_1^\vee,C_1)$ and Leonard pairs of $q$-Racah type | A Leonard pair is a pair of diagonalizable linear transformations of a
finite-dimensional vector space, each of which acts in an irreducible
tridiagonal fashion on an eigenbasis for the other one. Let $\mathbb F$ denote
an algebraically closed field, and fix a nonzero $q \in \mathbb F$ that is not
a root of unity. The universal double affine Hecke algebra (DAHA) $\hat{H}_q$
of type $(C_1^\vee,C_1)$ is the associative $\mathbb F$-algebra defined by
generators $\lbrace t_i^{\pm 1}\rbrace_{i=0}^3$ and relations (i)
$t_it_i^{-1}=t_i^{-1}t_i=1$; (ii) $t_i+t_i^{-1}$ is central; (iii)
$t_0t_1t_2t_3 = q^{-1}$. We consider the elements $X=t_3t_0$ and $Y=t_0t_1$ of
$\hat{H}_q$. Let $\mathcal V$ denote a finite-dimensional irreducible
$\hat{H}_q$-module on which each of $X$, $Y$ is diagonalizable and $t_0$ has
two distinct eigenvalues. Then $\mathcal V$ is a direct sum of the two
eigenspaces of $t_0$. We show that the pair $X+X^{-1}$, $Y+Y^{-1}$ acts on each
eigenspace as a Leonard pair, and each of these Leonard pairs falls into a
class said to have $q$-Racah type. Thus from $\mathcal V$ we obtain a pair of
Leonard pairs of $q$-Racah type. It is known that a Leonard pair of $q$-Racah
type is determined up to isomorphism by a parameter sequence $(a,b,c,d)$ called
its Huang data. Given a pair of Leonard pairs of $q$-Racah type, we find
necessary and sufficient conditions on their Huang data for that pair to come
from the above construction.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deep Learning for Computational Chemistry | The rise and fall of artificial neural networks is well documented in the
scientific literature of both computer science and computational chemistry. Yet
almost two decades later, we are now seeing a resurgence of interest in deep
learning, a machine learning algorithm based on multilayer neural networks.
Within the last few years, we have seen the transformative impact of deep
learning in many domains, particularly in speech recognition and computer
vision, to the extent that the majority of expert practitioners in those fields
are now regularly eschewing prior established models in favor of deep learning
models. In this review, we provide an introductory overview into the theory of
deep neural networks and their unique properties that distinguish them from
traditional machine learning algorithms used in cheminformatics. By providing
an overview of the variety of emerging applications of deep neural networks, we
highlight its ubiquity and broad applicability to a wide range of challenges in
the field, including QSAR, virtual screening, protein structure prediction,
quantum chemistry, materials design and property prediction. In reviewing the
performance of deep neural networks, we observed that they consistently
outperformed state-of-the-art non-neural-network models across disparate research
topics, and deep neural network based models often exceeded the "glass ceiling"
expectations of their respective tasks. Coupled with the maturity of
GPU-accelerated computing for training deep neural networks and the exponential
growth of chemical data on which to train these networks, we anticipate that
deep learning algorithms will be a valuable tool for computational chemistry.
| 1 | 0 | 0 | 1 | 0 | 0 |
Causal Interventions for Fairness | Most approaches in algorithmic fairness constrain machine learning methods so
the resulting predictions satisfy one of several intuitive notions of fairness.
While this may help private companies comply with non-discrimination laws or
avoid negative publicity, we believe it is often too little, too late. By the
time the training data is collected, individuals in disadvantaged groups have
already suffered from discrimination and lost opportunities due to factors out
of their control. In the present work we focus instead on interventions such as
a new public policy, and in particular, how to maximize their positive effects
while improving the fairness of the overall system. We use causal methods to
model the effects of interventions, allowing for potential interference: each
individual's outcome may depend on who else receives the intervention. We
demonstrate this with an example of allocating a budget of teaching resources
using a dataset of schools in New York City.
| 0 | 0 | 0 | 1 | 0 | 0 |
Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning | Predictive geometric models deliver excellent results for many Machine
Learning use cases. Despite their undoubted performance, neural predictive
algorithms can show unexpected degrees of instability and variance,
particularly when applied to large datasets. We present an approach to measure
changes in geometric models with respect to both output consistency and
topological stability. Considering the example of a recommender system using
word2vec, we analyze the influence of single data points, approximation methods
and parameter settings. Our findings can help to stabilize models where needed
and to detect differences in informational value of data points on a large
scale.
| 0 | 0 | 0 | 1 | 0 | 0 |
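A hedged sketch of one way to quantify the output consistency discussed above: compare top-k nearest-neighbour lists between two embedding matrices trained on the same data (e.g., two word2vec runs with different seeds). The helper names and the random stand-in matrices are mine.

```python
import numpy as np

def topk_neighbors(emb, k):
    # cosine similarity of each row against all others; exclude the row itself
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T
    np.fill_diagonal(sim, -np.inf)
    return np.argsort(-sim, axis=1)[:, :k]

def neighbor_overlap(emb_a, emb_b, k=10):
    """Mean Jaccard overlap of top-k neighbour sets across two embeddings."""
    na, nb = topk_neighbors(emb_a, k), topk_neighbors(emb_b, k)
    scores = [len(set(a) & set(b)) / len(set(a) | set(b)) for a, b in zip(na, nb)]
    return float(np.mean(scores))

# Stand-ins for two training runs over the same vocabulary
rng = np.random.default_rng(1)
run1 = rng.normal(size=(1000, 64))
run2 = run1 + 0.1 * rng.normal(size=(1000, 64))  # a slightly perturbed run
print(neighbor_overlap(run1, run2, k=10))
```

An overlap well below 1.0 between retrained models flags exactly the kind of instability the abstract describes, independently of any downstream accuracy metric.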
Role of the orbital degree of freedom in iron-based superconductors | Almost a decade has passed since the serendipitous discovery of the
iron-based high temperature superconductors (FeSCs) in 2008. The question of
how much similarity the FeSCs have with the copper oxide high temperature
superconductors emerged since the initial discovery of long-range
antiferromagnetism in the FeSCs in proximity to superconductivity. Despite the
great resemblance in their phase diagrams, there exist important disparities
between FeSCs and cuprates that need to be considered in order to paint a full
picture of these two families of high temperature superconductors. One of the
key differences lies in the multi-orbital multi-band nature of FeSCs, in
contrast to the effective single-band model for cuprates. Due to the complexity
of multi-orbital band structures, the orbital degree of freedom is often
neglected in formulating the theoretical models for FeSCs. On the experimental
side, systematic studies of the orbital related phenomena in FeSCs have been
largely lacking. In this review, we summarize angle-resolved photoemission
spectroscopy (ARPES) measurements across various FeSC families in literature,
focusing on the systematic trend of orbital dependent electron correlations and
the role of different Fe 3d orbitals in driving the nematic transition, the
spin-density-wave transition, and implications for superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Perpetual points: New tool for localization of co-existing attractors in dynamical systems | Perpetual points (PPs) are special critical points for which the magnitude of
acceleration describing dynamics drops to zero, while the motion is still
possible (stationary points are excluded); e.g., for a particle moving in a
potential field, at a perpetual point it has zero acceleration and non-zero
velocity. We show that using PPs we can trace all the stable fixed
points in the system, and that the structure of trajectories leading from
such points to stable equilibria may be similar to orbits obtained from
unstable stationary points. Moreover, we argue that the concept of perpetual
points may be useful in tracing unexpected attractors (hidden or rare
attractors with small basins of attraction). We show potential applicability of
this approach by analysing several representative systems of physical
significance, including the damped oscillator, pendula and the Hénon map. We
suggest that perpetual points may be a useful tool for localization of
co-existing attractors in dynamical systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
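A worked illustration, in my notation: for a flow $\dot{\mathbf{x}} = F(\mathbf{x})$ the acceleration along trajectories is $\ddot{\mathbf{x}} = J_F(\mathbf{x})\,F(\mathbf{x})$, so perpetual points solve $J_F F = 0$ with $F \neq 0$. The 2-D system below is my illustrative choice, not one of the paper's examples.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Illustrative 2-D flow (not from the paper): a damped anharmonic oscillator
F = sp.Matrix([y, -sp.Rational(1, 2) * y - x + x**2])
J = F.jacobian([x, y])
acc = sp.expand(J * F)                     # acceleration field J_F * F

candidates = sp.solve(list(acc), [x, y], dict=True)
# Keep solutions with zero acceleration but non-zero velocity
pps = [c for c in candidates if any(f.subs(c) != 0 for f in F)]
print(pps)   # perpetual point at x = 1/2, y = -1/2; velocity there is (-1/2, 0)
```

The same recipe (form $J_F F$, solve, discard fixed points) extends to any smooth vector field, which is what makes PPs usable as a numerical probe for hidden attractors.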
Loss Functions in Restricted Parameter Spaces and Their Bayesian Applications | A squared error loss remains the most commonly used loss function for
constructing a Bayes estimator of the parameter of interest. It, however, can
lead to sub-optimal solutions when a parameter is defined on a restricted
space. It can also be an inappropriate choice in the context when an
overestimation and/or underestimation results in severe consequences and a more
conservative estimator is preferred. We advocate a class of loss functions for
parameters defined on restricted spaces which infinitely penalize boundary
decisions like the squared error loss does on the real line. We also recall
several properties of loss functions such as symmetry, convexity and
invariance. We propose generalizations of the squared error loss function for
parameters defined on the positive real line and on an interval. We provide
explicit solutions for corresponding Bayes estimators and discuss multivariate
extensions. Three well-known Bayesian estimation problems are used to
demonstrate inferential benefits the novel Bayes estimators can provide in the
context of restricted estimation.
| 0 | 0 | 1 | 1 | 0 | 0 |
When Slepian Meets Fiedler: Putting a Focus on the Graph Spectrum | The study of complex systems benefits from graph models and their analysis.
In particular, the eigendecomposition of the graph Laplacian reveals properties
of global organization that emerge from local interactions; e.g., the Fiedler
vector is associated with the smallest non-zero eigenvalue and plays a key role in graph
clustering. Graph signal processing focusses on the analysis of signals that
are attributed to the graph nodes. The eigendecomposition of the graph
Laplacian allows to define the graph Fourier transform and extend conventional
signal-processing operations to graphs. Here, we introduce the design of
Slepian graph signals by maximizing energy concentration in a predefined
subgraph for a graph spectral bandlimit. We establish a novel link with
classical Laplacian embedding and graph clustering, which provides a meaning to
localized graph frequencies.
| 1 | 0 | 0 | 0 | 0 | 0 |
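For concreteness (my notation): with the first $K$ Laplacian eigenvectors collected in $U_K$ (the spectral bandlimit) and $D_S$ the diagonal indicator of the chosen subgraph, the Slepian design solves the concentration eigenproblem
$$U_K^{\top} D_S\, U_K\, \hat{g} = \mu\, \hat{g}, \qquad g = U_K \hat{g},$$
whose leading eigenvectors give bandlimited graph signals with maximal energy fraction $\mu \in [0,1]$ inside the subgraph, in direct analogy with classical Slepian functions on the line.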
Colored Image Encryption and Decryption Using Chaotic Lorenz System and DCT2 | In this paper, a scheme for the encryption and decryption of colored images
by using the Lorenz system and the discrete cosine transform in two dimensions
(DCT2) is proposed. Although chaos is random, it has deterministic features
that can be used for encryption; further, the same sequences can be produced at
the transmitter and receiver under the same initial conditions. Another
property of DCT2 is that the energy is concentrated in some elements of the
coefficients. These two properties are used to efficiently encrypt and recover
the image at the receiver by using three different keys with three different
predefined number of shifts for each instance of key usage. Simulation results
and statistical analysis show that the scheme achieves high performance in
weakening the correlation between the pixels of the image reconstructed from the
highest-energy DCT2 coefficients, which carry 99.9% of the energy, as well as
between those of the difference image.
| 1 | 0 | 0 | 0 | 0 | 0 |
Self-Gluing formula of the monopole invariant and its application | Given a $4$-manifold $\hat{M}$ and two homeomorphic surfaces $\Sigma_1,
\Sigma_2$ smoothly embedded in $\hat{M}$ with genus more than 1, we remove the
neighborhoods of the surfaces and obtain a new $4$-manifold $M$ by gluing the two
boundaries $S^1 \times \Sigma_1$ and $S^1 \times \Sigma_2.$ In this article, we
prove a gluing formula which describes the relation between the Seiberg-Witten
invariants of $M$ and $\hat{M}.$ Moreover, we apply the formula to the existence
of symplectic structures on a family of $4$-manifolds under certain conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
An aptamer-biosensor for azole class antifungal drugs | This report describes the development of an aptamer for sensing azole
antifungal drugs for therapeutic drug monitoring. A modified Systematic Evolution
of Ligands by Exponential Enrichment (SELEX) procedure was used to discover a DNA
aptamer recognizing azole class antifungal drugs. This aptamer undergoes a
secondary structural change upon binding to its target molecule as shown
through fluorescence anisotropy-based binding measurements. Experiments using
circular dichroism spectroscopy, revealed a unique double G-quadruplex
structure that was essential and specific for binding to the azole antifungal
target. Aptamer-functionalized Graphene Field Effect Transistor (GFET) devices
were created and used to measure the binding strength of azole antifungals
to this surface. Altogether, this aptamer and the supporting sensing platform
could provide a valuable tool for improving the treatment of patients with
invasive fungal infections.
| 0 | 1 | 0 | 0 | 0 | 0 |
Notes on "Einstein metrics on compact simple Lie groups attached to standard triples" | In the paper "Einstein metrics on compact simple Lie groups attached to
standard triples", the authors introduced the definition of standard triples
and proved that every compact simple Lie group $G$ attached to a standard
triple $(G,K,H)$ admits a left-invariant Einstein metric which is not naturally
reductive, except for the standard triple $(\Sp(4),2\Sp(2),4\Sp(1))$. For the triple
$(\Sp(4),2\Sp(2),4\Sp(1))$, we find there exists an involution pair of $\sp(4)$
such that $4\sp(1)$ is the fixed point of the pair, and then give the
decomposition of $\sp(4)$ as a direct sum of irreducible
$\ad(4\sp(1))$-modules. But $\Sp(4)/4\Sp(1)$ is not a generalized Wallach
space. Furthermore we give left-invariant Einstein metrics on $\Sp(4)$ which
are non-naturally reductive and $\Ad(4\Sp(1))$-invariant. For the general case
$(\Sp(2n_1n_2),2\Sp(n_1n_2),2n_2\Sp(n_1))$, there exist $2n_2-1$ involutions of
$\sp(2n_1n_2)$ such that $2n_2\sp(n_1)$ is the fixed point set of these $2n_2-1$
involutions, and the decomposition of $\sp(2n_1n_2)$ as a direct sum of
irreducible $\ad(2n_2\sp(n_1))$-modules follows. In order to give new non-naturally
reductive and $\Ad(2n_2\Sp(n_1))$-invariant Einstein metrics on
$\Sp(2n_1n_2)$, we prove a general result, i.e. $\Sp(2k+l)$ admits at least two
non-naturally reductive Einstein metrics which are
$\Ad(\Sp(k)\times\Sp(k)\times\Sp(l))$-invariant if $k<l$. It implies that every
compact simple Lie group $\Sp(n)$ for $n\geq 4$ admits at least
$2[\frac{n-1}{3}]$ non-naturally reductive left-invariant Einstein metrics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Harmonic quasi-isometric maps II : negatively curved manifolds | We prove that a quasi-isometric map, and more generally a coarse embedding,
between pinched Hadamard manifolds is within bounded distance from a unique
harmonic map.
| 0 | 0 | 1 | 0 | 0 | 0 |
Platform independent profiling of a QCD code | The supercomputing platforms available for high performance computing based
research evolve at a great rate. However, this rapid development of novel
technologies requires constant adaptations and optimizations of the existing
codes for each new machine architecture. In such a context, minimizing the time
needed to efficiently port the code to a new platform is of crucial importance. A
possible solution for this common challenge is to use simulations of the
application that can assist in detecting performance bottlenecks. Due to
prohibitive costs of classical cycle-accurate simulators, coarse-grain
simulations are more suitable for large parallel and distributed systems. We
present a procedure for profiling the openQCD code [1] through
simulation, which will enable a global reduction of the cost of profiling and
optimizing this code, widely used in the lattice QCD community. Our approach
is based on the well-known SimGrid simulator [2], which allows for fast and
accurate performance predictions of HPC codes. Additionally, accurate
estimates of the program behavior on some future machines, not yet accessible
to us, are anticipated.
| 1 | 1 | 0 | 0 | 0 | 0 |
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems | Deep learning (DL) defines a new data-driven programming paradigm that
constructs the internal system logic of a crafted neural network through a set
of training data. We have seen wide adoption of DL in many safety-critical
scenarios. However, a plethora of studies have shown that the state-of-the-art
DL systems suffer from various vulnerabilities which can lead to severe
consequences when applied to real-world applications. Currently, the testing
adequacy of a DL system is usually measured by the accuracy of test data.
Considering the limitation of accessible high quality test data, good accuracy
performance on test data can hardly provide confidence in the testing adequacy
and generality of DL systems. Unlike traditional software systems that have
clear and controllable logic and functionality, the lack of interpretability in
a DL system makes system analysis and defect detection difficult, which could
potentially hinder its real-world deployment. In this paper, we propose
DeepGauge, a set of multi-granularity testing criteria for DL systems, which
aims at rendering a multi-faceted portrayal of the testbed. The in-depth
evaluation of our proposed testing criteria is demonstrated on two well-known
datasets, five DL systems, and with four state-of-the-art adversarial attack
techniques against DL. The potential usefulness of DeepGauge sheds light on the
construction of more generic and robust DL systems.
| 0 | 0 | 0 | 1 | 0 | 0 |
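A hedged sketch of the simplest criterion in this family, basic neuron coverage (the fraction of neurons driven above a threshold by a test set); DeepGauge's multi-granularity criteria refine this idea, and the code below is my illustration rather than the authors' tool.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """activations: list of (num_samples, num_neurons) arrays, one per layer.
    A neuron counts as covered if any test input drives it above threshold."""
    covered = total = 0
    for layer in activations:
        covered += int(np.sum(layer.max(axis=0) > threshold))
        total += layer.shape[1]
    return covered / total

# Toy two-layer "network" activations for 100 test inputs
rng = np.random.default_rng(0)
acts = [np.maximum(rng.normal(size=(100, 32)), 0),   # ReLU-like layer
        np.maximum(rng.normal(size=(100, 16)), 0)]
print(f"neuron coverage: {neuron_coverage(acts, 0.5):.2%}")
```

Finer-grained variants partition each neuron's observed activation range into buckets and measure how many buckets the test set reaches, which is the multi-granularity idea the abstract refers to.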