title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Self-consistent dynamical model of the Broad Line Region | We develop a self-consistent description of the Broad Line Region based on
the concept of the failed wind powered by the radiation pressure acting on
dusty accretion disk atmosphere in Keplerian motion. The material raised high
above the disk is illuminated, dust evaporates, and the matter falls back
towards the disk. This material is the source of emission lines. The model
predicts the inner and outer radii of the region and the cloud dynamics, first
under the dust radiation pressure and subsequently under the gravitational field
of the central black hole alone, which results in an asymmetry between the rise
and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as
functions of the basic parameters of an active nucleus: black hole mass,
accretion rate, black hole spin (or accretion efficiency) and the viewing angle
with respect to the symmetry axis. Here we show preliminary results based on
analytical approximations to the cloud motion.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measuring the polarization of electromagnetic fields using Rabi-rate measurements with spatial resolution: experiment and theory | When internal states of atoms are manipulated using coherent optical or
radio-frequency (RF) radiation, it is essential to know the polarization of the
radiation with respect to the quantization axis of the atom. We first present a
measurement of the two-dimensional spatial distribution of the electric-field
amplitude of a linearly-polarized pulsed RF electric field at $\sim 25.6\,$GHz
and its angle with respect to a static electric field. The measurements exploit
coherent population transfer between the $35$s and $35$p Rydberg states of
helium atoms in a pulsed supersonic beam. Based on this experimental result, we
develop a general framework in the form of a set of equations relating the five
independent polarization parameters of a coherently oscillating field in a
fixed laboratory frame to Rabi rates of transitions between a ground and three
excited states of an atom with arbitrary quantization axis. We then explain how
these equations can be used to fully characterize the polarization in a minimum
of five Rabi rate measurements by rotation of an external bias-field, or,
knowing the polarization of the driving field, to determine the orientation of
the static field using two measurements. The presented technique is not limited
to Rydberg atoms and RF fields but can also be applied to characterize optical
fields. The technique has potential for sensing the spatiotemporal properties
of electromagnetic fields, e.g., in metrology devices or in hybrid experiments
involving atoms close to surfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Software-based Microarchitectural Attacks | Modern processors are highly optimized systems where every single cycle of
computation time matters. Many optimizations depend on the data that is being
processed. Software-based microarchitectural attacks exploit effects of these
optimizations. Microarchitectural side-channel attacks leak secrets from
cryptographic computations, from general purpose computations, or from the
kernel. This leakage even persists across all common isolation boundaries, such
as processes, containers, and virtual machines. Microarchitectural fault
attacks exploit the physical imperfections of modern computer systems.
Shrinking process technology introduces effects between isolated hardware
elements that can be exploited by attackers to take control of the entire
system. These attacks are especially interesting in scenarios where the
attacker is unprivileged or even sandboxed.
In this thesis, we focus on microarchitectural attacks and defenses on
commodity systems. We investigate known and new side channels and show that
microarchitectural attacks can be fully automated. Furthermore, we show that
these attacks can be mounted in highly restricted environments such as
sandboxed JavaScript code in websites. We show that microarchitectural attacks
exist on any modern computer system, including mobile devices (e.g.,
smartphones), personal computers, and commercial cloud systems. This thesis
consists of two parts. In the first part, we provide background on modern
processor architectures and discuss state-of-the-art attacks and defenses in
the area of microarchitectural side-channel attacks and microarchitectural
fault attacks. In the second part, a selection of our papers is provided
without modification from their original publications. I have co-authored these
papers, which have subsequently been anonymously peer-reviewed, accepted, and
presented at renowned international conferences.
| 1 | 0 | 0 | 0 | 0 | 0 |
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Semantic segmentation and object detection research have recently achieved
rapid progress. However, the former task has no notion of different instances
of the same object, and the latter operates at a coarse, bounding-box level. We
propose an Instance Segmentation system that produces a segmentation map where
each pixel is assigned an object class and instance identity label. Most
approaches adapt object detectors to produce segments instead of boxes. In
contrast, our method is based on an initial semantic segmentation module, which
feeds into an instance subnetwork. This subnetwork uses the initial
category-level segmentation, along with cues from the output of an object
detector, within an end-to-end CRF to predict instances. This part of our model
is dynamically instantiated to produce a variable number of instances per
image. Our end-to-end approach requires no post-processing and considers the
image holistically, instead of processing independent proposals. Therefore,
unlike some related work, a pixel cannot belong to multiple instances.
Furthermore, far more precise segmentations are achieved, as shown by our
state-of-the-art results (particularly at high IoU thresholds) on the Pascal
VOC and Cityscapes datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Binary Matrix Factorization via Dictionary Learning | Matrix factorization is a key tool in data analysis; its applications include
recommender systems, correlation analysis, signal processing, among others.
Binary matrices are a particular case which has received significant attention
for over thirty years, especially within the field of data mining. Dictionary
learning refers to a family of methods for learning overcomplete bases (also
called frames) in order to efficiently encode samples of a given type; this
area, now also about twenty years old, was mostly developed within the signal
processing field. In this work we propose two binary matrix factorization
methods based on a binary adaptation of the dictionary learning paradigm to
binary matrices. The proposed algorithms focus on speed and scalability; they
work with binary factors combined with bit-wise operations and a few auxiliary
integer ones. Furthermore, the methods are readily applicable to online binary
matrix factorization. Another important issue in matrix factorization is the
choice of rank for the factors; we address this model selection problem with an
efficient method based on the Minimum Description Length principle. Our
preliminary results show that the proposed methods are effective at producing
interpretable factorizations of a variety of data types.
| 0 | 0 | 0 | 1 | 0 | 0 |
Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts | Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose.
Linguistic features, mainly from parsers, have been used to detect MCI, but
this is not suitable for large-scale assessments. MCI disfluencies produce
non-grammatical speech that requires manual or high precision automatic
correction of transcripts. In this paper, we modeled transcripts into complex
networks and enriched them with word embedding (CNE) to better represent short
texts produced in neuropsychological assessments. The network measurements were
applied with well-known classifiers to automatically identify MCI in
transcripts, in a binary classification task. A comparison was made with the
performance of traditional approaches using Bag of Words (BoW) and linguistic
features for three datasets: DementiaBank in English, and Cinderella and
Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using
only complex networks, while Support Vector Machine was superior to other
classifiers. CNE provided the highest accuracies for DementiaBank and
Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably
owing to its short narratives. The approach using linguistic features yielded
higher accuracy if the transcriptions of the Cinderella dataset were manually
revised. Taken together, the results indicate that complex networks enriched
with word embeddings are promising for detecting MCI in large-scale assessments.
| 1 | 0 | 0 | 0 | 0 | 0 |
The many faces of degeneracy in conic optimization | Slater's condition -- existence of a "strictly feasible solution" -- is a
common assumption in conic optimization. Without strict feasibility,
first-order optimality conditions may be meaningless, the dual problem may
yield little information about the primal, and small changes in the data may
render the problem infeasible. Hence, failure of strict feasibility can
negatively impact off-the-shelf numerical methods, primal-dual interior point
methods in particular. New optimization modelling techniques and convex
relaxations for hard nonconvex problems have shown that the loss of strict
feasibility is a more pronounced phenomenon than has previously been realized.
In this text, we describe various reasons for the loss of strict feasibility,
whether due to poor modelling choices or (more interestingly) rich underlying
structure, and discuss ways to cope with it and, in many pronounced cases, how
to use it as an advantage. In large part, we emphasize the facial reduction
preprocessing technique due to its mathematical elegance, geometric
transparency, and computational potential.
| 0 | 0 | 1 | 0 | 0 | 0 |
Strong homotopy types of acyclic categories and $Δ$-complexes | We extend the homotopy theories based on point reduction for finite spaces
and simplicial complexes to finite acyclic categories and $\Delta$-complexes,
respectively. The functors of classifying spaces and face posets are compatible
with these homotopy theories. In contrast with the classical settings of finite
spaces and simplicial complexes, the universality of morphisms and simplices
plays a central role in this paper.
| 0 | 0 | 1 | 0 | 0 | 0 |
Boundary problems for the fractional and tempered fractional operators | For characterizing Brownian motion in a bounded domain $\Omega$, it is
well-known that the boundary conditions of the classical diffusion equation
just rely on the given information of the solution along the boundary of a
domain; on the contrary, for the Lévy flights or tempered Lévy flights in a
bounded domain, it involves the information of a solution in the complementary
set of $\Omega$, i.e., $\mathbb{R}^n\backslash \Omega$, with the potential
reason that paths of the corresponding stochastic process are discontinuous.
Guided by probabilistic intuition and the stochastic perspective on anomalous
diffusion, we show reasonable ways of specifying `boundary' conditions for
space fractional PDEs modeling anomalous diffusion, ensuring the clear physical
meaning and well-posedness of the partial differential equations (PDEs). Some
properties of the operators are discussed, and the well-posedness of the PDEs
with generalized boundary conditions is proved.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bohm's approach to quantum mechanics: Alternative theory or practical picture? | Since its inception Bohmian mechanics has been generally regarded as a
hidden-variable theory aimed at providing an objective description of quantum
phenomena. To date, this rather narrow conception of Bohm's proposal has caused
it more rejection than acceptance. Now, after 65 years of Bohmian mechanics,
should such an interpretational aspect still be the prevailing appraisal? Why
not favor a more pragmatic view, as a legitimate picture of quantum mechanics,
on an equal footing in all respects with any other more conventional
quantum picture? These questions are used here to introduce a discussion on an
alternative way to deal with Bohmian mechanics at present, enhancing its aspect
as an efficient and useful picture or formulation to tackle, explore, describe
and explain quantum phenomena where phase and correlation (entanglement) are
key elements. This discussion is presented through two complementary blocks.
The first block is aimed at briefly revisiting the historical context that gave
rise to the appearance of Bohmian mechanics, and how this approach or analogous
ones have been used in different physical contexts. This discussion is used to
emphasize a more pragmatic view to the detriment of the more conventional
hidden-variable (ontological) approach that has been a leitmotif within the
quantum foundations. The second block focuses on some particular formal aspects
of Bohmian mechanics supporting the view presented here, with special emphasis
on the physical meaning of the local phase field and the associated velocity
field encoded within the wave function. As an illustration, a simple model of
Young's two-slit experiment is considered. The simplicity of this model allows
one to understand in a straightforward manner how the information conveyed by the Bohmian
formulation relates to other more conventional concepts in quantum mechanics.
This sort of pedagogical application is also aimed at ...
| 0 | 1 | 0 | 0 | 0 | 0 |
Continuous-wave virtual-state lasing from cold ytterbium atoms | While conventional lasers are based on gain media with three or four real
levels, unconventional lasers including virtual levels and two-photon processes
offer new opportunities. We study lasing that involves a two-photon process
through a virtual lower level, which we realize in a cloud of cold ytterbium
atoms that are magneto-optically trapped inside a cavity. We pump the atoms on
the narrow $^1$S$_0$ $\to$ $^3$P$_1$ line and generate laser emission on the
same transition. Lasing is verified by a threshold behavior of output power
vs.\ pump power and atom number, a flat $g^{(2)}$ correlation function above
threshold, and the polarization properties of the output. In the proposed
lasing mechanism the MOT beams create the virtual lower level of the lasing
transition. The laser process runs continuously, needs no further repumping,
and might be adapted to other atoms or transitions such as the ultra narrow
$^1$S$_0$ $\to$ $^3$P$_0$ clock transition in ytterbium.
| 0 | 1 | 0 | 0 | 0 | 0 |
Assessment of Future Changes in Intensity-Duration-Frequency Curves for Southern Ontario using North American (NA)-CORDEX Models with Nonstationary Methods | The evaluation of possible climate change consequences for extreme rainfall has
significant implications for the design of engineering structures and
socioeconomic resource development. While many studies have assessed the
impact of climate change on design rainfall using global and regional climate
model (RCM) predictions, to date, there has been no comprehensive comparison or
evaluation of intensity-duration-frequency (IDF) statistics at the regional
scale, considering both stationary and nonstationary models for the future climate.
To understand how extreme precipitation may respond to future IDF curves, we
used an ensemble of three RCMs participating in the North-American (NA)-CORDEX
domain over eight rainfall stations across Southern Ontario, one of the most
densely populated and economically important regions in Canada. The IDF relationships
are derived from multi-model RCM simulations and compared with the
station-based observations. We modeled precipitation extremes, at different
durations using extreme value distributions considering parameters that are
either stationary or nonstationary, as a linear function of time. Our results
showed that extreme precipitation driven by future climate forcing exhibits a
significant increase in intensity for 10-year events in the 2050s
(2030-2070) relative to the 1970-2010 baseline period across most of the locations.
However, for longer return periods, an opposite trend is noted. Surprisingly,
in terms of design storms, no significant differences were found when comparing
stationary and nonstationary IDF estimation methods for the future (2050s) for
the larger return periods. The findings, which are specific to regional
precipitation extremes, suggest no immediate reason for alarm, but the need for
progressive updating of the design standards in light of global warming.
| 0 | 1 | 0 | 1 | 0 | 0 |
Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs | We study the problem of finding the cycle of minimum cost-to-time ratio in a
directed graph with $ n $ nodes and $ m $ edges. This problem has a long
history in combinatorial optimization and has recently seen interesting
applications in the context of quantitative verification. We focus on strongly
polynomial algorithms to cover the use-case where the weights are relatively
large compared to the size of the graph. Our main result is an algorithm with
running time $ \tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement
over Megiddo's $ \tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We
further demonstrate how to obtain both an algorithm with running time $ n^3 /
2^{\Omega{(\sqrt{\log n})}} $ on general graphs and an algorithm with running
time $ \tilde O (n) $ on constant treewidth graphs. To obtain our main result,
we develop a parallel algorithm for negative cycle detection and single-source
shortest paths that might be of independent interest.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio | Polarized extinction and emission from dust in the interstellar medium (ISM)
are hard to interpret, as they have a complex dependence on dust optical
properties, grain alignment and magnetic field orientation. This is
particularly true in molecular clouds. The data available today are not yet
used to their full potential.
The combination of emission and extinction, in particular, provides
information not available from either of them alone. We combine data from the
scientific literature on polarized dust extinction with Planck data on
polarized emission and we use them to constrain the possible variations in dust
and environmental conditions inside molecular clouds, and especially
translucent lines of sight, taking into account magnetic field orientation.
We focus on the dependence between $\lambda_{\max}$ -- the wavelength of maximum
polarization in extinction -- and other observables such as the extinction
polarization, the emission polarization and the ratio of the two. We set out to
reproduce these correlations using Monte-Carlo simulations where the relevant
quantities in a dust model -- grain alignment, size distribution and magnetic
field orientation -- vary to mimic the diverse conditions expected inside
molecular clouds.
None of the quantities chosen can explain the observational data on its own:
the best results are obtained when all quantities vary significantly across and
within clouds. However, some of the data -- most notably the stars with low
emission-to-extinction polarization ratio -- are not reproduced by our
simulation. Our results suggest not only that dust evolution is necessary to
explain polarization in molecular clouds, but that a simple change in size
distribution is not sufficient to explain the data, and point the way for
future and more sophisticated models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Asynchronous Distributed Variational Gaussian Processes for Regression | Gaussian processes (GPs) are powerful non-parametric function estimators.
However, their applications are largely limited by the expensive computational
cost of the inference procedures. Existing stochastic or distributed
synchronous variational inference methods, although they have alleviated this
issue by scaling up GPs to millions of samples, are still far from satisfactory for
real-world large applications, where the data sizes are often orders of
magnitude larger, say, billions. To solve this problem, we propose ADVGP, the
first Asynchronous Distributed Variational Gaussian Process inference for
regression, on the recent large-scale machine learning platform,
PARAMETERSERVER. ADVGP uses a novel, flexible variational framework based on a
weight space augmentation, and implements the highly efficient, asynchronous
proximal gradient optimization. While maintaining comparable or better
predictive performance, ADVGP greatly improves upon the efficiency of the
existing variational methods. With ADVGP, we effortlessly scale up GP
regression to a real-world application with billions of samples and demonstrate
excellent prediction accuracy, superior to that of the popular linear models.
| 0 | 0 | 0 | 1 | 0 | 0 |
Bayesian Unification of Gradient and Bandit-based Learning for Accelerated Global Optimisation | Bandit based optimisation has a remarkable advantage over gradient based
approaches due to its global perspective, which eliminates the danger of
getting stuck at local optima. However, for continuous optimisation problems or
problems with a large number of actions, bandit based approaches can be
hindered by slow learning. Gradient based approaches, on the other hand,
navigate quickly in high-dimensional continuous spaces through local
optimisation, following the gradient in fine grained steps. Yet, apart from
being susceptible to local optima, these schemes are less suited for online
learning due to their reliance on extensive trial-and-error before the optimum
can be identified. In this paper, we propose a Bayesian approach that unifies
the above two paradigms in one single framework, with the aim of combining
their advantages. At the heart of our approach we find a stochastic linear
approximation of the function to be optimised, where both the gradient and
values of the function are explicitly captured. This allows us to learn from
both noisy function and gradient observations, and predict these properties
across the action space to support optimisation. We further propose an
accompanying bandit driven exploration scheme that uses Bayesian credible
bounds to trade off exploration against exploitation. Our empirical results
demonstrate that by unifying bandit and gradient based learning, one obtains
consistently improved performance across a wide spectrum of problem
environments. Furthermore, even when gradient feedback is unavailable, the
flexibility of our model, including gradient prediction, still allows us to
outperform competing approaches, although with a smaller margin. Due to the
pervasiveness of bandit based optimisation, our scheme opens up for improved
performance both in meta-optimisation and in applications where gradient
related information is readily available.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stacco: Differentially Analyzing Side-Channel Traces for Detecting SSL/TLS Vulnerabilities in Secure Enclaves | Intel Software Guard Extensions (SGX) offers software applications enclaves to
protect their confidentiality and integrity from malicious operating systems.
The SSL/TLS protocol, which is the de facto standard for protecting
transport-layer network communications, has been broadly deployed for a secure
communication channel. However, in this paper, we show that the marriage
between SGX and SSL may not be smooth sailing.
Particularly, we consider a category of side-channel attacks against SSL/TLS
implementations in secure enclaves, which we call the control-flow inference
attacks. In these attacks, the malicious operating system kernel may perform a
powerful man-in-the-kernel attack to collect execution traces of the enclave
programs at page, cacheline, or branch level, while positioning itself in the
middle of the two communicating parties. At the center of our work is a
differential analysis framework, dubbed Stacco, to dynamically analyze the
SSL/TLS implementations and detect vulnerabilities that can be exploited as
decryption oracles. Surprisingly, we found exploitable vulnerabilities in the
latest versions of all the SSL/TLS libraries we have examined.
To validate the detected vulnerabilities, we developed a man-in-the-kernel
adversary to demonstrate Bleichenbacher attacks against the latest OpenSSL
library running in the SGX enclave (with the help of Graphene) and completely
broke the PreMasterSecret encrypted by a 4096-bit RSA public key with only
57286 queries. We also conducted CBC padding oracle attacks against the latest
GnuTLS running in Graphene-SGX and an open-source SGX-implementation of mbedTLS
(i.e., mbedTLS-SGX) that runs directly inside the enclave, and showed that it
only needs 48388 and 25717 queries, respectively, to break one block of AES
ciphertext. Empirical evaluation suggests these man-in-the-kernel attacks can
be completed within 1 or 2 hours.
| 1 | 0 | 0 | 0 | 0 | 0 |
Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups | Sensor setups consisting of a combination of 3D range scanner lasers and
stereo vision systems are becoming a popular choice for on-board perception
systems in vehicles; however, the combined use of both sources of information
implies a tedious calibration process. We present a method for extrinsic
calibration of lidar-stereo camera pairs without user intervention. Our
calibration approach is aimed at coping with the constraints commonly found in
automotive setups, such as low sensor resolution and specific sensor poses. To
demonstrate the performance of our method, we also introduce a novel approach
for the quantitative assessment of the calibration results, based on a
simulation environment. Tests using real devices have been conducted as well,
proving the usability of the system and the improvement over the existing
approaches. Code is available at this http URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Representations of Super $W(2,2)$ algebra $\mathfrak{L}$ | In this paper, we study the representation theory of the super $W(2,2)$ algebra
${\mathfrak{L}}$. We prove that ${\mathfrak{L}}$ has no mixed irreducible
modules and give the classification of irreducible modules of intermediate
series. We determine the conjugate-linear anti-involution of ${\mathfrak{L}}$
and give the unitary modules of intermediate series.
| 0 | 0 | 1 | 0 | 0 | 0 |
Effective Reformulation of Query for Code Search using Crowdsourced Knowledge and Extra-Large Data Analytics | Software developers frequently issue generic natural language queries for
code search while using code search engines (e.g., GitHub native search,
Krugle). Such queries often do not lead to any relevant results due to
vocabulary mismatch problems. In this paper, we propose a novel technique that
automatically identifies relevant and specific API classes from Stack Overflow
Q & A site for a programming task written as a natural language query, and then
reformulates the query for improved code search. We first collect candidate API
classes from Stack Overflow using pseudo-relevance feedback and two term
weighting algorithms, and then rank the candidates using Borda count and
semantic proximity between query keywords and the API classes. The semantic
proximity has been determined by an analysis of 1.3 million questions and
answers of Stack Overflow. Experiments using 310 code search queries report
that our technique suggests relevant API classes with 48% precision and 58%
recall which are 32% and 48% higher respectively than those of the
state-of-the-art. Comparisons with two state-of-the-art studies and three
popular search engines (e.g., Google, Stack Overflow, and GitHub native search)
report that our reformulated queries (1) outperform the queries of the
state-of-the-art, and (2) significantly improve the code search results
provided by these contemporary search engines.
| 1 | 0 | 0 | 0 | 0 | 0 |
Detecting Bot Activity in the Ethereum Blockchain Network | The Ethereum blockchain network is a decentralized platform enabling smart
contract execution and transactions of Ether (ETH) [1], its designated
cryptocurrency. Ethereum is the second most popular cryptocurrency with a
market cap of more than 100 billion USD, with hundreds of thousands of
transactions executed daily by hundreds of thousands of unique wallets. Tens of
thousands of those wallets are newly generated each day. The Ethereum platform
enables anyone to freely open multiple new wallets [2] free of charge
(resulting in a large number of wallets that are controlled by the same
entities). This attribute makes the Ethereum network a breeding space for
activity by software robots (bots). The existence of bots is widespread in
different digital technologies and there are various approaches to detect their
activity, such as rule-based methods, clustering, machine learning and more [3,4]. In
this work we demonstrate how bot detection can be implemented using a network
theory approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Near-infrared laser thermal conjunctivoplasty | Conjunctivochalasis is a common cause of tear dysfunction due to the
conjunctiva becoming loose and wrinkly with age. The current solutions to this
disease include either surgical excision in the operating room, or
thermoreduction of the loose tissue with hot wire in the clinic. We developed a
near-infrared (NIR) laser thermal conjunctivoplasty (LTC) system, which gently
shrinks the redundant tissue. The NIR light is mainly absorbed by water, so the
heating is even and there is no bleeding. The system utilizes a 1460-nm
programmable laser diode system as a light source. A miniaturized handheld
probe delivers the laser light and focuses the laser into a 10x1 mm2 line. A
foot pedal is used to deliver a preset number of calibrated laser pulses. A
fold of loose conjunctiva is grasped by a pair of forceps. The infrared laser
light is delivered through an optical fiber and a laser line is focused exactly
on the conjunctival fold by a cylindrical lens. Ex vivo experiments using
porcine eye were performed with the optimal laser parameters. It was found that
up to 50% of conjunctiva shrinkage could be achieved.
| 0 | 1 | 0 | 0 | 0 | 0 |
Superconductivity at 7.3 K in the 133-type Cr-based RbCr3As3 single crystals | Here we report the preparation and superconductivity of the 133-type Cr-based
quasi-one-dimensional (Q1D) RbCr3As3 single crystals. The samples were prepared
by the deintercalation of Rb+ ions from the 233-type Rb2Cr3As3 crystals which
were grown from a high-temperature solution growth method. The RbCr3As3
compound crystallizes in a centrosymmetric structure with the space group of
P63/m (No. 176), different from its non-centrosymmetric Rb2Cr3As3
superconducting precursor, and the refined lattice parameters are a = 9.373(3)
{\AA} and c = 4.203(7) {\AA}. Electrical resistivity and magnetic
susceptibility characterizations reveal the occurrence of superconductivity
with an onset Tc of 7.3 K, interestingly higher than that of other Cr-based
superconductors, and a high upper critical field Hc2(0) near 70 T in these
133-type RbCr3As3 crystals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solution of parabolic free boundary problems using transmuted heat polynomials | A numerical method for free boundary problems for the equation \[
u_{xx}-q(x)u=u_t \] is proposed. The method is based on recent results from
transmutation operator theory that allow one to efficiently construct a
complete system of solutions for this equation, generalizing the system of
heat polynomials. The corresponding implementation algorithm is presented.
| 0 | 0 | 1 | 0 | 0 | 0 |
Neural-Network Quantum States, String-Bond States, and Chiral Topological States | Neural-Network Quantum States have been recently introduced as an Ansatz for
describing the wave function of quantum many-body systems. We show that there
are strong connections between Neural-Network Quantum States in the form of
Restricted Boltzmann Machines and some classes of Tensor-Network states in
arbitrary dimensions. In particular we demonstrate that short-range Restricted
Boltzmann Machines are Entangled Plaquette States, while fully connected
Restricted Boltzmann Machines are String-Bond States with a nonlocal geometry
and low bond dimension. These results shed light on the underlying architecture
of Restricted Boltzmann Machines and their efficiency at representing many-body
quantum states. String-Bond States also provide a generic way of enhancing the
power of Neural-Network Quantum States and a natural generalization to systems
with larger local Hilbert space. We compare the advantages and drawbacks of
these different classes of states and present a method to combine them
together. This allows us to benefit from both the entanglement structure of
Tensor Networks and the efficiency of Neural-Network Quantum States into a
single Ansatz capable of targeting the wave function of strongly correlated
systems. While it remains a challenge to describe states with chiral
topological order using traditional Tensor Networks, we show that
Neural-Network Quantum States and their String-Bond States extension can
describe a lattice Fractional Quantum Hall state exactly. In addition, we
provide numerical evidence that Neural-Network Quantum States can approximate a
chiral spin liquid with better accuracy than Entangled Plaquette States and
local String-Bond States. Our results demonstrate the efficiency of neural
networks to describe complex quantum wave functions and pave the way towards
the use of String-Bond States as a tool in more traditional machine-learning
applications.
| 0 | 1 | 0 | 1 | 0 | 0 |
Nondestructive testing of grating imperfections using grating-based X-ray phase-contrast imaging | We report the use of grating-based X-ray phase-contrast imaging for
nondestructive testing of grating imperfections. It was found that
electroplating flaws could be easily detected by the conventional absorption
signal; in particular, we observed that grating defects resulting from uneven
ultraviolet exposure could be clearly discriminated with the phase-contrast
signal. The experimental results demonstrate that grating-based X-ray
phase-contrast imaging, which uses a conventional low-brilliance X-ray source,
offers a large field of view and a reasonably compact setup, and
simultaneously yields phase- and attenuation-contrast signals of the sample,
can be readily used for fast nondestructive testing of various imperfections
in gratings and other similar photoetching products.
| 0 | 1 | 0 | 0 | 0 | 0 |
End-to-End Information Extraction without Token-Level Supervision | Most state-of-the-art information extraction approaches rely on token-level
labels to find the areas of interest in text. Unfortunately, these labels are
time-consuming and costly to create, and consequently, not available for many
real-life IE tasks. To make matters worse, token-level labels are usually not
the desired output, but just an intermediary step. End-to-end (E2E) models,
which take raw text as input and produce the desired output directly, need not
depend on token-level labels. We propose an E2E model based on pointer
networks, which can be trained directly on pairs of raw input and output text.
We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT
movie corpus and compare to neural baselines that do use token-level labels. We
achieve competitive results, within a few percentage points of the baselines,
showing the feasibility of E2E information extraction without the need for
token-level labels. This opens up new possibilities, as for many tasks
currently addressed by human extractors, raw input and output data are
available, but not token-level labels.
| 1 | 0 | 0 | 0 | 0 | 0 |
Lipschitz regularity of solutions to two-phase free boundary problems | We prove Lipschitz continuity of viscosity solutions to a class of two-phase
free boundary problems governed by fully nonlinear operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
VTA: An Open Hardware-Software Stack for Deep Learning | Hardware acceleration is an enabler for ubiquitous and efficient deep
learning. With hardware accelerators being introduced in datacenter and edge
devices, it is time to acknowledge that hardware specialization is central to
the deep learning system stack.
This technical report presents the Versatile Tensor Accelerator (VTA), an
open, generic, and customizable deep learning accelerator design. VTA is a
programmable accelerator that exposes a RISC-like programming abstraction to
describe operations at the tensor level. We designed VTA to expose the most
salient and common characteristics of mainstream deep learning accelerators,
such as tensor operations, DMA load/stores, and explicit compute/memory
arbitration.
VTA is more than a standalone accelerator design: it is an end-to-end solution
that includes drivers, a JIT runtime, and an optimizing compiler stack based on
TVM. The current release of VTA includes a behavioral hardware simulator, as
well as the infrastructure to deploy VTA on low-cost FPGA development boards
for fast prototyping.
By extending the TVM stack with a customizable, open-source deep learning
hardware accelerator design, we expose a transparent end-to-end deep learning
stack from the high-level deep learning framework down to the actual hardware
design and implementation. This forms a truly end-to-end, software-to-hardware
open-source stack for deep learning systems.
| 0 | 0 | 0 | 1 | 0 | 0 |
Efficient Localized Inference for Large Graphical Models | We propose a new localized inference algorithm for answering marginalization
queries in large graphical models with the correlation decay property. Given a
query variable and a large graphical model, we define a much smaller model in a
local region around the query variable in the target model so that the marginal
distribution of the query variable can be accurately approximated. We introduce
two approximation error bounds based on Dobrushin's comparison theorem and
apply our bounds to derive a greedy expansion algorithm that efficiently guides
the selection of neighbor nodes for localized inference. We verify our
theoretical bounds on various datasets and demonstrate that our localized
inference algorithm can provide fast and accurate approximation for large
graphical models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multi-kink collisions in the $ϕ^6$ model | We study simultaneous collisions of two, three, and four kinks and antikinks
of the $\phi^6$ model at the same spatial point. Unlike the $\phi^4$ kinks, the
$\phi^6$ kinks are asymmetric and this enriches the variety of the collision
scenarios. In our numerical simulations we observe both reflection and bound
state formation depending on the number of kinks and on their spatial ordering
in the initial configuration. We also analyze the extreme values of the energy
densities and the field gradient observed during the collisions. Our results
suggest that very high energy densities can be produced in multi-kink
collisions in a controllable manner. The appearance of high-energy-density
spots in multi-kink collisions can be important in various physical
applications of the Klein-Gordon model.
| 0 | 1 | 0 | 0 | 0 | 0 |
InverseFaceNet: Deep Monocular Inverse Face Rendering | We introduce InverseFaceNet, a deep convolutional inverse rendering framework
for faces that jointly estimates facial pose, shape, expression, reflectance
and illumination from a single input image. By estimating all parameters from
just a single image, advanced editing possibilities on a single face image,
such as appearance editing and relighting, become feasible in real time. Most
previous learning-based face reconstruction approaches do not jointly recover
all dimensions, or are severely limited in terms of visual quality. In
contrast, we propose to recover high-quality facial pose, shape, expression,
reflectance and illumination using a deep neural network that is trained using
a large, synthetically created training corpus. Our approach builds on a novel
loss function that measures model-space similarity directly in parameter space
and significantly improves reconstruction accuracy. We further propose a
self-supervised bootstrapping process in the network training loop, which
iteratively updates the synthetic training corpus to better reflect the
distribution of real-world imagery. We demonstrate that this strategy
outperforms completely synthetically trained networks. Finally, we show
high-quality reconstructions and compare our approach to several
state-of-the-art approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exact partial information decompositions for Gaussian systems based on dependency constraints | The Partial Information Decomposition (PID) [arXiv:1004.2515] provides a
theoretical framework to characterize and quantify the structure of
multivariate information sharing. A new method (Idep) has recently been
proposed for computing a two-predictor PID over discrete spaces
[arXiv:1709.06653]. A lattice of maximum entropy probability models is
constructed based on marginal dependency constraints, and the unique
information that a particular predictor has about the target is defined as the
minimum increase in joint predictor-target mutual information when that
particular predictor-target marginal dependency is constrained. Here, we apply
the Idep approach to Gaussian systems, for which the marginally constrained
maximum entropy models are Gaussian graphical models. Closed form solutions for
the Idep PID are derived for both univariate and multivariate Gaussian systems.
Numerical and graphical illustrations are provided, together with practical and
theoretical comparisons of the Idep PID with the minimum mutual information PID
(Immi) [arXiv:1411.2832]. In particular, it is proved that the Immi method
generally produces larger estimates of redundancy and synergy than does the
Idep method. In discussion of the practical examples, the PIDs are complemented
by the use of deviance tests for the comparison of Gaussian graphical models.
| 0 | 0 | 0 | 1 | 1 | 0 |
Deep Multimodal Subspace Clustering Networks | We present convolutional neural network (CNN) based approaches for
unsupervised multimodal subspace clustering. The proposed framework consists of
three main stages - multimodal encoder, self-expressive layer, and multimodal
decoder. The encoder takes multimodal data as input and fuses them into a
latent space representation. The self-expressive layer is responsible for enforcing
the self-expressiveness property and acquiring an affinity matrix corresponding
to the data points. The decoder reconstructs the original input data. The
network uses the distance between the decoder's reconstruction and the original
input in its training. We investigate early, late and intermediate fusion
techniques and propose three different encoders corresponding to them for
spatial fusion. The self-expressive layers and multimodal decoders are
essentially the same for different spatial fusion-based approaches. In addition
to various spatial fusion-based methods, an affinity fusion-based network is
also proposed in which the self-expressive layer corresponding to different
modalities is enforced to be the same. Extensive experiments on three datasets
show that the proposed methods significantly outperform the state-of-the-art
multimodal subspace clustering methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Finite-time generalization of the thermodynamic uncertainty relation | For fluctuating currents in non-equilibrium steady states, the recently
discovered thermodynamic uncertainty relation expresses a fundamental relation
between their variance and the overall entropic cost associated with the
driving. We show that this relation holds not only for the long-time limit of
fluctuations, as described by large deviation theory, but also for fluctuations
on arbitrary finite time scales. This generalization facilitates applying the
thermodynamic uncertainty relation to single molecule experiments, for which
infinite timescales are not accessible. Importantly, often this finite-time
variant of the relation allows inferring a bound on the entropy production that
is even stronger than the one obtained from the long-time limit. We illustrate
the relation for the fluctuating work that is performed by a stochastically
switching laser tweezer on a trapped colloidal particle.
| 0 | 1 | 0 | 0 | 0 | 0 |
Composition Properties of Inferential Privacy for Time-Series Data | With the proliferation of mobile devices and the internet of things,
developing principled solutions for privacy in time series applications has
become increasingly important. While differential privacy is the gold standard
for database privacy, many time series applications require a different kind of
guarantee, and a number of recent works have used some form of inferential
privacy to address these situations.
However, a major barrier to using inferential privacy in practice is its lack
of graceful composition -- even if the same or related sensitive data is used
in multiple releases that are safe individually, the combined release may have
poor privacy properties. In this paper, we study composition properties of a
form of inferential privacy called Pufferfish when applied to time-series data.
We show that while general Pufferfish mechanisms may not compose gracefully, a
specific Pufferfish mechanism, called the Markov Quilt Mechanism, which was
recently introduced, has strong composition properties comparable to that of
pure differential privacy when applied to time series data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Landau-Ginzburg theory of cortex dynamics: Scale-free avalanches emerge at the edge of synchronization | Understanding the origin, nature, and functional significance of complex
patterns of neural activity, as recorded by diverse electrophysiological and
neuroimaging techniques, is a central challenge in neuroscience. Such patterns
include collective oscillations emerging out of neural synchronization as well
as highly heterogeneous outbursts of activity interspersed by periods of
quiescence, called "neuronal avalanches." Much debate has been generated about
the possible scale invariance or criticality of such avalanches and its
relevance for brain function. Aimed at shedding light onto this, here we
analyze the large-scale collective properties of the cortex by using a
mesoscopic approach following the principle of parsimony of Landau-Ginzburg.
Our model is similar to that of Wilson-Cowan for neural dynamics but, crucially,
includes stochasticity and space; synaptic plasticity and inhibition are
considered as possible regulatory mechanisms. Detailed analyses uncover a phase
diagram including down-state, synchronous, asynchronous, and up-state phases
and reveal that empirical findings for neuronal avalanches are consistently
reproduced by tuning our model to the edge of synchronization. This reveals
that the putative criticality of cortical dynamics does not correspond to a
quiescent-to-active phase transition as usually assumed in theoretical
approaches but to a synchronization phase transition, at which incipient
oscillations and scale-free avalanches coexist. Furthermore, our model also
accounts for up and down states as they occur (e.g., during deep sleep). This
approach constitutes a framework to rationalize the possible collective phases
and phase transitions of cortical networks in simple terms, thus helping to
shed light on basic aspects of brain functioning from a very broad perspective.
| 0 | 0 | 0 | 0 | 1 | 0 |
Recurrent Autoregressive Networks for Online Multi-Object Tracking | The main challenge of online multi-object tracking is to reliably associate
object trajectories with detections in each video frame based on their tracking
history. In this work, we propose the Recurrent Autoregressive Network (RAN), a
temporal generative modeling framework to characterize the appearance and
motion dynamics of multiple objects over time. The RAN couples an external
memory and an internal memory. The external memory explicitly stores previous
inputs of each trajectory in a time window, while the internal memory learns to
summarize long-term tracking history and associate detections by processing the
external memory. We conduct experiments on the MOT 2015 and 2016 datasets to
demonstrate the robustness of our tracking method in highly crowded and
occluded scenes. Our method achieves top-ranked results on the two benchmarks.
| 1 | 0 | 0 | 0 | 0 | 0 |
The occurrence of transverse and longitudinal electric currents in the classical plasma under the action of N transverse electromagnetic waves | A classical plasma with an arbitrary degree of degeneracy of the electron
gas is considered. N (N>2) collinear electromagnetic waves propagate in the
plasma, and the response of the plasma to these waves is sought. The
distribution function is obtained from the Vlasov equation in a quadratic
approximation in two small parameters. A formula for calculating the electric
current is derived. It is shown that accounting for the nonlinearity leads to
a longitudinal electric current directed along the wave vector. This
longitudinal current is orthogonal to the known transverse current obtained in
the linear analysis. The case of small wave numbers is considered.
| 0 | 1 | 0 | 0 | 0 | 0 |
Well-posedness of a Model for the Growth of Tree Stems and Vines | The paper studies a PDE model for the growth of a tree stem or a vine, having
the form of a differential inclusion with state constraints. The equations
describe the elongation due to cell growth, and the response to gravity and to
external obstacles.
The main theorem shows that the evolution problem is well posed, until a
specific "breakdown configuration" is reached. A formula is proved,
characterizing the reaction produced by unilateral constraints. At a.e. time t,
this is determined by the minimization of an elastic energy functional under
suitable constraints.
| 0 | 0 | 1 | 0 | 0 | 0 |
Yonsei evolutionary population synthesis (YEPS). II. Spectro-photometric evolution of helium-enhanced stellar populations | The discovery of multiple stellar populations in Milky Way globular clusters
(GCs) has stimulated various follow-up studies on helium-enhanced stellar
populations. Here we present the evolutionary population synthesis models for
the spectro-photometric evolution of simple stellar populations (SSPs) with
varying initial helium abundance ($Y_{\rm ini}$). We show that $Y_{\rm ini}$
brings about dramatic changes in the spectro-photometric properties of SSPs.
As for the normal-helium SSPs, the integrated spectro-photometric evolution of
helium-enhanced SSPs also depends on metallicity and age for a given
$Y_{\rm ini}$. We discuss the implications and prospects for the
helium-enhanced populations in relation to the second-generation populations
found in the Milky Way GCs. All of the models are available at
\url{this http URL}.
| 0 | 1 | 0 | 0 | 0 | 0 |
Large deviation theorem for random covariance matrices | We establish a large deviation theorem for the empirical spectral
distribution of random covariance matrices whose entries are independent random
variables with mean 0, variance 1, and controlled fourth moments. Some new
properties of Laguerre polynomials are also given.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hindsight policy gradients | A reinforcement learning agent that needs to pursue different goals across
episodes requires a goal-conditional policy. In addition to their potential to
generalize desirable behavior to unseen goals, such policies may also enable
higher-level planning based on subgoals. In sparse-reward environments, the
capacity to exploit information about the degree to which an arbitrary goal has
been achieved while another goal was intended appears crucial to enable sample
efficient learning. However, reinforcement learning agents have only recently
been endowed with such capacity for hindsight. In this paper, we demonstrate
how hindsight can be introduced to policy gradient methods, generalizing this
idea to a broad class of successful algorithms. Our experiments on a diverse
selection of sparse-reward environments show that hindsight leads to a
remarkable increase in sample efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
Abundances in photoionized nebulae of the Local Group and nucleosynthesis of intermediate mass stars | Photoionized nebulae, comprising HII regions and planetary nebulae, are
excellent laboratories to investigate the nucleosynthesis and chemical
evolution of several elements in the Galaxy and other galaxies of the Local
Group. Our purpose in this investigation is threefold: (i) compare the
abundances of HII regions and planetary nebulae in each system in order to
investigate the differences derived from the age and origin of these objects,
(ii) compare the chemical evolution in different systems, such as the Milky
Way, the Magellanic Clouds, and other galaxies of the Local Group, and (iii)
investigate to what extent the nucleosynthesis contributions from the
progenitor stars affect the observed abundances in planetary nebulae, which
constrains the nucleosynthesis of intermediate mass stars. We show that all
objects in the samples present similar trends concerning distance-independent
correlations, and some constraints can be defined on the production of He and N
by the PN progenitor stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
Are Thousands of Samples Really Needed to Generate Robust Gene-List for Prediction of Cancer Outcome? | The prediction of cancer prognosis and metastatic potential immediately after
the initial diagnoses is a major challenge in current clinical research. The
relevance of such a signature is clear, as it will free many patients from the
agony and toxic side-effects associated with the adjuvant chemotherapy
automatically and sometimes carelessly prescribed to them. Motivated by this
issue, Ein-Dor (2006) and Zuk (2007) presented a Bayesian model which leads to
the following conclusion: Thousands of samples are needed to generate a robust
gene list for predicting outcome. This conclusion is based on existence of some
statistical assumptions. The current work raises doubts over this determination
by showing that: (1) These assumptions are not consistent with additional
assumptions such as sparsity and Gaussianity. (2) The empirical Bayes
methodology which was suggested in order to test the relevant assumptions
doesn't detect severe violations of the model assumptions and consequently an
overestimation of the required sample size might be incurred.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multi-scale analysis of lead-lag relationships in high-frequency financial markets | We propose a novel estimation procedure for scale-by-scale lead-lag
relationships of financial assets observed at a high-frequency in a
non-synchronous manner. The proposed estimation procedure does not require any
interpolation processing of the original data and is applicable to quite fine
resolution data. The validity of the proposed estimators is shown under the
continuous-time framework developed in our previous work Hayashi and Koike
(2016). An empirical application shows promising results of the proposed
approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Approximations of the allelic frequency spectrum in general supercritical branching populations | We consider a general branching population where the lifetimes of individuals
are i.i.d.\ with arbitrary distribution and where each individual gives birth
to new individuals at Poisson times independently from each other. In addition,
we suppose that individuals experience mutations at Poissonian rate $\theta$
under the infinitely many alleles assumption assuming that types are
transmitted from parents to offspring. This mechanism leads to a partition of
the population by type, called the allelic partition. The main object of this
work is the frequency spectrum $A(k,t)$ which counts the number of families of
size $k$ in the population at time $t$. The process $(A(k,t),\
t\in\mathbb{R}_+)$ is an example of non-Markovian branching process belonging
to the class of general branching processes counted by random characteristics.
In this work, we propose methods of approximation to replace the frequency
spectrum by simpler quantities. Our main goal is to study the asymptotic error
made in these approximations through central limit theorems. In the last
section, we perform several numerical analyses using this model, in particular
to analyze the behavior of one of these approximations with respect to Sabeti's
Extended Haplotype Homozygosity [18].
| 0 | 0 | 1 | 0 | 0 | 0 |
Anomaly Detection Using Optimally-Placed Micro-PMU Sensors in Distribution Grids | As the distribution grid moves toward a tightly-monitored network, it is
important to automate the analysis of the enormous amount of data produced by
the sensors to increase the operators' situational awareness of the system.
In this paper, focusing on Micro-Phasor Measurement Unit ($\mu$PMU) data, we
propose a hierarchical architecture for monitoring the grid and establish a set
of analytics and sensor fusion primitives for the detection of abnormal
behavior in the control perimeter. Due to the key role of the $\mu$PMU devices
in our architecture, a source-constrained optimal $\mu$PMU placement is also
described that finds the best location of the devices with respect to our
rules. The effectiveness of the proposed methods is tested on synthetic and
real $\mu$PMU data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Electron-Phonon Interaction in Ternary Rare-Earth Copper Antimonides LaCuSb2 and La(Cu0.8Ag0.2)Sb2 probed by Yanson Point-Contact Spectroscopy | Investigation of the electron-phonon interaction (EPI) in LaCuSb2 and
La(Cu0.8Ag0.2)Sb2 compounds by Yanson point-contact spectroscopy (PCS) has been
carried out. Point-contact spectra display a pronounced broad maximum in the
range of 10-20 mV caused by EPI. The variation of the position of this maximum
is likely connected with the anisotropic phonon spectrum of these layered
compounds. The absence of phonon features after the main maximum allows the
assessment of the Debye energy of about 40 meV. The EPI constant for the
LaCuSb2 compound was estimated to be {\lambda}=0.2+/-0.03. A zero-bias minimum
in differential resistance for the latter compound is observed for some point
contacts, which vanishes at about 6 K, pointing to the formation of
a superconducting phase under the point contact, while the superconducting
critical temperature of the bulk sample is only 1 K.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ultrahigh Magnetic Field Phases in Frustrated Triangular-lattice Magnet CuCrO$_2$ | The magnetic phases of a triangular-lattice antiferromagnet, CuCrO$_2$, were
investigated in magnetic fields along the $c$ axis, $H$ // [001], up to 120
T. Faraday rotation and magneto-absorption spectroscopy were used to unveil the
rich physics of magnetic phases. An up-up-down (UUD) magnetic structure phase
was observed around 90--105 T at temperatures around 10 K. Additional distinct
anomalies adjacent to the UUD phase were uncovered and the Y-shaped and the
V-shaped phases are proposed as viable candidates. These ordered phases emerge
as a result of the interplay of geometrical spin frustration, single-ion
anisotropy, and thermal fluctuations in extremely high magnetic fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
Preserving Differential Privacy in Convolutional Deep Belief Networks | The remarkable development of deep learning in medicine and healthcare domain
presents obvious privacy issues, when deep neural networks are built on users'
personal and highly sensitive data, e.g., clinical records, user profiles,
biomedical images, etc. However, only a few scientific studies on preserving
privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective. It significantly outperforms existing
solutions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Injectivity almost everywhere and mappings with finite distortion in nonlinear elasticity | We show that a sufficient condition for the weak limit of a sequence of
$W^1_q$-homeomorphisms with finite distortion to be almost everywhere injective
for $q \geq n-1$ can be stated by means of composition operators. Applying
this result, we study nonlinear elasticity problems with respect to these new
classes of mappings. Furthermore, we impose loose growth conditions on the
stored-energy function for the class of $W^1_n$-homeomorphisms with finite
distortion and integrable inner as well as outer distortion coefficients.
| 0 | 0 | 1 | 0 | 0 | 0 |
A probabilistic approach to the leader problem in random graphs | Consider the classical Erdos-Renyi random graph process wherein one starts
with an empty graph on $n$ vertices at time $t=0$. At each stage, an edge is
chosen uniformly at random and placed in the graph. After the original
fundamental work in [19], Erdős suggested that one should view the random
graph process as a "race of components". This suggested studying
functionals such as the time for fixation of the identity of the maximal
component, sometimes referred to as the "leader problem". Using refined
combinatorial techniques, {\L}uczak [25] provided a complete analysis of this
question including the close relationship to the critical scaling window of the
Erdos-Renyi process. In this paper, we abstract this problem to the context of
the multiplicative coalescent which by the work of Aldous in [3] describes the
evolution of the Erdos-Renyi random graph in the critical regime. Further,
different entrance boundaries of this process have arisen in the study of heavy
tailed network models in the critical regime with degree exponent $\tau \in
(3,4)$. The leader problem in the context of the Erdos-Renyi random graph also
played an important role in the study of the scaling limit of the minimal
spanning tree on the complete graph [2]. In this paper we provide a
probabilistic analysis of the leader problem for the multiplicative coalescent
in the context of entrance boundaries of relevance to critical random graphs.
As a special case we recover {\L}uczak's result in [25] for the Erdos-Renyi
random graph.
| 0 | 0 | 1 | 0 | 0 | 0 |
An improvement on LSB+ method | The Least Significant Bit (LSB) substitution is an old and simple data hiding
method that could almost effortlessly be implemented in spatial or transform
domain over any digital media. This method can be attacked by several
steganalysis methods, because it detectably changes statistical and perceptual
characteristics of the cover signal. A typical method for steganalysis of the
LSB substitution is the histogram attack that attempts to diagnose anomalies in
the cover image's histogram. A well-known method to withstand the histogram
attack is LSB+ steganography, which intentionally embeds some extra bits to make the
histogram look natural. However, the LSB+ method still affects the perceptual
and statistical characteristics of the cover signal. In this paper, we propose
a new method for image steganography, called LSB++, which improves over the
LSB+ image steganography by decreasing the amount of changes made to the
perceptual and statistical attributes of the cover image. We identify some
sensitive pixels affecting the signal characteristics, and then lock and keep
them from the extra bit embedding process of the LSB+ method, by introducing a
new embedding key. Evaluation results show that, without reducing the embedding
capacity, our method can decrease potentially detectable changes caused by the
embedding process.
| 1 | 0 | 0 | 0 | 0 | 0 |
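As background for the LSB family of methods in the abstract above, a minimal sketch of plain LSB substitution in Python (the `lsb_embed`/`lsb_extract` helper names and toy pixel values are illustrative assumptions, not the paper's LSB++ algorithm):

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: list) -> np.ndarray:
    """Embed a bit stream into the least significant bits of the cover pixels."""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b  # clear the LSB, then set it to the message bit
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> list:
    """Recover the first n_bits message bits from the stego image."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]
```

Each pixel changes by at most 1, which is why plain LSB is imperceptible yet statistically detectable, e.g. by the histogram attack discussed above.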
Oxygen reduction mechanisms in nanostructured La0.8Sr0.2MnO3 cathodes for Solid Oxide Fuel Cells | In this work we outline the mechanisms contributing to the oxygen reduction
reaction in nanostructured cathodes of La0.8Sr0.2MnO3 (LSM) for Solid Oxide
Fuel Cells (SOFC). These cathodes, developed from LSM nanostructured tubes, can
be used at lower temperatures compared to microstructured ones, which is
crucial for avoiding the degradation of the fuel cell components. This
reduction of the operating temperatures stems mainly from two factors: i) the
appearance of significant oxide ion diffusion through the cathode material in
which the nanostructure plays a key role and ii) an optimized gas-phase
diffusion of oxygen through the porous structure of the cathode, whose limiting
effect becomes negligible. A detailed analysis of our Electrochemical Impedance
Spectroscopy data, supported by first-principles calculations, points towards an
improved overall cathodic performance driven by fast transport of oxide ions through the
cathode surface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Interpretable Models with Causal Guarantees | Machine learning has shown much promise in helping improve the quality of
medical, legal, and economic decision-making. In these applications, machine
learning models must satisfy two important criteria: (i) they must be causal,
since the goal is typically to predict individual treatment effects, and (ii)
they must be interpretable, so that human decision makers can validate and
trust the model predictions. There has recently been much progress along each
direction independently, yet the state-of-the-art approaches are fundamentally
incompatible. We propose a framework for learning causal interpretable
models---from observational data---that can be used to predict individual
treatment effects. Our framework can be used with any algorithm for learning
interpretable models. Furthermore, we prove an error bound on the treatment
effects predicted by our model. Finally, in an experiment on real-world data,
we show that the models trained using our framework significantly outperform a
number of baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Achromatic super-oscillatory lenses with sub-wavelength focusing | Lenses are crucial to light-enabled technologies. Conventional lenses have
been perfected to achieve near-diffraction-limited resolution and minimal
chromatic aberrations. However, such lenses are bulky and cannot focus light
into a hotspot smaller than half the wavelength of light. Pupil filters, initially
suggested by Toraldo di Francia, can overcome the resolution constraints of
conventional lenses, but are not intrinsically chromatically corrected. Here we
report single-element planar lenses that not only deliver sub-wavelength
focusing (beating the diffraction limit of conventional refractive lenses) but
also focus light of different colors into the same hotspot. Using the principle
of super-oscillations we designed and fabricated a range of binary dielectric
and metallic lenses for visible and infrared parts of the spectrum that are
manufactured on silicon wafers, silica substrates and optical fiber tips. Such
low cost, compact lenses could be useful in mobile devices, data storage,
surveillance, robotics, space applications, imaging, manufacturing with light,
and spatially resolved nonlinear microscopies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Time-dependent linear-response variational Monte Carlo | We present the extension of variational Monte Carlo (VMC) to the calculation
of electronic excitation energies and oscillator strengths using time-dependent
linear-response theory. By exploiting the analogy existing between the linear
method for wave-function optimisation and the generalised eigenvalue equation
of linear-response theory, we formulate the equations of linear-response VMC
(LR-VMC). This LR-VMC approach involves the first- and second-order derivatives
of the wave function with respect to the parameters. We perform first tests of
the LR-VMC method within the Tamm-Dancoff approximation using
single-determinant Jastrow-Slater wave functions with different Slater basis
sets on some singlet and triplet excitations of the beryllium atom. Comparison
with reference experimental data and with configuration-interaction-singles
(CIS) results shows that LR-VMC generally outperforms CIS for excitation
energies and is thus a promising approach for calculating electronic
excited-state properties of atoms and molecules.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wireless Power Transfer for Distributed Estimation in Sensor Networks | This paper studies power allocation for distributed estimation of an unknown
scalar random source in sensor networks with a multiple-antenna fusion center
(FC), where wireless sensors are equipped with radio-frequency based energy
harvesting technology. The sensors' observations are locally processed using
an uncoded amplify-and-forward scheme. The processed signals are then sent to
the FC, and are coherently combined at the FC, at which the best linear
unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve
the following two power allocation problems: 1) minimizing distortion under
various power constraints; and 2) minimizing total transmit power under
distortion constraints, where the distortion is measured in terms of
mean-squared error of the BLUE. Two iterative algorithms are developed to solve
the non-convex problems, which converge at least to a local optimum. In
particular, the above algorithms are designed to jointly optimize the
amplification coefficients, energy beamforming, and receive filtering. For each
problem, a suboptimal design, a single-antenna FC scenario, and a common
harvester deployment for colocated sensors, are also studied. Using the
powerful semidefinite relaxation framework, our result is shown to be valid for
any number of sensors, each with different noise power, and for an arbitrary
number of antennas at the FC.
| 1 | 0 | 1 | 0 | 0 | 0 |
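The best linear unbiased estimator (BLUE) adopted at the fusion center has a simple closed form for a scalar source observed through known gains in independent noise. A minimal sketch under the standard model $y_i = h_i\theta + n_i$ (function name and the single-antenna simplification are assumptions here; the paper's joint optimization of amplification, beamforming, and filtering is not reproduced):

```python
import numpy as np

def blue_estimate(y, h, noise_var):
    """Best linear unbiased estimate of a scalar theta from y_i = h_i*theta + n_i,
    with independent zero-mean noise of variance noise_var[i]."""
    w = h / noise_var              # per-observation weight h_i / sigma_i^2
    gain = np.sum(h * w)           # sum_i h_i^2 / sigma_i^2
    theta_hat = np.sum(w * y) / gain
    mse = 1.0 / gain               # distortion (MSE) achieved by the BLUE
    return theta_hat, mse
```

The returned `mse` term is exactly the distortion measure that the paper's power-allocation problems minimize or constrain.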
About a non-standard interpolation problem | Using algebraic methods, and motivated by the one variable case, we study a
multipoint interpolation problem in the setting of several complex variables.
The duality realized by the residue generator associated with an underlying
Gorenstein algebra, using the Lagrange interpolation polynomial, plays a key
role in the arguments.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quantum spin liquid signatures in Kitaev-like frustrated magnets | Motivated by recent experiments on $\alpha$-RuCl$_3$, we investigate a
possible quantum spin liquid ground state of the honeycomb-lattice spin model
with bond-dependent interactions. We consider the $K-\Gamma$ model, where $K$
and $\Gamma$ represent the Kitaev and symmetric-anisotropic interactions
between spin-1/2 moments on the honeycomb lattice. Using the infinite density
matrix renormalization group (iDMRG), we provide compelling evidence for the
existence of quantum spin liquid phases in an extended region of the phase
diagram. In particular, we use transfer matrix spectra to show the evolution of
two-particle excitations with well-defined two-dimensional dispersion, which is
a strong signature of a quantum spin liquid. These results are compared with
predictions from Majorana mean-field theory and used to infer the quasiparticle
excitation spectra. Further, we compute the dynamical structure factor using
finite size cluster computations and show that the results resemble the
scattering continuum seen in neutron scattering experiments on
$\alpha$-RuCl$_3$. We discuss these results in light of recent and future
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Equivalent electric circuit of magnetosphere-ionosphere-atmosphere interaction | The aim of this study is to investigate the effects of magnetospheric
disturbances on the complicated nonlinear system of atmospheric processes. During
substorms and storms, the ionosphere was subjected to rather significant Joule
heating, and the power of precipitating energetic particles was also
great. Nevertheless, there were no abnormal variations of meteoparameters in
the lower atmosphere. If a mechanism exists by which powerful magnetospheric
disturbances affect meteorological processes in the atmosphere, it must involve a
more complicated chain of intermediate steps, and is not associated directly
with the energy that arrives in the ionosphere during storms. I discuss the
problem of the effect of the solar wind electric field sharp increase via the
global electric circuit during magnetospheric disturbances on the cloud layer
formation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Charge polarization effects on the optical response of blue-emitting superlattices | In the new approach to studying the optical response of periodic structures,
successfully applied to study the optical properties of blue-emitting InGaN/GaN
superlattices, the spontaneous charge polarization was neglected. To explore the
effect of this quantum-confined Stark phenomenon, we study the optical response,
assuming parabolic band edge modulations in the conduction and valence bands.
We discuss the consequences on the eigenfunction symmetries and the ensuing
optical transition selection rules. Using the new approach in the WKB
approximation of the finite periodic systems theory, we determine the energy
eigenvalues, their corresponding eigenfunctions and the subband structures in
the conduction and valence bands. We calculate the photoluminescence as a
function of the charge localization strength, and compare with the experimental
result. We show that for subbands close to the barrier edge the optical
response and the surface states are sensitive to charge polarization strength.
| 0 | 1 | 0 | 0 | 0 | 0 |
Replica analysis of overfitting in regression models for time-to-event data | Overfitting, which happens when the number of parameters in a model is too
large compared to the number of data points available for determining these
parameters, is a serious and growing problem in survival analysis. While modern
medicine presents us with data of unprecedented dimensionality, these data
cannot yet be used effectively for clinical outcome prediction. Standard error
measures in maximum likelihood regression, such as p-values and z-scores, are
blind to overfitting, and even for Cox's proportional hazards model (the main
tool of medical statisticians), one finds in literature only rules of thumb on
the number of samples required to avoid overfitting. In this paper we present a
mathematical theory of overfitting in regression models for time-to-event data,
which aims to increase our quantitative understanding of the problem and
provide practical tools with which to correct regression outcomes for the
impact of overfitting. It is based on the replica method, a statistical
mechanical technique for the analysis of heterogeneous many-variable systems
that has been used successfully for several decades in physics, biology, and
computer science, but not yet in medical statistics. We develop the theory
initially for arbitrary regression models for time-to-event data, and verify
its predictions in detail for the popular Cox model.
| 0 | 1 | 0 | 1 | 0 | 0 |
System Description: Russell - A Logical Framework for Deductive Systems | Russell is a logical framework for the specification and implementation of
deductive systems. It is a high-level language with respect to Metamath
language, so it inherits the Metamath foundations, i.e. it doesn't rely on
any particular formal calculus, but rather is a pure logical framework. The
main difference with Metamath is in the proof language and approach to syntax:
the proofs have a declarative form, i.e. consist of actual expressions, which
are used in proofs, while syntactic grammar rules are separated from the
meaningful rules of inference.
Russell is implemented in C++14 and is distributed under the GPL v3 license. The
repository contains translators from Metamath to Russell and back. Original
Metamath theorem base (almost 30 000 theorems) can be translated to Russell,
verified, translated back to Metamath and verified with the original Metamath
verifier. Russell can be downloaded from the repository
this https URL
| 1 | 0 | 1 | 0 | 0 | 0 |
Spinor analysis | "Let us call the novel quantities which, in addition to the vectors and
tensors, have appeared in the quantum mechanics of the spinning electron, and
which in the case of the Lorentz group are quite differently transformed from
tensors, as spinors for short. Is there no spinor analysis that every physicist
can learn, such as tensor analysis, and with the aid of which all the possible
spinors can be formed, and secondly, all the invariant equations in which
spinors occur?" So Mr Ehrenfest asked me and the answer will be given below.
| 0 | 1 | 0 | 0 | 0 | 0 |
Identifiability of phylogenetic parameters from k-mer data under the coalescent | Distances between sequences based on their $k$-mer frequency counts can be
used to reconstruct phylogenies without first computing a sequence alignment.
Past work has shown that effective use of $k$-mer methods depends on 1)
model-based corrections to distances based on $k$-mers and 2) breaking long
sequences into blocks to obtain repeated trials from the sequence-generating
process. Good performance of such methods is based on having many high-quality
blocks with many homologous sites, which can be problematic to guarantee a
priori.
Nature provides natural blocks of sequence grouped into homologous regions---namely,
the genes. However, directly using past work in this setting is problematic
because of possible discordance between different gene trees and the underlying
species tree. Using the multispecies coalescent model as a basis, we derive
model-based moment formulas that involve the divergence times and the
coalescent parameters. From this setting, we prove identifiability results for
the tree and branch length parameters under the Jukes-Cantor model of sequence
mutations.
| 0 | 0 | 1 | 0 | 0 | 0 |
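A minimal sketch of the kind of alignment-free $k$-mer distance referenced above, without the model-based corrections the abstract calls for (the function names and the Euclidean choice of distance are illustrative assumptions):

```python
from collections import Counter

def kmer_freqs(seq, k):
    """Normalized k-mer frequency vector of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def kmer_distance(a, b, k=2):
    """Euclidean distance between the k-mer frequency vectors of two sequences."""
    fa, fb = kmer_freqs(a, k), kmer_freqs(b, k)
    support = set(fa) | set(fb)
    return sum((fa.get(x, 0.0) - fb.get(x, 0.0)) ** 2 for x in support) ** 0.5
```

In the paper's setting such raw distances must be corrected using model-based moment formulas under the multispecies coalescent before tree reconstruction.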
Short-Time Nonlinear Effects in the Exciton-Polariton System | In the exciton-polariton system, a linear dispersive photon field is coupled
to a nonlinear exciton field. Short-time analysis of the lossless system shows
that, when the photon field is excited, the time required for that field to
exhibit nonlinear effects is longer than the time required for the nonlinear
Schrödinger equation, in which the photon field itself is nonlinear. When the
initial condition is scaled by $\epsilon^\alpha$, it is found that the relative
error committed by omitting the nonlinear term in the exciton-polariton system
remains within $\epsilon$ for all times up to $t=C\epsilon^\beta$, where
$\beta=(1-\alpha(p-1))/(p+2)$. This is in contrast to $\beta=1-\alpha(p-1)$ for
the nonlinear Schrödinger equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
GTC Observations of an Overdense Region of LAEs at z=6.5 | We present the results of our search for the faint galaxies near the end of
the Reionisation Epoch. This has been done using very deep OSIRIS images
obtained at the Gran Telescopio Canarias (GTC). Our observations focus around
two close, massive Lyman Alpha Emitters (LAEs) at redshift 6.5, discovered in
the SXDS field within a large-scale overdense region (Ouchi et al. 2010). The
total GTC observing time in three medium band filters (F883w35, F913w25 and
F941w33) is over 34 hours covering $7.0\times8.5$ arcmin$^2$ (or $\sim30,000$
Mpc$^3$ at $z=6.5$). In addition to the two spectroscopically confirmed LAEs in
the field, we have identified 45 other LAE candidates. The preliminary
luminosity function derived from our observations, assuming a spectroscopic
confirmation success rate of $\frac{2}{3}$ as in previous surveys, suggests
this area is about 2 times denser than the general field galaxy population at
$z=6.5$. If confirmed spectroscopically, our results will imply the discovery
of one of the earliest protoclusters in the universe, which will evolve to
resemble the most massive galaxy clusters today.
| 0 | 1 | 0 | 0 | 0 | 0 |
Comment on Photothermal radiometry parametric identifiability theory for reliable and unique nondestructive coating thickness and thermophysical measurements, J. Appl. Phys. 121(9), 095101 (2017) | A recent paper [X. Guo, A. Mandelis, J. Tolev and K. Tang, J. Appl. Phys.,
121, 095101 (2017)] intends to demonstrate that from the photothermal
radiometry signal obtained on a coated opaque sample in 1D transfer, one should
be able to identify separately the following three parameters of the coating:
thermal diffusivity, thermal conductivity and thickness. In this comment, it is
shown that the three parameters are correlated in the considered experimental
arrangement, the identifiability criterion is in error and the thickness
inferred therefrom is not trustworthy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computing the projected reachable set of switched affine systems: an application to systems biology | A fundamental question in systems biology is what combinations of mean and
variance of the species present in a stochastic biochemical reaction network
are attainable by perturbing the system with an external signal. To address
this question, we show that the moments evolution in any generic network can be
either approximated or, under suitable assumptions, computed exactly as the
solution of a switched affine system. Motivated by this application, we propose
a new method to approximate the reachable set of switched affine systems. A
remarkable feature of our approach is that it allows one to easily compute
projections of the reachable set for pairs of moments of interest, without
requiring the computation of the full reachable set, which can be prohibitive
for large networks. As a second contribution, we also show how to select the
external signal in order to maximize the probability of reaching a target set.
To illustrate the method, we study a renowned model of controlled gene expression
and we derive estimates of the reachable set, for the protein mean and
variance, that are more accurate than those available in the literature and
consistent with experimental data.
| 1 | 0 | 1 | 0 | 0 | 0 |
Temporal Action Localization by Structured Maximal Sums | We address the problem of temporal action localization in videos. We pose
action localization as a structured prediction over arbitrary-length temporal
windows, where each window is scored as the sum of frame-wise classification
scores. Additionally, our model classifies the start, middle, and end of each
action as separate components, allowing our system to explicitly model each
action's temporal evolution and take advantage of informative temporal
dependencies present in this structure. In this framework, we localize actions
by searching for the structured maximal sum, a problem for which we develop a
novel, provably-efficient algorithmic solution. The frame-wise classification
scores are computed using features from a deep Convolutional Neural Network
(CNN), which are trained end-to-end to directly optimize for a novel structured
objective. We evaluate our system on the THUMOS 14 action detection benchmark
and achieve competitive performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
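The core of the search described above, finding the contiguous window that maximizes a sum of frame-wise scores, reduces to the classical maximal-sum (Kadane) recursion. A sketch of that core only, omitting the paper's separate start/middle/end components (function name is an assumption):

```python
def max_sum_window(scores):
    """Best contiguous window under a sum-of-frame-scores model: returns
    (best_sum, start, end) with end inclusive (Kadane's algorithm)."""
    best_sum, best_start, best_end = float("-inf"), 0, 0
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:            # a non-positive running prefix never helps
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end
```

This runs in linear time in the number of frames, which is why scoring windows as sums of per-frame scores admits a provably efficient localization step.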
Using Transfer Learning for Image-Based Cassava Disease Detection | Cassava is the third largest source of carbohydrates for human food in the
world but is vulnerable to virus diseases, which threaten to destabilize food
security in sub-Saharan Africa. Novel methods of cassava disease detection are
needed to support improved control which will prevent this crisis. Image
recognition offers both a cost effective and scalable technology for disease
detection. New transfer learning methods offer an avenue for this technology to
be easily deployed on mobile devices. Using a dataset of cassava disease images
taken in the field in Tanzania, we applied transfer learning to train a deep
convolutional neural network to identify three diseases and two types of pest
damage (or lack thereof). The best trained model accuracies were 98% for brown
leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage
(GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic
disease (CMD). The best model achieved an overall accuracy of 93% for data not
used in the training process. Our results show that the transfer learning
approach for image recognition of field images offers a fast, affordable, and
easily deployable strategy for digital plant disease detection.
| 1 | 0 | 0 | 0 | 0 | 0 |
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments | Ability to continuously learn and adapt from limited experience in
nonstationary environments is an important milestone on the path towards
general intelligence. In this paper, we cast the problem of continuous
adaptation into the learning-to-learn framework. We develop a simple
gradient-based meta-learning algorithm suitable for adaptation in dynamically
changing and adversarial scenarios. Additionally, we design a new multi-agent
competitive environment, RoboSumo, and define iterated adaptation games for
testing various aspects of continuous adaptation strategies. We demonstrate
that meta-learning enables significantly more efficient adaptation than
reactive baselines in the few-shot regime. Our experiments with a population of
agents that learn and compete suggest that meta-learners are the fittest.
| 1 | 0 | 0 | 0 | 0 | 0 |
Alpha-Divergences in Variational Dropout | We investigate the use of alternative divergences to Kullback-Leibler (KL) in
variational inference(VI), based on the Variational Dropout \cite{kingma2015}.
Stochastic gradient variational Bayes (SGVB) \cite{aevb} is a general framework
for estimating the evidence lower bound (ELBO) in Variational Bayes. In this
work, we extend the SGVB estimator to use Alpha-Divergences, which are
alternatives to the KL objective of VI. The Gaussian dropout can be
seen as a local reparametrization trick of the SGVB objective. We extend the
Variational Dropout to use alpha divergences for variational inference. Our
results compare $\alpha$-divergence variational dropout with standard
variational dropout with correlated and uncorrelated weight noise. We show that
the $\alpha$-divergence with $\alpha \rightarrow 1$ (or KL divergence) is still
a good measure for use in variational inference, in spite of the efficient use
of Alpha-divergences for Dropout VI \cite{Li17}. $\alpha \rightarrow 1$ can
yield the lowest training error, and optimizes a good evidence lower bound
(ELBO) among all values of the parameter $\alpha \in
[0,\infty)$.
| 1 | 0 | 0 | 1 | 0 | 0 |
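As a numerical illustration of the $\alpha \rightarrow 1$ limit discussed above: for two equal-variance Gaussians the Renyi $\alpha$-divergence has the closed form $\alpha(\mu_1-\mu_2)^2/(2\sigma^2)$, which equals the KL divergence at $\alpha = 1$ (this equal-variance special case is chosen for simplicity and is not the dropout posterior used in the paper):

```python
def renyi_gauss(mu1, mu2, sigma, alpha):
    """Renyi alpha-divergence between N(mu1, sigma^2) and N(mu2, sigma^2)."""
    return alpha * (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

def kl_gauss(mu1, mu2, sigma):
    """KL divergence between the same pair of equal-variance Gaussians."""
    return (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)
```

Smaller $\alpha$ gives a smaller (mass-covering) penalty, while $\alpha \rightarrow 1$ recovers the usual KL objective of variational inference.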
Curvature properties of Robinson-Trautman metric | The curvature properties of Robinson-Trautman metric have been investigated.
It is shown that Robinson-Trautman metric admits several kinds of
pseudosymmetric type structures such as Weyl pseudosymmetric, Ricci
pseudosymmetric, pseudosymmetric Weyl conformal curvature tensor etc. Also it
is shown that the difference $R\cdot R - Q(S,R)$ is linearly dependent with
$Q(g,C)$ but the metric is not Ricci generalized pseudosymmetric. Moreover, it
is proved that this metric is of Roter type and 2-quasi-Einstein, its Ricci tensor
is Riemann compatible, and its Weyl conformal curvature 2-forms are recurrent. It
is also shown that the energy momentum tensor of the metric is pseudosymmetric
and the conditions under which such tensor is of Codazzi type and cyclic
parallel have been investigated. Finally, we have made a comparison between the
curvature properties of Robinson-Trautman metric and Som-Raychaudhuri metric.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dehn invariant of flexible polyhedra | We prove that the Dehn invariant of any flexible polyhedron in Euclidean
space of dimension greater than or equal to 3 is constant during the flexion.
In dimensions 3 and 4 this implies that any flexible polyhedron remains
scissors congruent to itself during the flexion. This proves the Strong Bellows
Conjecture posed by Connelly in 1979. It was believed that this conjecture was
disproved by Alexandrov and Connelly in 2009. However, we find an error in
their counterexample. Further, we show that the Dehn invariant of a flexible
polyhedron in either sphere or Lobachevsky space of dimension greater than or
equal to 3 is constant during the flexion if and only if this polyhedron
satisfies the usual Bellows Conjecture, i.e., its volume is constant during
every flexion of it. Using previous results due to the first listed author, we
deduce that the Dehn invariant is constant during the flexion for every bounded
flexible polyhedron in odd-dimensional Lobachevsky space and for every flexible
polyhedron with sufficiently small edge lengths in any space of constant
curvature of dimension greater than or equal to 3.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Evaluation of Embodied Navigation Agents | Skillful mobile operation in three-dimensional environments is a primary
topic of study in Artificial Intelligence. The past two years have seen a surge
of creative work on navigation. This creative output has produced a plethora of
sometimes incompatible task definitions and evaluation protocols. To coordinate
ongoing and future research in this area, we have convened a working group to
study empirical methodology in navigation research. The present document
summarizes the consensus recommendations of this working group. We discuss
different problem statements and the role of generalization, present evaluation
measures, and provide standard scenarios that can be used for benchmarking.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single Magnetic Impurity in Tilted Dirac Surface States | We utilize variational method to investigate the Kondo screening of a
spin-1/2 magnetic impurity in tilted Dirac surface states with the Dirac cone
tilted along the $k_y$-axis. We mainly study about the effect of the tilting
term on the binding energy and the spin-spin correlation between magnetic
impurity and conduction electrons, and compare the results with the
counterparts in a two dimensional helical metal. The binding energy has a
critical value when the Dirac cone is only slightly tilted. However, as the tilting
term increases, the density of states around the Fermi surface becomes
significant, such that the impurity and the host material always favor a bound
state. The diagonal and the off-diagonal terms of the spin-spin correlation
between the magnetic impurity and conduction electrons are also studied. Due to
the spin-orbit coupling and the tilting of the spectra, various components of
spin-spin correlation show very strong anisotropy in coordinate space, and are
of power-law decay with respect to the spatial displacements.
| 0 | 1 | 0 | 0 | 0 | 0 |
Leveraging the Path Signature for Skeleton-based Human Action Recognition | Human action recognition in videos is one of the most challenging tasks in
computer vision. One important issue is how to design discriminative features
for representing spatial context and temporal dynamics. Here, we introduce a
path signature feature to encode information from intra-frame and inter-frame
contexts. A key step towards leveraging this feature is to construct the proper
trajectories (paths) for the data stream. In each frame, the correlated
constraints of human joints are treated as small paths, then the spatial path
signature features are extracted from them. In video data, the evolution of
these spatial features over time can also be regarded as paths from which the
temporal path signature features are extracted. Eventually, all these features
are concatenated to constitute the input vector of a fully connected neural
network for action classification. Experimental results on four standard
benchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D
demonstrate that the proposed approach achieves state-of-the-art accuracy even
in comparison with recent deep learning based models.
| 1 | 0 | 0 | 0 | 0 | 0 |
How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories | Reconstruction of population histories is a central problem in population
genetics. Existing coalescent-based methods, like the seminal work of Li and
Durbin (Nature, 2011), attempt to solve this problem using sequence data but
have no rigorous guarantees. Determining the amount of data needed to correctly
reconstruct population histories is a major challenge. Using a variety of tools
from information theory, the theory of extremal polynomials, and approximation
theory, we prove new sharp information-theoretic lower bounds on the problem of
reconstructing population structure -- the history of multiple subpopulations
that merge, split and change sizes over time. Our lower bounds are exponential
in the number of subpopulations, even when reconstructing recent histories. We
demonstrate the sharpness of our lower bounds by providing algorithms for
distinguishing and learning population histories with matching dependence on
the number of subpopulations.
| 0 | 0 | 0 | 0 | 1 | 0 |
Source localization in an ocean waveguide using supervised machine learning | Source localization in ocean acoustics is posed as a machine learning problem
in which data-driven methods learn source ranges directly from observed
acoustic data. The pressure received by a vertical linear array is preprocessed
by constructing a normalized sample covariance matrix (SCM) and used as the
input. Three machine learning methods (feed-forward neural networks (FNN),
support vector machines (SVM) and random forests (RF)) are investigated in this
paper, with focus on the FNN. The range estimation problem is solved both as a
classification problem and as a regression problem by these three machine
learning algorithms. The results of range estimation for the Noise09 experiment
are compared for FNN, SVM, RF and conventional matched-field processing and
demonstrate the potential of machine learning for underwater source
localization.
| 1 | 1 | 0 | 0 | 0 | 0 |
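The preprocessing step described above, forming a normalized sample covariance matrix (SCM) from vertical-array snapshots, can be sketched as follows (the unit-norm normalization convention is one common choice and an assumption here, not necessarily the paper's exact one):

```python
import numpy as np

def normalized_scm(snapshots):
    """Sample covariance matrix from array snapshots (shape: sensors x snapshots),
    with each snapshot scaled to unit norm so the absolute source level drops out."""
    x = snapshots / np.linalg.norm(snapshots, axis=0, keepdims=True)
    return x @ x.conj().T / x.shape[1]
```

The resulting Hermitian matrix (or its vectorized upper triangle) is what a feed-forward network, SVM, or random forest would consume as the input feature for range estimation.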
Mining Illegal Insider Trading of Stocks: A Proactive Approach | Illegal insider trading of stocks is based on releasing non-public
information (e.g., new product launch, quarterly financial report, acquisition
or merger plan) before the information is made public. Detecting illegal
insider trading is difficult due to the complex, nonlinear, and non-stationary
nature of the stock market. In this work, we present an approach that detects
and predicts illegal insider trading proactively from large heterogeneous
sources of structured and unstructured data using a deep-learning based
approach combined with discrete signal processing on the time series data. In
addition, we use a tree-based approach that visualizes events and actions to
aid analysts in their understanding of large amounts of unstructured data.
Using existing data, we have discovered that our approach has a good success
rate in detecting illegal insider trading patterns.
| 0 | 0 | 0 | 1 | 0 | 1 |
Discovery of Extreme [OIII]+H$\beta$ Emitting Galaxies Tracing an Overdensity at z~3.5 in CDF-South | Using deep multi-wavelength photometry of galaxies from ZFOURGE, we group
galaxies at $2.5<z<4.0$ by the shape of their spectral energy distributions
(SEDs). We identify a population of galaxies with excess emission in the
$K_s$-band, which corresponds to [OIII]+H$\beta$ emission at $2.95<z<3.65$.
This population includes 78% of the bluest galaxies with UV slopes steeper than
$\beta = -2$. We de-redshift and scale this photometry to build two composite
SEDs, enabling us to measure equivalent widths of these Extreme [OIII]+H$\beta$
Emission Line Galaxies (EELGs) at $z\sim3.5$. We identify 60 galaxies that
comprise a composite SED with [OIII]+H$\beta$ rest-frame equivalent width of
$803\pm228$\AA\ and another 218 galaxies in a composite SED with equivalent
width of $230\pm90$\AA. These EELGs are analogous to the `green peas' found in
the SDSS, and are thought to be undergoing their first burst of star formation
due to their blue colors ($\beta < -1.6$), young ages
($\log(\rm{age}/yr)\sim7.2$), and low dust attenuation values. Their strong
nebular emission lines and compact sizes (typically $\sim1.4$ kpc) are
consistent with the properties of the star-forming galaxies possibly
responsible for reionizing the universe at $z>6$. Many of the EELGs also
exhibit Lyman-$\alpha$ emission. Additionally, we find that many of these
sources are clustered in an overdensity in the Chandra Deep Field South, with
five spectroscopically confirmed members at $z=3.474 \pm 0.004$. The spatial
distribution and photometric redshifts of the ZFOURGE population further
confirm the overdensity highlighted by the EELGs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Predictive Simulations for Tuning Electronic and Optical Properties of SubPc Derivatives | Boron subphthalocyanine chloride is an electron donor material used in small
molecule organic photovoltaics with an unusually large molecular dipole moment.
Using first-principles calculations, we investigate enhancing the electronic
and optical properties of boron subphthalocyanine chloride, by substituting the
boron and chlorine atoms with other trivalent and halogen atoms in order to
modify the molecular dipole moment. Gas phase molecular structures and
properties are predicted with hybrid functionals. Using positions and
orientations of the known compounds as the starting coordinates for these
molecules, stable crystalline structures are derived following a procedure that
involves perturbation and accurate total energy minimization. Electronic
structure and photonic properties of the predicted crystals are computed using
the GW method and the Bethe-Salpeter equation, respectively. Finally, a simple
transport model is used to demonstrate the importance of molecular dipole
moments on device performance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning to attend in a brain-inspired deep neural network | Recent machine learning models have shown that including attention as a
component results in improved model accuracy and interpretability, despite the
concept of attention in these approaches only loosely approximating the brain's
attention mechanism. Here we extend this work by building a more brain-inspired
deep network model of the primate ATTention Network (ATTNet) that learns to
shift its attention so as to maximize the reward. Using deep reinforcement
learning, ATTNet learned to shift its attention to the visual features of a
target category in the context of a search task. ATTNet's dorsal layers also
learned to prioritize these shifts of attention so as to maximize success of
the ventral pathway classification and receive greater reward. Model behavior
was tested against the fixations made by subjects searching images for the same
cued category. Both subjects and ATTNet showed evidence for attention being
preferentially directed to target goals, behaviorally measured as oculomotor
guidance to targets. More fundamentally, ATTNet learned to shift its attention
to target-like objects and spatially route its visual inputs to accomplish the
task. This work makes a step toward a better understanding of the role of
attention in the brain and other computational systems.
| 0 | 0 | 0 | 0 | 1 | 0 |
Anisotropic functional Laplace deconvolution | In the present paper we consider the problem of estimating a
three-dimensional function $f$ based on observations from its noisy Laplace
convolution. Our study is motivated by the analysis of Dynamic Contrast
Enhanced (DCE) imaging data. We construct an adaptive wavelet-Laguerre
estimator of $f$, derive minimax lower bounds for the $L^2$-risk when $f$
belongs to a three-dimensional Laguerre-Sobolev ball and demonstrate that the
wavelet-Laguerre estimator is adaptive and asymptotically near-optimal in a
wide range of Laguerre-Sobolev spaces. We carry out a limited simulation study
and show that the estimator performs well in a finite sample setting. Finally,
we use the technique for the solution of the Laplace deconvolution problem on
the basis of DCE Computerized Tomography data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Prediction of many-electron wavefunctions using atomic potentials | For a given many-electron molecule, it is possible to define a corresponding
one-electron Schrödinger equation, using potentials derived from simple
atomic densities, whose solution predicts fairly accurate molecular orbitals
for single- and multi-determinant wavefunctions for the molecule. The energy is
not predicted and must be evaluated by calculating Coulomb and exchange
interactions over the predicted orbitals. Potentials are found by minimizing
the energy of predicted wavefunctions. There exist slightly less accurate
average potentials for first-row atoms that can be used without modification in
different molecules. For a test set of molecules representing different bonding
environments, these average potentials give wavefunctions with energies that
deviate from exact self-consistent field or configuration interaction energies
by less than 0.08 eV and 0.03 eV per bond or valence electron pair,
respectively.
| 0 | 1 | 0 | 0 | 0 | 0 |
Free energy of formation of a crystal nucleus in incongruent solidification: Implication for modeling the crystallization of aqueous nitric acid droplets in type 1 polar stratospheric clouds | Using the formalism of the classical nucleation theory, we derive an
expression for the reversible work $W_*$ of formation of a binary crystal
nucleus in a liquid binary solution of non-stoichiometric composition
(incongruent crystallization). Applied to the crystallization of aqueous nitric
acid (NA) droplets, the new expression more adequately takes account of the
effect of nitric acid vapor compared to the conventional expression of
MacKenzie, Kulmala, Laaksonen, and Vesala (MKLV) [J.Geophys.Res. 102, 19729
(1997)]. The predictions of both MKLV and modified expressions for the average
liquid-solid interfacial tension $\sigma^{ls}$ of nitric acid dihydrate (NAD)
crystals are compared by using existing experimental data on the incongruent
crystallization of aqueous NA droplets of composition relevant to polar
stratospheric clouds (PSCs). The predictions based on the MKLV expression are
higher by about 5% compared to predictions based on our modified expression.
This results in similar differences between the predictions of both expressions
for the solid-vapor interfacial tension $\sigma^{sv}$ of NAD crystal nuclei.
The latter can be obtained by analyzing experimental data on crystal
nucleation rates in aqueous NA droplets and exploiting the dominance of the
surface-stimulated mode of crystal nucleation in small droplets and its
negligibility in large ones. Applying that method, our expression for $W_*$
provides an estimate for $\sigma^{sv}$ of NAD in the range from 92 dyn/cm to
100 dyn/cm, while the MKLV expression predicts it in the range from 95 dyn/cm
to 105 dyn/cm. The predictions of both expressions for $W_*$ become identical
in the case of congruent crystallization; this was also demonstrated by
applying our method to the nucleation of nitric acid trihydrate (NAT) crystals
in PSC droplets of stoichiometric composition.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ensemble learning with Conformal Predictors: Targeting credible predictions of conversion from Mild Cognitive Impairment to Alzheimer's Disease | Most machine learning classifiers give predictions for new examples
accurately, yet without indicating how trustworthy predictions are. In the
medical domain, this hampers their integration in decision support systems,
which could be useful in clinical practice. We use a supervised learning
approach that combines Ensemble learning with Conformal Predictors to predict
conversion from Mild Cognitive Impairment to Alzheimer's Disease. Our goal is
to enhance the classification performance (Ensemble learning) and complement
each prediction with a measure of credibility (Conformal Predictors). Our
results showed the superiority of the proposed approach over a similar ensemble
framework with standard classifiers.
| 0 | 0 | 0 | 1 | 0 | 0 |
Parameter Sharing Deep Deterministic Policy Gradient for Cooperative Multi-agent Reinforcement Learning | Deep reinforcement learning for multi-agent cooperation and competition has
been a hot topic recently. This paper focuses on cooperative multi-agent
problem based on actor-critic methods under local-observation settings.
Multi-agent deep deterministic policy gradient obtained state-of-the-art
results for some multi-agent games; however, it cannot scale well with a
growing number of agents.
In order to boost scalability, we propose a parameter sharing deterministic
policy gradient method with three variants based on neural networks, including
actor-critic sharing, actor sharing and actor sharing with partially shared
critic. Benchmarks from rllab show that the proposed method has advantages in
learning speed and memory efficiency, scales well with a growing number of
agents, and moreover can make full use of reward sharing and exchangeability
when possible.
| 1 | 0 | 0 | 0 | 0 | 0 |
Repair Strategies for Storage on Mobile Clouds | We study the data reliability problem for a community of devices forming a
mobile cloud storage system. We consider the application of regenerating codes
for file maintenance within a geographically-limited area. Such codes require
lower bandwidth to regenerate lost data fragments compared to file replication
or reconstruction. We investigate threshold-based repair strategies where data
repair is initiated after a threshold number of data fragments have been lost
due to node mobility. We show that at a low departure-to-repair rate regime, a
lazy repair strategy in which repairs are initiated after several nodes have
left the system outperforms eager repair in which repairs are initiated after a
single departure. This optimality is reversed when nodes are highly mobile. We
further compare distributed and centralized repair strategies and derive the
optimal repair threshold for minimizing the average repair cost per unit of
time, as a function of underlying code parameters. In addition, we examine
cooperative repair strategies and show performance improvements compared to
non-cooperative codes. We investigate several models for the time needed for
node repair including a simple fixed time model that allows for the computation
of closed-form expressions and a more realistic model that takes into account
the number of repaired nodes. We derive the conditions under which the former
model approximates the latter. Finally, an extended model where additional
failures are allowed during the repair process is investigated. Overall, our
results establish the joint effect of code design and repair algorithms on the
maintenance cost of distributed storage systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
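The lazy-vs-eager trade-off above can be illustrated with a toy cost model. This is my own sketch, not the paper's analysis: it ignores repair time, the risk of data loss while waiting, and node arrivals, and simply charges each repair a fixed setup cost plus a per-fragment cost, with departures at a constant rate:

```python
def repair_cost_rate(departure_rate, threshold, setup_cost, per_fragment_cost):
    """Average repair cost per unit time under a threshold policy:
    a repair fires after `threshold` departures and restores all lost
    fragments at once.  Toy model only -- fixed setup cost plus a
    linear per-fragment cost."""
    repairs_per_time = departure_rate / threshold
    return repairs_per_time * (setup_cost + threshold * per_fragment_cost)

# Hypothetical costs: eager repairs after every departure, lazy after four
eager = repair_cost_rate(1.0, 1, setup_cost=5.0, per_fragment_cost=1.0)  # 6.0
lazy = repair_cost_rate(1.0, 4, setup_cost=5.0, per_fragment_cost=1.0)   # 2.25
```

Even this crude model shows why lazy repair wins when the fixed setup (bandwidth) cost dominates: the setup is amortized over several fragments per repair event.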
Mean-variance portfolio selection under partial information with drift uncertainty | This paper studies a mean-variance portfolio selection problem under partial
information with drift uncertainty. It is proved that all the contingent claims
in this model are attainable in the sense of Xiong and Zhou. Further, we
propose a numerical scheme to approximate the optimal portfolio. Malliavin
calculus and the strong law of large numbers play important roles in this
scheme.
| 0 | 0 | 0 | 0 | 0 | 1 |
Learning from MOM's principles: Le Cam's approach | We obtain estimation error rates for estimators obtained by aggregation of
regularized median-of-means tests, following a construction of Le Cam. The
results hold with exponentially large probability -- as in the Gaussian
framework with independent noise -- under only weak moment assumptions on data
and without assuming independence between noise and design. Any norm may be
used for regularization. When it has some sparsity inducing power we recover
sparse rates of convergence.
The procedure is robust since a large part of the data may be corrupted; these
outliers need have nothing to do with the oracle we want to reconstruct. Our general
risk bound is of order \begin{equation*} \max\left(\mbox{minimax rate in the
i.i.d. setup}, \frac{\text{number of outliers}}{\text{number of
observations}}\right) \enspace. \end{equation*}In particular, the number of
outliers may be as large as (number of data) $\times$(minimax rate) without
affecting this rate. The other data do not have to be identically distributed
but should only have equivalent $L^1$ and $L^2$ moments.
For example, the minimax rate $s \log(ed/s)/N$ of recovery of a $s$-sparse
vector in $\mathbb{R}^d$ is achieved with exponentially large probability by a
median-of-means version of the LASSO when the noise has $q_0$ moments for some
$q_0>2$, the entries of the design matrix should have $C_0\log(ed)$ moments and
the dataset can be corrupted up to $C_1 s \log(ed/s)$ outliers.
| 0 | 0 | 1 | 1 | 0 | 0 |
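For readers unfamiliar with the median-of-means device underlying the abstract above, here is the basic block-means estimator. This is only the elementary building block; the paper aggregates regularized median-of-means tests in Le Cam's sense, which this sketch does not implement:

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median-of-means estimate of the mean of x: split the sample into
    n_blocks equal blocks, average within each block, and return the
    median of the block means.  A minority of corrupted blocks cannot
    move the median."""
    x = np.asarray(x, dtype=float)
    usable = (len(x) // n_blocks) * n_blocks  # drop the remainder
    blocks = x[:usable].reshape(n_blocks, -1)
    return float(np.median(blocks.mean(axis=1)))

data = np.concatenate([np.ones(99), [1e6]])  # one gross outlier
estimate = median_of_means(data, 10)  # 1.0: the outlier block is outvoted
```

The empirical mean of `data` is about 10001, while the median-of-means estimate stays at 1, which is the robustness property the risk bound in the abstract quantifies.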
On a Neumann-type series for modified Bessel functions of the first kind | In this paper, we are interested in a Neumann-type series for modified Bessel
functions of the first kind which arises in the study of Dunkl operators
associated with dihedral groups and as an instance of the Laguerre semigroup
constructed by Ben Said-Kobayashi-Orsted. We first revisit the particular case
corresponding to the group of square-preserving symmetries for which we give
two new and different proofs other than the existing ones. The first proof uses
the expansion of powers in a Neumann series of Bessel functions while the
second one is based on a quadratic transformation for the Gauss hypergeometric
function and opens the way to derive further expressions when the orders of the
underlying dihedral groups are powers of two. More generally, we give another
proof of the formula of De Bie et al. expressing this series as a $\Phi_2$-Horn
confluent hypergeometric function. In the course of the proof, we shed light
on the occurrence of multiple angles in their formula through elementary
symmetric functions, and get a new representation of Gegenbauer polynomials.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generalized Log-sine integrals and Bell polynomials | In this paper, we investigate the integral of $x^n\log^m(\sin(x))$ for
natural numbers $m$ and $n$. In doing so, we recover some well-known results
and remark on some relations to the log-sine integral
$\operatorname{Ls}_{n+m+1}^{(n)}(\theta)$. Later, we use properties of Bell
polynomials to find a closed expression for the derivative of the central
binomial and shifted central binomial coefficients in terms of polygamma
functions and harmonic numbers.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Modern Search for Wolf-Rayet Stars in the Magellanic Clouds. III. A Third Year of Discoveries | For the past three years we have been conducting a survey for WR stars in the
Large and Small Magellanic Clouds (LMC, SMC). Our previous work has resulted in
the discovery of a new type of WR star in the LMC, which we are calling WN3/O3.
These stars have the emission-line properties of a WN3 star (strong N V but no
N IV), plus the absorption-line properties of an O3 star (Balmer hydrogen plus
Pickering He II but no He I). Yet these stars are 15x fainter than an O3 V star
would be by itself, ruling out these being WN3+O3 binaries. Here we report the
discovery of two more members of this class, bringing the total number of these
objects to 10, 6.5% of the LMC's total WR population. The optical spectra of
nine of these WN3/O3s are virtually indistinguishable from each other, but one
of the newly found stars is significantly different, showing a lower excitation
emission and absorption spectrum (WN4/O4-ish). In addition, we have newly
classified three unusual Of-type stars, including one with a strong C III 4650
line, and two rapidly rotating "Oef" stars. We also "rediscovered" a low-mass
X-ray binary, RX J0513.9-6951, and demonstrate its spectral variability.
Finally, we discuss the spectra of ten low priority WR candidates that turned
out not to have He II emission. These include both a Be star and a B[e] star.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mathematical modeling of Zika disease in pregnant women and newborns with microcephaly in Brazil | We propose a new mathematical model for the spread of Zika virus. Special
attention is paid to the transmission of microcephaly. Numerical simulations
show the accuracy of the model with respect to the Zika outbreak that occurred
in Brazil.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Noninformative Prior on a Space of Distribution Functions | In a given problem, the Bayesian statistical paradigm requires the
specification of a prior distribution that quantifies relevant information
about the unknowns of main interest external to the data. In cases where little
such information is available, the problem under study may possess an
invariance under a transformation group that encodes a lack of information,
leading to a unique prior---this idea was explored at length by E.T. Jaynes.
Previous successful examples have included location-scale invariance under
linear transformation, multiplicative invariance of the rate at which events in
a counting process are observed, and the derivation of the Haldane prior for a
Bernoulli success probability. In this paper we show that this method can be
extended, by generalizing Jaynes, in two ways: (1) to yield families of
approximately invariant priors, and (2) to the infinite-dimensional setting,
yielding families of priors on spaces of distribution functions. Our results
can be used to describe conditions under which a particular Dirichlet Process
posterior arises from an optimal Bayesian analysis, in the sense that
invariances in the prior and likelihood lead to one and only one posterior
distribution.
| 0 | 0 | 1 | 1 | 0 | 0 |
Towards a realistic NNLIF model: Analysis and numerical solver for excitatory-inhibitory networks with delay and refractory periods | The Network of Noisy Leaky Integrate and Fire (NNLIF) model describes the
behavior of a neural network at mesoscopic level. It is one of the simplest
self-contained mean-field models considered for that purpose. Even so, to study
the mathematical properties of the model, some simplifications were necessary
(Cáceres-Carrillo-Perthame, 2011; Cáceres-Perthame, 2014;
Cáceres-Schneider, 2017), which disregard crucial phenomena. In this work we
deal with the general NNLIF model without simplifications. It involves a
network with two populations (excitatory and inhibitory), with transmission
delays between the neurons and where the neurons remain in a refractory state
for a certain time. We have studied the number of steady states in terms of the
model parameters, the long time behaviour via the entropy method and
Poincaré's inequality, blow-up phenomena, and the importance of transmission
delays between excitatory neurons to prevent blow-up and to give rise to
synchronous solutions. Besides analytical results, we have presented a
numerical solver for this model, based on high-order flux-splitting WENO
schemes and an explicit third order TVD Runge-Kutta method, in order to
describe the wide range of phenomena exhibited by the network: blow-up,
asynchronous/synchronous solutions and instability/stability of the steady
states; the solver also allows us to observe the time evolution of the firing
rates, refractory states and the probability distributions of the excitatory
and inhibitory populations.
| 0 | 0 | 1 | 0 | 0 | 0 |