title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0 or 1) | phy (int64, 0 or 1) | math (int64, 0 or 1) | stat (int64, 0 or 1) | quantitative biology (int64, 0 or 1) | quantitative finance (int64, 0 or 1)
---|---|---|---|---|---|---|---
Isometric copies of $l^\infty$ in Cesàro-Orlicz function spaces | We characterize Cesàro-Orlicz function spaces $Ces_{\varphi}$ containing
an order isomorphically isometric copy of $l^\infty$. We also discuss some
useful, applicable sufficient conditions for the existence of such a copy.
| 0 | 0 | 1 | 0 | 0 | 0 |
Modeling non-stationary extreme dependence with stationary max-stable processes and multidimensional scaling | Modeling the joint distribution of extreme weather events in multiple
locations is a challenging task with important applications. In this study, we
use max-stable models to study extreme daily precipitation events in
Switzerland. The non-stationarity of the spatial process at hand involves
important challenges, which are often dealt with by using a stationary model in
a so-called climate space with well-chosen covariates. Here, we instead choose
to warp the weather stations under study into a latent space of higher dimension
using multidimensional scaling (MDS). The advantage of this approach is its
improved flexibility to reproduce highly non-stationary phenomena, while
keeping a tractable stationary spatial model in the latent space. Two model
fitting approaches, which both use MDS, are presented and compared to a
classical approach that relies on composite likelihood maximization in a
climate space. Results suggest that the proposed methods better reproduce the
observed extremal coefficients and their complex spatial dependence.
| 0 | 0 | 0 | 1 | 0 | 0 |
MOEMS deformable mirror testing in cryo for future optical instrumentation | MOEMS Deformable Mirrors (DM) are key components for next generation
instruments with innovative adaptive optics systems, in existing telescopes and
in the future ELTs. These DMs must perform at room temperature as well as in
cryogenic and vacuum environments. Ideally, MOEMS-DMs must be designed to
operate in such environments. We present some major rules for designing and
operating DMs in cryo and vacuum. We chose to use interferometry for the full
characterization of these devices, including surface quality measurement in
static and dynamical modes, at ambient and in vacuum/cryo. Thanks to our
previous set-up developments, we placed a compact cryo-vacuum chamber designed
for reaching $10^{-6}$ mbar and 160 K, in front of our custom Michelson
interferometer, able to measure performances of the DM at actuator/segment
level as well as whole mirror level, with a lateral resolution of 2{\mu}m and a
sub-nanometric z-resolution. Using this interferometric bench, we tested the
Iris AO PTT111 DM: this unique and robust design uses an array of single
crystalline silicon hexagonal mirrors with a pitch of 606{\mu}m, able to move
in tip, tilt and piston with strokes from 5 to 7{\mu}m, and tilt angle in the
range of +/-5 mrad. They typically exhibit an open-loop flat surface figure as
good as <20 nm rms. A specific mount including electronic and opto-mechanical
interfaces has been designed to fit in the test chamber. Segment
deformation, mirror shaping, and open-loop operation are tested at room and
cryogenic temperatures, and the results are compared. The device was operated successfully
at 160K. An additional, mainly focus-like, 500 nm deformation is measured at
160K; we were able to recover the best flat in cryo by correcting the focus and
local tip-tilts on some segments. Tests on DM with different mirror thicknesses
(25{\mu}m and 50{\mu}m) and different coatings (silver and gold) are currently
under way.
| 0 | 1 | 0 | 0 | 0 | 0 |
Remote Sensing Image Scene Classification: Benchmark and State of the Art | Remote sensing image scene classification plays an important role in a wide
range of applications and hence has been receiving remarkable attention. During
the past years, significant efforts have been made to develop various datasets
or present a variety of approaches for scene classification from remote sensing
images. However, a systematic review of the literature concerning datasets and
methods for scene classification is still lacking. In addition, almost all
existing datasets have a number of limitations, including small numbers of
scene classes and images, a lack of image variation and diversity, and
saturation of accuracy. These limitations severely hinder the
development of new approaches, especially deep learning-based methods. This
paper first provides a comprehensive review of the recent progress. Then, we
propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly
available benchmark for REmote Sensing Image Scene Classification (RESISC),
created by Northwestern Polytechnical University (NWPU). This dataset contains
31,500 images, covering 45 scene classes with 700 images in each class. The
proposed NWPU-RESISC45 (i) is large-scale in both the number of scene classes
and the total number of images, (ii) exhibits large variations in translation, spatial resolution,
viewpoint, object pose, illumination, background, and occlusion, and (iii) has
high within-class diversity and between-class similarity. The creation of this
dataset will enable the community to develop and evaluate various data-driven
algorithms. Finally, several representative methods are evaluated using the
proposed dataset and the results are reported as a useful baseline for future
research.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Network of U.S. Mutual Fund Investments: Diversification, Similarity and Fragility throughout the Global Financial Crisis | Network theory has recently proved useful for quantifying many
properties of financial systems. The analysis of the structure of investment
portfolios is a major application, since their possible correlation and overlap
affect the actual risk diversification achieved by individual investors. We investigate
the bipartite network of US mutual fund portfolios and their assets. We follow
its evolution during the Global Financial Crisis and analyse the interplay
between diversification, as understood in classical portfolio theory, and
similarity of the investments of different funds. We show that, on average,
portfolios have become more diversified and less similar during the crisis.
However, we also find that large overlap is far more likely than expected from
models of random allocation of investments. This indicates the existence of
strong correlations between fund portfolio strategies. We introduce a
simplified model of the propagation of financial shocks, which we exploit to show
that a component of systemic risk originates from the similarity of portfolios. The
network remains vulnerable after the crisis because of this effect, despite the
increase in the diversification of portfolios. Our results indicate that
diversification may even increase systemic risk when funds diversify in the
same way. Diversification and similarity can play antagonistic roles and the
trade-off between the two should be taken into account to properly assess
systemic risk.
| 0 | 0 | 0 | 1 | 0 | 1 |
Removal of Salt and Pepper noise from Gray-Scale and Color Images: An Adaptive Approach | An efficient adaptive algorithm for the removal of salt-and-pepper noise from
gray-scale and color images is presented in this paper. In the proposed method,
a 3x3 window is first taken and its central pixel is considered
the processing pixel. If the processing pixel is uncorrupted,
it is left unchanged; if it is corrupted, then
the window size is increased according to the conditions given in the proposed
algorithm. Finally, the processing (central) pixel is replaced by
the mean, median, or trimmed value of the elements in the current window,
depending on the conditions of the algorithm. The proposed algorithm
efficiently removes noise at all densities with better Peak Signal to Noise
Ratio (PSNR) and Image Enhancement Factor (IEF). The proposed algorithm is
compared with different existing algorithms like MF, AMF, MDBUTMF, MDBPTGMF and
AWMF.
| 1 | 0 | 0 | 0 | 0 | 0 |
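The adaptive-window procedure in the abstract above lends itself to a compact illustration. Below is a minimal numpy sketch of that idea; the noise test (extreme gray levels), the maximum window size, and the fallback rule are illustrative assumptions, not the paper's exact conditions.

```python
import numpy as np

def adaptive_denoise(img, max_win=7):
    """Sketch: grow a window around each corrupted pixel and replace it
    with the trimmed median of its uncorrupted neighbors."""
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode="reflect")
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] not in (0, 255):       # uncorrupted: leave unchanged
                continue
            for w in range(3, max_win + 1, 2):  # grow 3x3 -> 5x5 -> ...
                half = w // 2
                win = padded[r + pad - half:r + pad + half + 1,
                             c + pad - half:c + pad + half + 1]
                good = win[(win != 0) & (win != 255)]  # trim noisy values
                if good.size:                   # clean neighbors found
                    out[r, c] = np.median(good)
                    break
            else:                               # all-noise window: use the mean
                out[r, c] = win.mean()
    return out
```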
On the distance and algorithms of strong product digraphs | The strong product is an efficient way to construct a larger digraph from
specific smaller digraphs. The large digraph constructed by the strong product
method contains the factor digraphs as subgraphs, and can retain many good
properties of the factor digraphs. The distance in digraphs is one of the most
basic structural parameters in graph theory, and it plays an important role in
analyzing the effectiveness of interconnection networks. In particular, it
provides a basis for measuring the transmission delay of networks. When the
topological structure of an interconnection network is represented by a
digraph, the average distance of the directed graph is a good measure of the
communication performance of the network. In this paper, we mainly investigate
the distance and average distance of strong product digraphs, giving a
formula for the distance of strong product digraphs and an algorithm for
computing their average distance.
| 1 | 0 | 0 | 0 | 0 | 0 |
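For intuition, the kind of distance formula alluded to above can be exercised directly. The sketch below assumes the classical strong-product identity $d_{G\boxtimes H}((u_1,v_1),(u_2,v_2)) = \max\{d_G(u_1,u_2),\, d_H(v_1,v_2)\}$; the paper's actual formula and algorithm may differ in details.

```python
from itertools import product

def strong_product_distance(dG, dH, s, t):
    """Distance in the strong product under the assumed max identity.
    dG, dH: dict-of-dict all-pairs distance tables of the factor digraphs."""
    (u1, v1), (u2, v2) = s, t
    return max(dG[u1][u2], dH[v1][v2])

def average_distance(dG, dH):
    """Average distance over ordered pairs of distinct product vertices."""
    nodes = list(product(dG, dH))
    total = sum(strong_product_distance(dG, dH, s, t)
                for s in nodes for t in nodes if s != t)
    return total / (len(nodes) * (len(nodes) - 1))
```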
Global weak solution to the viscous two-fluid model with finite energy | In this paper, we prove the existence of global weak solutions to the
compressible two-fluid Navier-Stokes equations in three dimensional space. The
pressure depends on two different variables from the continuity equations.
We develop a variable-reduction argument for the pressure law. This
yields the strong convergence of the densities and provides the existence
of global solutions in time, for the compressible two-fluid Navier-Stokes
equations, with large data in three dimensional space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Privacy Preserving and Collusion Resistant Energy Sharing | Energy has been increasingly generated or collected by different entities on
the power grid (e.g., universities, hospitals and households) via solar
panels, wind turbines or local generators in the past decade. With local
energy, such electricity consumers can be considered "microgrids" that can
simultaneously generate and consume energy. Some microgrids may have excess
energy that can be shared with other power consumers on the grid. To this end,
all the entities have to disclose their local private information (e.g., their
local demand, local supply and power quality data) to each other or to a
third party to find and implement the optimal energy sharing solution. However,
such a process is constrained by privacy concerns raised by the microgrids. In
this paper, we propose a privacy preserving scheme for all the microgrids which
can securely implement their energy sharing against both semi-honest and
colluding adversaries. The proposed approach includes two secure communication
protocols that can ensure quantified privacy leakage and handle collusions.
| 1 | 0 | 0 | 0 | 0 | 0 |
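As a flavor of what such protocols do, the sketch below aggregates private demands with standard additive secret sharing, so that only the total is revealed. This is a generic textbook construction, not the paper's actual scheme; the modulus and integer encoding are illustrative.

```python
import secrets

Q = 2**61 - 1  # illustrative public modulus; values are encoded mod Q

def share(value, n):
    """Split a private value into n additive shares summing to it mod Q."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def secure_total_demand(private_demands):
    """Each microgrid secret-shares its demand; parties publish only the
    sums of the shares they hold, revealing nothing but the aggregate."""
    n = len(private_demands)
    all_shares = [share(d, n) for d in private_demands]
    partials = [sum(all_shares[j][i] for j in range(n)) % Q for i in range(n)]
    return sum(partials) % Q

assert secure_total_demand([120, 75, 45]) == 240  # only the aggregate leaks
```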
Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes | Word embeddings use vectors to represent words such that the geometry between
vectors captures semantic relationships between the words. In this paper, we
develop a framework to demonstrate how the temporal dynamics of the embedding
can be leveraged to quantify changes in stereotypes and attitudes toward women
and ethnic minorities in the 20th and 21st centuries in the United States. We
integrate word embeddings trained on 100 years of text data with the U.S.
Census to show that changes in the embedding track closely with demographic and
occupation shifts over time. The embedding captures global social shifts --
e.g., the women's movement in the 1960s and Asian immigration into the U.S. --
and also illuminates how specific adjectives and occupations became more
closely associated with certain populations over time. Our framework for
temporal analysis of word embedding opens up a powerful new intersection
between machine learning and quantitative social science.
| 1 | 0 | 0 | 0 | 0 | 0 |
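A minimal version of such an embedding-based stereotype measurement is sketched below. It scores occupations by their relative cosine similarity to two gendered word groups; the word lists are illustrative, and the paper's exact metric (applied per decade's embedding) may differ.

```python
import numpy as np

def bias_scores(emb, occupations, group_a, group_b):
    """emb: dict mapping word -> vector. Returns, per occupation, the
    difference in cosine similarity to the two group centroids."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    a = np.mean([emb[w] for w in group_a], axis=0)
    b = np.mean([emb[w] for w in group_b], axis=0)
    return {occ: cos(emb[occ], a) - cos(emb[occ], b) for occ in occupations}

# Tracking these scores across embeddings trained on each decade's text
# traces how occupation-gender associations shift over time, e.g.:
# bias_scores(emb_1950, ["nurse", "engineer"], ["she", "her"], ["he", "him"])
```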
Infinite Mixture of Inverted Dirichlet Distributions | In this work, we develop a novel Bayesian estimation method for the Dirichlet
process (DP) mixture of the inverted Dirichlet distributions, which has been
shown to be very flexible for modeling vectors with positive elements. The
recently proposed extended variational inference (EVI) framework is adopted to
derive an analytically tractable solution. The convergence of the proposed
algorithm is theoretically guaranteed by introducing a single lower-bound
approximation to the original objective function in the VI framework. In
principle, the proposed model can be viewed as an infinite inverted Dirichlet
mixture model (InIDMM) that allows the automatic determination of the number of
mixture components from data. Therefore, the problem of pre-determining the
optimal number of mixing components has been overcome. Moreover, the problems
of over-fitting and under-fitting are avoided by the Bayesian estimation
approach. Compared with several recently proposed DP-related methods, the good
performance and effectiveness of the proposed method are demonstrated
on both synthesized and real data.
| 0 | 0 | 0 | 1 | 0 | 0 |
$Ψ$ec: A Local Spectral Exterior Calculus | We introduce $\Psi$ec, a local spectral exterior calculus that provides a
discretization of Cartan's exterior calculus of differential forms using
wavelet functions. Our construction consists of differential form wavelets with
flexible directional localization, between fully isotropic and curvelet- and
ridgelet-like, that provide tight frames for the spaces of $k$-forms in
$\mathbb{R}^2$ and $\mathbb{R}^3$. By construction, these wavelets satisfy the
de Rham co-chain complex, the Hodge decomposition, and that the integral of a
$(k+1)$-form is a $k$-form. They also enforce Stokes' theorem for differential
forms, and we show that with a finite number of wavelet levels it is most
efficiently approximated using anisotropic curvelet- or ridgelet-like forms.
Our construction is based on the intrinsic geometric properties of the exterior
calculus in the Fourier domain. To reveal these, we extend existing results on
the Fourier transform of differential forms to a frequency domain description
of the exterior calculus, including, for example, a Parseval theorem for forms
and a description of the symbols of all important operators.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ultraslow fluctuations in the pseudogap states of HgBa$_{2}$CaCu$_{2}$O$_{6+d}$ | We report the transverse relaxation rates 1/$T_2$'s of the $^{63}$Cu nuclear
spin-echo envelope for double-layer high-$T_c$ cuprate superconductors
HgBa$_{2}$CaCu$_{2}$O$_{6+d}$ from underdoped to overdoped. The relaxation rate
1/$T_{2L}$ of the exponential function (Lorentzian component) shows a peak at
220$-$240 K in the underdoped ($T_c$ = 103 K) and the optimally doped ($T_c$ =
127 K) samples, but no peak in the overdoped ($T_c$ = 93 K) sample. The
enhancement in 1/$T_{2L}$ suggests the development of zero-frequency components
of local field fluctuations. Ultraslow fluctuations are hidden in the pseudogap
states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-Scale Wavelet Domain Residual Learning for Limited-Angle CT Reconstruction | Limited-angle computed tomography (CT) is often used in clinical applications
such as C-arm CT for interventional imaging. However, CT images from limited
angles suffer from heavy artifacts due to incomplete projection data. Existing
iterative methods require extensive computation but cannot deliver
satisfactory results. Based on the observation that the artifacts from limited
angles have some directional property and are globally distributed, we propose
a novel multi-scale wavelet domain residual learning architecture, which
compensates for the artifacts. Experiments have shown that the proposed method
effectively eliminates artifacts, thereby preserving edge and global structures
of the image.
| 1 | 0 | 0 | 0 | 0 | 0 |
Helium-like atoms. The Green's function approach to the Fock expansion calculations | The renewed Green's function approach to calculating the angular Fock
coefficients $\psi_{k,p}(\alpha,\theta)$, is presented. The final formulas are
simplified and specified to be applicable for analytical as well as numerical
calculations. The Green's function formulas with the hyperspherical angles
$\theta=0,\pi$ (arbitrary $\alpha$) or $\alpha=0,\pi$ (arbitrary $\theta$) are
indicated as corresponding to the angular Fock coefficients possessing physical
meaning. The most interesting case of $\theta=0$ corresponding to a collinear
arrangement of the particles is studied in detail. It is emphasized that this
case represents the generalization of the specific cases of the
electron-nucleus ($\alpha=0$) and electron-electron ($\alpha=\pi/2$)
coalescences. It is shown that the Green's function method for $\theta=0$
enables us to calculate any component/subcomponent of the angular Fock
coefficient in the form of a single series representation with arbitrary angle
$\theta$. Those cases where the Green's function approach cannot be applied
are thoroughly studied, and the corresponding solutions are found.
| 0 | 1 | 0 | 0 | 0 | 0 |
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation | This paper proposes a new algorithm for controlling classification results by
generating a small additive perturbation without changing the classifier
network. Our work is inspired by existing works generating adversarial
perturbation that worsens classification performance. In contrast to the
existing methods, our work aims to generate perturbations that can enhance
overall classification performance. To solve this performance enhancement
problem, we propose a novel perturbation generation network (PGN) inspired by
the adversarial learning strategy. In our setting, the information in a large
external dataset is summarized by a small additive perturbation, which helps to
improve the performance of the classifier trained with the target dataset. In
addition to this performance enhancement problem, we show that the proposed PGN
can be adopted to solve the classical adversarial problem without utilizing the
information on the target classifier. The mentioned characteristics of our
method are verified through extensive experiments on publicly available visual
datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Secrecy Outage Analysis for Downlink Transmissions in the Presence of Randomly Located Eavesdroppers | We analyze the secrecy outage probability in the downlink for wireless
networks with spatially (Poisson) distributed eavesdroppers (EDs) under the
assumption that the base station employs transmit antenna selection (TAS) to
enhance secrecy performance. We compare the cases where the receiving user
equipment (UE) operates in half-duplex (HD) mode and full-duplex (FD) mode. In
the latter case, the UE simultaneously receives the intended downlink message
and transmits a jamming signal to strengthen secrecy. We investigate two models
of (semi)passive eavesdropping: (1) EDs act independently and (2) EDs collude
to intercept the transmitted message. For both of these models, we obtain
expressions for the secrecy outage probability in the downlink for HD and FD UE
operation. The expressions for HD systems have very accurate approximate or
exact forms in terms of elementary and/or special functions for all path loss
exponents. Those related to the FD systems have exact integral forms for
general path loss exponents, while exact closed forms are given for specific
exponents. A closed-form approximation is also derived for the FD case with
colluding EDs. The resulting analysis shows that the reduction in the secrecy
outage probability is logarithmic in the number of antennas used for TAS and
identifies conditions under which HD operation should be used instead of FD
jamming at the UE. These performance trends and exact relations between system
parameters can be used to develop adaptive power allocation and duplex
operation methods in practice. Examples of such techniques are alluded to
herein.
| 1 | 0 | 1 | 0 | 0 | 0 |
Defining Equitable Geographic Districts in Road Networks via Stable Matching | We introduce a novel method for defining geographic districts in road
networks using stable matching. In this approach, each geographic district is
defined in terms of a center, which identifies a location of interest, such as
a post office or polling place, and all other network vertices must be labeled
with the center to which they are associated. We focus on defining geographic
districts that are equitable, in that every district has the same number of
vertices and the assignment is stable in terms of geographic distance. That is,
there is no unassigned vertex-center pair such that both would prefer each
other over their current assignments. We solve this problem using a version of
the classic stable matching problem, called symmetric stable matching, in which
the preferences of the elements in both sets obey a certain symmetry. In our
case, we study a graph-based version of stable matching in which nodes are
stably matched to a subset of nodes denoted as centers, prioritized by their
shortest-path distances, so that each center is apportioned a certain number of
nodes. We show that, for a planar graph or road network with $n$ nodes and $k$
centers, the problem can be solved in $O(n\sqrt{n}\log n)$ time, which improves
upon the $O(nk)$ runtime of using the classic Gale-Shapley stable matching
algorithm when $k$ is large. Finally, we provide experimental results on road
networks for these algorithms and a heuristic algorithm that performs better
than the Gale-Shapley algorithm for any range of values of $k$.
| 1 | 0 | 0 | 0 | 0 | 0 |
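When both sides rank pairs by the same distance, the stable matching can be built greedily by scanning vertex-center pairs in increasing distance order. The sketch below illustrates that idea on precomputed shortest-path tables; it is a simple baseline in the spirit of the problem, not the paper's improved $O(n\sqrt{n}\log n)$ algorithm.

```python
import heapq

def symmetric_stable_assignment(dist, quota):
    """dist[c][v]: shortest-path distance from center c to vertex v.
    Each center is apportioned `quota` vertices."""
    heap = [(d, c, v) for c, by_v in dist.items() for v, d in by_v.items()]
    heapq.heapify(heap)
    assignment, load = {}, {c: 0 for c in dist}
    while heap:
        d, c, v = heapq.heappop(heap)
        if v not in assignment and load[c] < quota:
            assignment[v] = c   # closest still-available pair is stable
            load[c] += 1
    return assignment
```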
End-to-end Lung Nodule Detection in Computed Tomography | Computer-aided diagnostic (CAD) systems are crucial for modern medical
imaging. But almost all CAD systems operate on reconstructed images, which were
optimized for radiologists. Computer vision can capture features that are subtle
to human observers, so it is desirable to design a CAD system operating on the
raw data. In this paper, we propose a deep-neural-network-based detection
system for lung nodule detection in computed tomography (CT). A
primal-dual-type deep reconstruction network was applied first to convert the
raw data to the image space, followed by a 3-dimensional convolutional neural
network (3D-CNN) for the nodule detection. For efficient network training, the
deep reconstruction network and the CNN detector were trained sequentially
first, followed by one epoch of end-to-end fine-tuning. The method was
evaluated on the Lung Image Database Consortium image collection (LIDC-IDRI)
with simulated forward projections. With 144 multi-slice fanbeam projections,
the proposed end-to-end detector could achieve comparable sensitivity with the
reference detector, which was trained and applied on the fully-sampled image
data. It also demonstrated superior detection performance compared to detectors
trained on the reconstructed images. The proposed method is general and could
be expanded to most detection tasks in medical imaging.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fairness with Dynamics | It has recently been shown that if feedback effects of decisions are ignored,
then imposing fairness constraints such as demographic parity or equality of
opportunity can actually exacerbate unfairness. We propose to address this
challenge by modeling feedback effects as the dynamics of a Markov decision
process (MDP). First, we define analogs of fairness properties that have
been proposed for supervised learning. Second, we propose algorithms for
learning fair decision-making policies for MDPs. We also explore extensions to
reinforcement learning, where parts of the dynamical system are unknown and
must be learned without violating fairness. Finally, we demonstrate the need to
account for dynamical effects using simulations on a loan applicant MDP.
| 1 | 0 | 0 | 1 | 0 | 0 |
Exploring extra dimensions through inflationary tensor modes | Predictions of inflationary schemes can be influenced by the presence of
extra dimensions. This could be of particular relevance for the spectrum of
gravitational waves in models where the extra dimensions provide a brane-world
solution to the hierarchy problem. Apart from models of large as well as
exponentially warped extra dimensions, we analyze the size of tensor modes in
the Linear Dilaton scheme recently revived in the discussion of the "clockwork
mechanism". The results are model dependent, with significantly enhanced tensor
modes on one side and a suppression on the other. In some cases we are led to a
scheme of "remote inflation", where the expansion is driven by energies at a
hidden brane. In all cases where tensor modes are enhanced, the requirement of
perturbativity of gravity leads to a stringent upper limit on the allowed
Hubble rate during inflation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Elements of $C^*$-algebras Attaining Their Norm in a Finite-Dimensional Representation | We characterize the class of RFD $C^*$-algebras as those containing a dense
subset of elements that attain their norm under a finite-dimensional
representation. We show further that this subset is the whole space precisely
when every irreducible representation of the $C^*$-algebra is
finite-dimensional, which is equivalent to the $C^*$-algebra having no simple
infinite-dimensional AF subquotient. We apply techniques from this proof to
show the existence of elements in more general classes of $C^*$-algebras whose
norms in finite-dimensional representations fit certain prescribed properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
A probability inequality for sums of independent Banach space valued random variables | Let $(\mathbf{B}, \|\cdot\|)$ be a real separable Banach space. Let
$\varphi(\cdot)$ and $\psi(\cdot)$ be two continuous and increasing functions
defined on $[0, \infty)$ such that $\varphi(0) = \psi(0) = 0$, $\lim_{t
\rightarrow \infty} \varphi(t) = \infty$, and
$\frac{\psi(\cdot)}{\varphi(\cdot)}$ is a nondecreasing function on $[0,
\infty)$. Let $\{V_{n};~n \geq 1 \}$ be a sequence of independent and symmetric
{\bf B}-valued random variables. In this note, we establish a probability
inequality for sums of independent {\bf B}-valued random variables by showing
that for every $n \geq 1$ and all $t \geq 0$, \[
\mathbb{P}\left(\left\|\sum_{i=1}^{n} V_{i} \right\| > t b_{n} \right) \leq 4
\mathbb{P} \left(\left\|\sum_{i=1}^{n} \varphi\left(\psi^{-1}(\|V_{i}\|)\right)
\frac{V_{i}}{\|V_{i}\|} \right\| > t a_{n} \right) +
\sum_{i=1}^{n}\mathbb{P}\left(\|V_{i}\| > b_{n} \right), \] where $a_{n} =
\varphi(n)$ and $b_{n} = \psi(n)$, $n \geq 1$. As an application of this
inequality, we establish what we call a comparison theorem for the weak law of
large numbers for independent and identically distributed ${\bf B}$-valued
random variables.
| 0 | 0 | 1 | 0 | 0 | 0 |
Including Uncertainty when Learning from Human Corrections | It is difficult for humans to efficiently teach robots how to correctly
perform a task. One intuitive solution is for the robot to iteratively learn
the human's preferences from corrections, where the human improves the robot's
current behavior at each iteration. When learning from corrections, we argue
that while the robot should estimate the most likely human preferences, it
should also know what it does not know, and integrate this uncertainty as it
makes decisions. We advance the state-of-the-art by introducing a Kalman filter
for learning from corrections: this approach obtains the uncertainty of the
estimated human preferences. Next, we demonstrate how the estimated uncertainty
can be leveraged for active learning and risk-sensitive deployment. Our results
indicate that obtaining and leveraging uncertainty leads to faster learning
from human corrections.
| 1 | 0 | 0 | 0 | 0 | 0 |
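The core of the approach described above is a standard linear-Gaussian measurement update, sketched below. The observation matrix J and the correction-noise covariance R stand in for the paper's specific model of how corrections relate to preferences; they are assumptions here.

```python
import numpy as np

def kalman_correction_update(theta, P, J, delta, R):
    """One Kalman update of the preference estimate from a human correction.
    theta: preference estimate (n,); P: its covariance (n, n);
    J: assumed observation matrix; delta: observed correction;
    R: assumed correction-noise covariance."""
    S = J @ P @ J.T + R                    # innovation covariance
    K = P @ J.T @ np.linalg.inv(S)         # Kalman gain
    theta = theta + K @ (delta - J @ theta)
    P = (np.eye(len(theta)) - K @ J) @ P   # shrinks as corrections accrue
    return theta, P  # P is the uncertainty usable for active learning
```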
Gated-Attention Architectures for Task-Oriented Language Grounding | To perform tasks specified by natural language instructions, autonomous
agents need to extract semantically meaningful representations of language and
map them to visual elements and actions in the environment. This problem is
called task-oriented language grounding. We propose an end-to-end trainable
neural architecture for task-oriented language grounding in 3D environments
which assumes no prior linguistic or perceptual knowledge and requires only raw
pixels from the environment and the natural language instruction as input. The
proposed model combines the image and text representations using a
Gated-Attention mechanism and learns a policy to execute the natural language
instruction using standard reinforcement and imitation learning methods. We
show the effectiveness of the proposed model on unseen instructions as well as
unseen maps, both quantitatively and qualitatively. We also introduce a novel
environment based on a 3D game engine to simulate the challenges of
task-oriented language grounding over a rich set of instructions and
environment states.
| 1 | 0 | 0 | 0 | 0 | 0 |
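The fusion step reads naturally as a few lines of array code. The sketch below shows a sigmoid-gated, channel-wise (Hadamard) product consistent with the description above; the projection W is a placeholder for the learned layer.

```python
import numpy as np

def gated_attention(conv_features, text_embedding, W):
    """conv_features: (C, H, W_img) image representation;
    text_embedding: (d,) instruction embedding; W: (C, d) learned weights."""
    gate = 1.0 / (1.0 + np.exp(-(W @ text_embedding)))  # per-channel gate in (0, 1)
    return conv_features * gate[:, None, None]          # Hadamard product
```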
Automatic classification of trees using a UAV onboard camera and deep learning | Automatic classification of trees using remotely sensed data has been a dream
of many scientists and land use managers. Recently, unmanned aerial vehicles
(UAVs) have been expected to be an easy-to-use, cost-effective tool for remote
sensing of forests, and deep learning has attracted attention for its machine
vision capabilities. In this study, using a commercially available UAV
and a publicly available package for deep learning, we constructed a machine
vision system for the automatic classification of trees. In our method, we
segmented a UAV photograph of a forest into individual tree crowns and
carried out object-based deep learning. As a result, the system was able to
classify 7 tree types at 89.0% accuracy. This performance is notable because we
only used basic RGB images from a standard UAV. In contrast, most previous
studies used expensive hardware such as multispectral imagers to improve the
performance. This result means that our method has the potential to classify
individual trees in a cost-effective manner, and can be a useful tool for many
forest researchers and managers.
| 0 | 0 | 0 | 1 | 0 | 0 |
Assortative Mixing Equilibria in Social Network Games | It is known that individuals in social networks tend to exhibit homophily
(a.k.a. assortative mixing) in their social ties, which implies that they
prefer bonding with others of their own kind. But what are the reasons for this
phenomenon? Is it that such relations are more convenient and easier to
maintain? Or are there also some more tangible benefits to be gained from this
collective behaviour?
The current work takes a game-theoretic perspective on this phenomenon, and
studies the conditions under which different assortative mixing strategies lead
to equilibrium in an evolving social network. We focus on a biased preferential
attachment model where the strategy of each group (e.g., political or social
minority) determines the level of bias of its members toward other group
members and non-members. Our first result is that if the utility function that
the group attempts to maximize is the degree centrality of the group,
interpreted as the sum of degrees of the group members in the network, then the
only strategy achieving Nash equilibrium is a perfect homophily, which implies
that cooperation with other groups is harmful to this utility function. A
second, and perhaps more surprising, result is that if a reward for inter-group
cooperation is added to the utility function (e.g., externally enforced by an
authority as a regulation), then there are only two possible equilibria,
namely, perfect homophily or perfect heterophily, and it is possible to
characterize their feasibility spaces. Interestingly, these results hold
regardless of the minority-majority ratio in the population.
We believe that these results, as well as the game-theoretic perspective
presented herein, may contribute to a better understanding of the forces that
shape the groups and communities of our society.
| 1 | 1 | 0 | 0 | 0 | 0 |
Texture segmentation with Fully Convolutional Networks | In the last decade, deep learning has contributed to advances in a wide range
of computer vision tasks, including texture analysis. This paper explores a new
approach for texture segmentation using deep convolutional neural networks,
sharing important ideas with classic filter bank based texture segmentation
methods. Several methods are developed to train Fully Convolutional Networks to
segment textures in various applications. We show in particular that these
networks can learn to recognize and segment a type of texture, e.g. wood and
grass, from texture recognition datasets (with no segmentation ground truth at
training time). We demonstrate that Fully Convolutional Networks can learn from
repetitive patterns to segment a particular texture from a single image, or even
from part of an image. We take advantage of these findings to develop a method
that is evaluated in a series of supervised and unsupervised experiments and
improves the state of the art on the Prague texture segmentation datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Beam and detector of the NA62 experiment at CERN | NA62 is a fixed-target experiment at the CERN SPS dedicated to measurements
of rare kaon decays. Such measurements, like the branching fraction of the
$K^{+} \rightarrow \pi^{+} \nu \bar\nu$ decay, have the potential to bring
significant insights into new physics processes when comparison is made with
precise theoretical predictions. For this purpose, innovative techniques have
been developed, in particular, in the domain of low-mass tracking devices.
Detector construction spanned several years from 2009 to 2014. The
collaboration started detector commissioning in 2014 and will collect data
until the end of 2018. The beam line and detector components are described
together with their early performance obtained from 2014 and 2015 data.
| 0 | 1 | 0 | 0 | 0 | 0 |
A dequantized metaplectic knot invariant | Let $K\subset S^3$ be a knot, $X:= S^3\setminus K$ its complement, and
$\mathbb{T}$ the circle group identified with $\mathbb{R}/\mathbb{Z}$. To any
oriented long knot diagram of $K$, we associate a quadratic polynomial in
variables bijectively associated with the bridges of the diagram such that,
when the variables projected to $\mathbb{T}$ satisfy the linear equations
characterizing the first homology group $H_1(\tilde{X}_2)$ of the double cyclic
covering of $X$, the polynomial projects down to a well defined
$\mathbb{T}$-valued function on $T^1(\tilde{X}_2,\mathbb{T})$ (the dual of the
torsion part $T_1$ of $H_1$). This function is sensitive to knot chirality, for
example, it seems to confirm chirality of the knot $10_{71}$. It also
distinguishes the knots $7_4$ and $9_2$ known to have identical Alexander
polynomials and the knots $9_2$ and K11n13 known to have identical Jones
polynomials but does not distinguish $7_4$ and K11n13.
| 0 | 0 | 1 | 0 | 0 | 0 |
Persian Wordnet Construction using Supervised Learning | This paper presents an automated supervised method for Persian wordnet
construction. Using a Persian corpus and a bi-lingual dictionary, the initial
links between Persian words and Princeton WordNet synsets have been generated.
These links are later discriminated as correct or incorrect by employing
seven features in a trained classification system. The whole method is
essentially a classification system trained on a training set that uses FarsNet
as the source of correct instances. State-of-the-art results on the automatically
derived Persian wordnet are achieved. The resulting wordnet, with a precision of
91.18%, includes more than 16,000 words and 22,000 synsets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Anyon condensation and its applications | Bose condensation is central to our understanding of quantum phases of
matter. Here we review Bose condensation in topologically ordered phases (also
called topological symmetry breaking), where the condensing bosons have
non-trivial mutual statistics with other quasiparticles in the system. We give
a non-technical overview of the relationship between the phases before and
after condensation, drawing parallels with more familiar symmetry-breaking
transitions. We then review two important applications of this phenomenon.
First, we describe the equivalence between such condensation transitions and
pairs of phases with gappable boundaries, as well as examples where multiple
types of gapped boundary between the same two phases exist. Second, we discuss
how such transitions can lead to global symmetries which exchange or permute
anyon types. Finally we discuss the nature of the critical point, which can be
mapped to a conventional phase transition in some -- but not all -- cases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Group-velocity-locked vector soliton molecules in a birefringence-enhanced fiber laser | The physics of multi-soliton complexes has enriched the life of
dissipative solitons in fiber lasers. By developing a birefringence-enhanced
fiber laser, we report the first experimental observation of
group-velocity-locked vector soliton (GVLVS) molecules. The
birefringence-enhanced fiber laser facilitates the generation of GVLVSs, where
the two orthogonally polarized components are coupled together to form a
multi-soliton complex. Moreover, the interaction of repulsive and attractive
forces between multiple pulses binds the particle-like GVLVSs together in the
time domain to further form compound multi-soliton complexes, namely GVLVS
molecules. By adopting the polarization-resolved measurement, we show that the
two orthogonally polarized components of the GVLVS molecules are both soliton
molecules supported by the strongly modulated spectral fringes and the
double-humped intensity profiles. Additionally, GVLVS molecules with various
soliton separations are also observed by adjusting the pump power and the
polarization controller.
| 0 | 1 | 0 | 0 | 0 | 0 |
Survey of Visual Question Answering: Datasets and Techniques | Visual question answering (or VQA) is a new and exciting problem that
combines natural language processing and computer vision techniques. We present
a survey of the various datasets and models that have been used to tackle this
task. The first part of the survey details the various datasets for VQA and
compares them along some common factors. The second part of this survey details
the different approaches for VQA, classified into four types: non-deep learning
models, deep learning models without attention, deep learning models with
attention, and other models which do not fit into the first three. Finally, we
compare the performances of these approaches and provide some directions for
future work.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters | With the rise of end-to-end learning through deep learning, person detectors
and re-identification (ReID) models have recently become very strong.
Multi-camera multi-target (MCMT) tracking has not fully gone through this
transformation yet. We intend to take another step in this direction by
presenting a theoretically principled way of integrating ReID with tracking
formulated as an optimal Bayes filter. This conveniently side-steps the need
for data-association and opens up a direct path from full images to the core of
the tracker. While the results are still sub-par, we believe that this new,
tight integration opens many interesting research opportunities and leads the
way towards full end-to-end tracking from raw pixels.
| 1 | 0 | 0 | 0 | 0 | 0 |
Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline | From medical charts to national census, healthcare has traditionally operated
under a paper-based paradigm. However, the past decade has marked a long and
arduous transformation bringing healthcare into the digital age. Ranging from
electronic health records, to digitized imaging and laboratory reports, to
public health datasets, today, healthcare now generates an incredible amount of
digital information. Such a wealth of data presents an exciting opportunity for
integrated machine learning solutions to address problems across multiple
facets of healthcare practice and administration. Unfortunately, the ability to
derive accurate and informative insights requires more than the ability to
execute machine learning models. Rather, a deeper understanding of the data on
which the models are run is imperative for their success. While a significant
effort has been undertaken to develop models able to process the volume of data
obtained during the analysis of millions of digitized patient records, it is
important to remember that volume represents only one aspect of the data. In
fact, drawing on data from an increasingly diverse set of sources, healthcare
data presents an incredibly complex set of attributes that must be accounted
for throughout the machine learning pipeline. This chapter focuses on
highlighting such challenges, and is broken down into three distinct
components, each representing a phase of the pipeline. We begin with attributes
of the data accounted for during preprocessing, then move to considerations
during model building, and end with challenges to the interpretation of model
output. For each component, we present a discussion around data as it relates
to the healthcare domain and offer insight into the challenges each may impose
on the efficiency of machine learning techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
Redundant Perception and State Estimation for Reliable Autonomous Racing | In autonomous racing, vehicles operate close to the limits of handling and a
sensor failure can have critical consequences. To limit the impact of such
failures, this paper presents the redundant perception and state estimation
approaches developed for an autonomous race car. Redundancy in perception is
achieved by estimating the color and position of the track delimiting objects
using two sensor modalities independently. Specifically, learning-based
approaches are used to generate color and pose estimates, from LiDAR and camera
data respectively. The redundant perception inputs are fused by a particle
filter based SLAM algorithm that operates in real-time. Velocity is estimated
using slip dynamics, with reliability being ensured through a probabilistic
failure detection algorithm. The sub-modules are extensively evaluated in
real-world racing conditions using the autonomous race car "gotthard
driverless", achieving lateral accelerations up to 1.7G and a top speed of
90km/h.
| 1 | 0 | 0 | 0 | 0 | 0 |
Experimental study of extrinsic spin Hall effect in CuPt alloy | We have experimentally studied the effects on the spin Hall angle due to
systematic addition of Pt into the light metal Cu. We perform spin torque
ferromagnetic resonance measurements on Py/CuPt bilayer and find that as the Pt
concentration increases, the spin Hall angle of the CuPt alloy increases. Moreover,
only 28% Pt in the CuPt alloy gives rise to a spin Hall angle close to that of
Pt. We further extract the spin Hall resistivity of the CuPt alloy for different Pt
concentrations and find that the contribution of skew scattering is larger for
lower Pt concentrations, while the side-jump contribution is larger for higher
Pt concentrations. From a technological perspective, since the CuPt alloy can
sustain high processing temperatures and Cu is the most common metallization
element in the Si platform, it would be easier to integrate the CuPt alloy
based spintronic devices into existing Si fabrication technology.
| 0 | 1 | 0 | 0 | 0 | 0 |
NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm | Deep neural networks (DNNs) have begun to have a pervasive impact on various
applications of machine learning. However, the problem of finding an optimal
DNN architecture for large applications is challenging. Common approaches go
for deeper and larger DNN architectures but may incur substantial redundancy.
To address these problems, we introduce a network growth algorithm that
complements network pruning to learn both weights and compact DNN architectures
during training. We propose a DNN synthesis tool (NeST) that combines both
methods to automate the generation of compact and accurate DNNs. NeST starts
with a randomly initialized sparse network called the seed architecture. It
iteratively tunes the architecture with gradient-based growth and
magnitude-based pruning of neurons and connections. Our experimental results
show that NeST yields accurate, yet very compact DNNs, with a wide range of
seed architecture selection. For the LeNet-300-100 (LeNet-5) architecture, we
reduce network parameters by 70.2x (74.3x) and floating-point operations
(FLOPs) by 79.4x (43.7x). For the AlexNet and VGG-16 architectures, we reduce
network parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively.
NeST's grow-and-prune paradigm delivers significant additional parameter and
FLOPs reduction relative to pruning-only methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
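The pruning half of the grow-and-prune loop is easy to illustrate. Below is a magnitude-based pruning sketch; the fraction-based threshold is an illustrative policy (assuming 0 < fraction < 1), and a full implementation would also keep the mask applied during subsequent training.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of connections."""
    k = int(fraction * weights.size)            # number of weights to drop
    threshold = np.partition(np.abs(weights).ravel(), k)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask  # reuse the mask to keep pruned weights at zero
```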
Resonant Drag Instabilities in protoplanetary disks: the streaming instability and new, faster-growing instabilities | We identify and study a number of new, rapidly growing instabilities of dust
grains in protoplanetary disks, which may be important for planetesimal
formation. The study is based on the recognition that dust-gas mixtures are
generically unstable to a Resonant Drag Instability (RDI), whenever the gas,
absent dust, supports undamped linear modes. We show that the "streaming
instability" is an RDI associated with epicyclic oscillations; this provides
simple interpretations for its mechanisms and accurate analytic expressions for
its growth rates and fastest-growing wavelengths. We extend this analysis to
more general dust streaming motions and other waves, including buoyancy and
magnetohydrodynamic oscillations, finding various new instabilities. Most
importantly, we identify the disk "settling instability," which occurs as dust
settles vertically into the midplane of a rotating disk. For small grains, this
instability grows many orders of magnitude faster than the standard streaming
instability, with a growth rate that is independent of grain size. Growth
timescales for realistic dust-to-gas ratios are comparable to the disk orbital
period, and the characteristic wavelengths are more than an order of magnitude
larger than the streaming instability (allowing the instability to concentrate
larger masses). This suggests that in the process of settling, dust will band
into rings then filaments or clumps, potentially seeding dust traps,
high-metallicity regions that in turn seed the streaming instability, or even
overdensities that coagulate or directly collapse to planetesimals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonparametric Neural Networks | Automatically determining the optimal size of a neural network for a given
task without prior information currently requires an expensive global search
and training many networks from scratch. In this paper, we address the problem
of automatically finding a good network size during a single training cycle. We
introduce *nonparametric neural networks*, a non-probabilistic framework for
conducting optimization over all possible network sizes and prove its soundness
when network growth is limited via an L_p penalty. We train networks under this
framework by continuously adding new units while eliminating redundant units
via an L_2 penalty. We employ a novel optimization algorithm, which we term
*adaptive radial-angular gradient descent* or *AdaRad*, and obtain promising
results.
| 1 | 0 | 0 | 0 | 0 | 0 |
FastTrack: Minimizing Stalls for CDN-based Over-the-top Video Streaming Systems | Traffic for internet video streaming has been rapidly increasing and is
expected to increase further with higher-definition videos and IoT
applications, such as 360-degree videos and augmented/virtual reality
applications. While efficient management of heterogeneous cloud resources to
optimize the quality of experience is important, existing work in this problem
space has often left out important factors. In this paper, we present a model
describing a representative present-day system architecture for video streaming
applications, typically composed of a centralized origin server and several CDN
sites. Our model comprehensively considers the following factors: limited
caching spaces at the CDN sites, allocation of CDN for a video request, choice
of different ports from the CDN, and the central storage and bandwidth
allocation. With the model, we focus on minimizing a performance metric, stall
duration tail probability (SDTP), and present a novel, yet efficient, algorithm
to solve the formulated optimization problem. The theoretical bounds with
respect to the SDTP metric are also analyzed and presented. Our extensive
simulation results demonstrate that the proposed algorithms can significantly
improve the SDTP metric, compared to the baseline strategies. Small-scale video
streaming system implementation in a real cloud environment further validates
our results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Powerful statistical inference for nested data using sufficient summary statistics | Hierarchically-organized data arise naturally in many psychology and
neuroscience studies. As the standard assumption of independent and identically
distributed samples does not hold for such data, two important problems are to
accurately estimate group-level effect sizes, and to obtain powerful
statistical tests against group-level null hypotheses. A common approach is to
summarize subject-level data by a single quantity per subject, which is often
the mean or the difference between class means, and treat these as samples in a
group-level t-test. This 'naive' approach is, however, suboptimal in terms of
statistical power, as it ignores information about the intra-subject variance.
To address this issue, we review several approaches to deal with nested data,
with a focus on methods that are easy to implement. With what we call the
sufficient-summary-statistic approach, we highlight a computationally efficient
technique that can improve statistical power by taking into account
within-subject variances, and we provide step-by-step instructions on how to
apply this approach to a number of frequently-used measures of effect size. The
properties of the reviewed approaches and the potential benefits over a
group-level t-test are quantitatively assessed on simulated data and
demonstrated on EEG data from a simulated-driving experiment.
| 0 | 0 | 1 | 1 | 0 | 0 |
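The sufficient-summary-statistic idea above can be illustrated with a precision-weighted (inverse-variance) group-level test, shown below. Exact weights and degrees-of-freedom conventions vary with the effect-size measure, so treat this as a fixed-effects sketch rather than the paper's recipe.

```python
import numpy as np
from scipy import stats

def precision_weighted_group_test(effects, variances):
    """effects: per-subject effect estimates; variances: their
    within-subject sampling variances (the extra information a plain
    group-level t-test throws away)."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(effects)) / np.sum(w)  # weighted group effect
    se = np.sqrt(1.0 / np.sum(w))                      # its standard error
    z = est / se
    return est, z, 2 * stats.norm.sf(abs(z))           # two-sided p-value
```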
Beam-On-Graph: Simultaneous Channel Estimation for mmWave MIMO Systems with Multiple Users | This paper is concerned with the channel estimation problem in multi-user
millimeter wave (mmWave) wireless systems with large antenna arrays. We develop
a novel simultaneous-estimation with iterative fountain training (SWIFT)
framework, in which multiple users estimate their channels at the same time and
the required number of channel measurements is adapted to various channel
conditions of different users. To achieve this, we represent the beam direction
estimation process by a graph, referred to as the beam-on-graph, and associate
the channel estimation process with a code-on-graph decoding problem.
Specifically, the base station (BS) and each user measure the channel with a
series of random combinations of transmit/receive beamforming vectors until the
channel estimate converges. As the proposed SWIFT does not adapt the BS's beams
to any single user, we are able to estimate all user channels simultaneously.
Simulation results show that SWIFT can significantly outperform the existing
random beamforming-based approaches, which use a predetermined number of
measurements, over a wide range of signal-to-noise ratios and channel coherence
time. Furthermore, by utilizing the users' order in terms of completing their
channel estimation, our SWIFT framework can infer the sequence of users'
channel quality and perform effective user scheduling to achieve superior
performance.
| 1 | 0 | 1 | 0 | 0 | 0 |
On Comparison Of Experts | A policy maker faces a sequence of unknown outcomes. At each stage two
(self-proclaimed) experts provide probabilistic forecasts on the outcome in the
next stage. A comparison test is a protocol for the policy maker to
(eventually) decide which of the two experts is better informed. The protocol
takes as input the sequence of pairs of forecasts and actual realizations and
(weakly) ranks the two experts. We propose two natural properties that such a
comparison test must adhere to and show that these essentially uniquely
determine the comparison test. This test is a function of the derivative of the
induced pair of measures at the realization.
| 1 | 0 | 1 | 0 | 0 | 0 |
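Since the test described above is a function of the densities of the induced measures at the realization, one concrete instance is a cumulative likelihood-ratio comparison, sketched below; whether this exact rule satisfies the paper's two properties is not claimed here.

```python
import math

def compare_experts(forecasts_a, forecasts_b, outcomes):
    """forecasts_*: per-stage dicts mapping each possible outcome to its
    forecast probability; outcomes: realized outcomes, one per stage."""
    ll_a = sum(math.log(f[o]) for f, o in zip(forecasts_a, outcomes))
    ll_b = sum(math.log(f[o]) for f, o in zip(forecasts_b, outcomes))
    if ll_a == ll_b:
        return "tie"
    return "expert A" if ll_a > ll_b else "expert B"
```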
Multiscale simulation on shearing transitions of thin-film lubrication with multi-layer molecules | Shearing transitions of multi-layer molecularly thin-film lubrication systems
in variations of the film-substrate coupling strength and the load are studied
by using a multiscale method. Three kinds of the interlayer slips found in
decreasing the coupling strength are in qualitative agreement with experimental
results. Although the tribological behaviors are almost insensitive to smaller
coupling strengths, both they and the effective film thickness grow increasingly
as the coupling strength increases further. When the load increases, the tribological
behaviors are similar to those under increasing coupling strength, but the
effective film thickness shows the opposite trend.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chirality-induced Antisymmetry in Magnetic Domain-Wall Speed | In chiral magnetic materials, numerous intriguing phenomena such as built-in
chiral magnetic domain walls (DWs) and skyrmions are generated by the
Dzyaloshinskii-Moriya interaction (DMI). The DMI also results in asymmetric DW
speed under an in-plane magnetic field, which provides a useful scheme to measure
the DMI strength. However, recent findings of additional asymmetries such as
chiral damping have prevented unambiguous DMI determination, and the underlying
mechanism of the overall asymmetry has come under debate. By extracting the
DMI-induced symmetric contribution, here we experimentally investigated the
nature of the additional asymmetry. The results revealed that the additional
asymmetry has a truly antisymmetric nature with the typical behavior governed
by the DW chirality. In addition, the antisymmetric contribution changes the DW
speed more than 100 times, which cannot be solely explained by the chiral
damping scenario. By calibrating such antisymmetric contributions, experimental
inaccuracies can be largely removed, enabling again the DMI measurement scheme.
| 0 | 1 | 0 | 0 | 0 | 0 |
Strong interaction between graphene layer and Fano resonance in terahertz metamaterials | Graphene has emerged as a promising building block in the modern optics and
optoelectronics due to its novel optical and electrical properties. In the
mid-infrared and terahertz (THz) regimes, graphene behaves like a metal and
supports surface plasmon resonances (SPRs). Moreover, the continuously tunable
conductivity of graphene enables active SPRs and gives rise to a range of
active applications. However, the interaction between graphene and metal-based
resonant metamaterials has not been fully understood. In this work, a
simulation investigation on the interaction between the graphene layer and THz
resonances supported by the two-gap split ring metamaterials is systematically
conducted. The simulation results show that the graphene layer can
substantially reduce the Fano resonance and even switch it off, while leaving the
dipole resonance nearly unaffected, a phenomenon well explained by the
high conductivity of graphene. By tuning the graphene conductivity
via altering its Fermi energy or layer number, the amplitude of the Fano
resonance can be modulated. The tunable Fano resonance here together with the
underlying physical mechanism can be strategically important in designing
active metal-graphene hybrid metamaterials. In addition, the "sensitivity" of
the Fano resonance to the graphene layer is also highly valued in the
field of ultrasensitive sensing, where this physical mechanism can be
employed to sense other highly conductive graphene-like two-dimensional (2D)
materials or biomolecules.
| 0 | 1 | 0 | 0 | 0 | 0 |
DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks | Transfer learning through fine-tuning a neural network pre-trained on an
extremely large dataset, such as ImageNet, can significantly accelerate
training, while the accuracy is frequently bottlenecked by the limited dataset
size of the new target task. To solve this problem, regularization methods that
constrain the outer-layer weights of the target network using the starting
point as reference (SPAR) have been studied. In this paper, we propose a novel
regularized transfer learning framework, DELTA, namely DEep Learning Transfer
using Feature Map with Attention. Instead of constraining the weights of the
neural network, DELTA aims to preserve the outer-layer outputs of the target
network. Specifically, in addition to minimizing the empirical loss, DELTA
aligns the outer-layer outputs of the two networks by constraining a subset of
feature maps that are precisely selected by attention learned in a supervised
manner. We evaluate DELTA against state-of-the-art algorithms, including L2 and
L2-SP. The experimental results show that our proposed method outperforms these
baselines, with higher accuracy on new tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
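The alignment idea in the DELTA abstract lends itself to a short sketch. The following is a minimal, hypothetical PyTorch rendering of a feature-map regularizer, not the authors' code: deviations of the target network's feature maps from the frozen pre-trained network's are penalized, weighted per channel by attention. The attention weights are assumed given here, whereas the paper learns them in a supervised manner, and all tensor shapes are illustrative.

```python
import torch

def feature_alignment_loss(target_feats, source_feats, attn_weights):
    """Penalize deviations of the target network's feature maps from those of
    the frozen pre-trained source network, weighted per channel by attention.

    target_feats, source_feats: lists of tensors of shape [B, C, H, W]
    attn_weights: list of tensors of shape [C], importance of each channel
    """
    loss = torch.zeros(())
    for ft, fs, w in zip(target_feats, source_feats, attn_weights):
        # mean squared deviation per channel, over batch and spatial dims
        per_channel = ((ft - fs.detach()) ** 2).mean(dim=(0, 2, 3))
        loss = loss + (w * per_channel).sum()
    return loss

# toy usage with random tensors standing in for real feature maps
ft = [torch.randn(4, 8, 16, 16, requires_grad=True)]
fs = [torch.randn(4, 8, 16, 16)]
w = [torch.rand(8)]
feature_alignment_loss(ft, fs, w).backward()
```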
The Burst Failure Influence on the $H_\infty$ Norm | In this work, we present an analysis of the burst failure effect on the
$H_\infty$ norm. We present a procedure for comparing different Markov chain
models, together with a numerical example. In the numerical example, the
results show that the burst failure effect on performance does not exceed
6.3%. However, this work is only an introduction to a wider and more extensive
analysis of this subject.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cosmological Simulations in Exascale Era | The architecture of exascale computing facilities, which involves millions of
heterogeneous processing units, will deeply impact scientific applications.
Future astrophysical HPC applications must be designed to exploit such
computing systems. The ExaNeSt H2020 EU-funded project aims to design and
develop an exascale-ready prototype based on low-energy-consumption ARM64 cores
and FPGA accelerators. We participate in the design of the platform and in the
validation of the prototype with cosmological N-body and hydrodynamical codes
suited to performing large-scale, high-resolution numerical simulations of
cosmic structure formation and evolution. We discuss our work on adapting
astrophysical applications to take advantage of the underlying architecture.
| 1 | 1 | 0 | 0 | 0 | 0 |
Time evolution of the Luttinger model with nonuniform temperature profile | We study the time evolution of a one-dimensional interacting fermion system
described by the Luttinger model starting from a nonequilibrium state defined
by a smooth temperature profile $T(x)$. As a specific example we consider the
case when $T(x)$ is equal to $T_L$ ($T_R$) far to the left (right). Using a
series expansion in $\epsilon = 2(T_{R} - T_{L})/(T_{L}+T_{R})$, we compute the
energy density, the heat current density, and the fermion two-point correlation
function for all times $t \geq 0$. For local (delta-function) interactions, the
first two are computed to all orders, giving simple exact expressions involving
the Schwarzian derivative of the integral of $T(x)$. For nonlocal interactions,
breaking scale invariance, we compute the nonequilibrium steady state (NESS) to
all orders and the evolution to first order in $\epsilon$. The heat current in
the NESS is universal even when conformal invariance is broken by the
interactions, and its dependence on $T_{L,R}$ agrees with numerical results for
the $XXZ$ spin chain. Moreover, our analytical formulas predict peaks at short
times in the transition region between different temperatures and show
dispersion effects that, even if nonuniversal, are qualitatively similar to
ones observed in numerical simulations for related models, such as spin chains
and interacting lattice fermions.
| 0 | 1 | 1 | 0 | 0 | 0 |
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm | The game of chess is the most widely-studied domain in the history of
artificial intelligence. The strongest programs are based on a combination of
sophisticated search techniques, domain-specific adaptations, and handcrafted
evaluation functions that have been refined by human experts over several
decades. In contrast, the AlphaGo Zero program recently achieved superhuman
performance in the game of Go, by tabula rasa reinforcement learning from games
of self-play. In this paper, we generalise this approach into a single
AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in
many challenging domains. Starting from random play, and given no domain
knowledge except the game rules, AlphaZero achieved within 24 hours a
superhuman level of play in the games of chess and shogi (Japanese chess) as
well as Go, and convincingly defeated a world-champion program in each case.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some remarks on Huisken's monotonicity formula for mean curvature flow | We discuss a monotone quantity related to Huisken's monotonicity formula and
some technical consequences for mean curvature flow.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inverse Moment Methods for Sufficient Forecasting using High-Dimensional Predictors | We consider forecasting a single time series using high-dimensional
predictors in the presence of a possible nonlinear forecast function. The
sufficient forecasting (Fan et al., 2016) used sliced inverse regression to
estimate lower-dimensional sufficient indices for nonparametric forecasting
using factor models. However, Fan et al. (2016) is fundamentally limited to the
inverse first-moment method, assuming a restricted fixed number of factors, a
linearity condition for the factors, and a monotone effect of the factors on
the response. In this work, we study the inverse second-moment method using
directional regression and the inverse third-moment method to extend the
methodology and applicability of the sufficient forecasting. As the number of
factors diverges with the dimension of predictors, the proposed method relaxes
the distributional assumption of the predictor and enhances the capability of
capturing the non-monotone effect of factors on the response. We not only
provide a high-dimensional analysis of inverse moment methods such as
exhaustiveness and rate of convergence, but also prove their model selection
consistency. The power of our proposed methods is demonstrated in both
simulation studies and an empirical study of forecasting monthly macroeconomic
data from Q1 1959 to Q1 2016. During our theoretical development, we prove an
invariance result for inverse moment methods, which makes a separate
contribution to sufficient dimension reduction.
| 0 | 0 | 1 | 1 | 0 | 0 |
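For readers unfamiliar with the first-moment baseline that the abstract above extends, here is a minimal NumPy sketch of classical sliced inverse regression; it is illustrative only and does not implement the paper's second- and third-moment methods.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_directions=1):
    """Estimate sufficient directions from the inverse first moment E[X|y]."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    # slice observations by the response and average predictors per slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Xc[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # leading eigenvectors of cov^{-1} M span the estimated index space
    evals, evecs = np.linalg.eig(np.linalg.solve(cov, M))
    top = np.argsort(evals.real)[::-1][:n_directions]
    return evecs[:, top].real

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.exp(X[:, 0]) + rng.normal(scale=0.1, size=500)  # depends on x1 only
print(sliced_inverse_regression(X, y).ravel())          # ~ e_1 direction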
Spectral energy distribution and radio halo of NGC 253 at low radio frequencies | We present new radio continuum observations of NGC253 from the Murchison
Widefield Array at frequencies between 76 and 227 MHz. We model the broadband
radio spectral energy distribution for the total flux density of NGC253 between
76 MHz and 11 GHz. The spectrum is best described as a sum of central starburst
and extended emission. The central component, corresponding to the inner 500 pc
of the starburst region of the galaxy, is best modelled as an internally
free-free absorbed synchrotron plasma, with a turnover frequency around 230
MHz. The extended emission component of the NGC253 spectrum is best described
as a synchrotron emission flattening at low radio frequencies. We find that 34%
of the extended emission (outside the central starburst region) at 1 GHz
becomes partially absorbed at low radio frequencies. Most of this flattening
occurs in the western region of the SE halo, and may be indicative of
synchrotron self-absorption of shock re-accelerated electrons or an intrinsic
low-energy cut off of the electron distribution. Furthermore, we detect the
large-scale synchrotron radio halo of NGC253 in our radio images. At 154-231
MHz the halo displays the well-known X-shaped/horn-like structure, and extends
out to ~8 kpc in the z-direction (from the major axis).
| 0 | 1 | 0 | 0 | 0 | 0 |
Emergent Phases of Fractonic Matter | Fractons are emergent particles which are immobile in isolation, but which
can move together in dipolar pairs or other small clusters. These exotic
excitations naturally occur in certain quantum phases of matter described by
tensor gauge theories. Previous research has focused on the properties of small
numbers of fractons and their interactions, effectively mapping out the
"Standard Model" of fractons. In the present work, however, we consider systems
with a finite density of either fractons or their dipolar bound states, with a
focus on the $U(1)$ fracton models. We study some of the phases in which
emergent fractonic matter can exist, thereby initiating the study of the
"condensed matter" of fractons. We begin by considering a system with a finite
density of fractons, which we show can exhibit microemulsion physics, in which
fractons form small-scale clusters emulsed in a phase dominated by long-range
repulsion. We then move on to study systems with a finite density of mobile
dipoles, which have phases analogous to many conventional condensed matter
phases. We focus on two major examples: Fermi liquids and quantum Hall phases.
A finite density of fermionic dipoles will form a Fermi surface and enter a
Fermi liquid phase. Interestingly, this dipolar Fermi liquid exhibits a
finite-temperature phase transition, corresponding to an unbinding transition
of fractons. Finally, we study chiral two-dimensional phases corresponding to
dipoles in "quantum Hall" states of their emergent magnetic field. We study
numerous aspects of these generalized quantum Hall systems, such as their edge
theories and ground state degeneracies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fractional Volterra Hierarchy | The generating function of cubic Hodge integrals satisfying the local
Calabi-Yau condition is conjectured to be a tau function of a new integrable
system which can be regarded as a fractional generalization of the Volterra
lattice hierarchy, so we name it the fractional Volterra hierarchy. In this
paper, we give the definition of this integrable hierarchy in terms of Lax pair
and Hamiltonian formalisms, construct its tau functions, and present its
multi-soliton solutions.
| 0 | 1 | 1 | 0 | 0 | 0 |
Deorbitalization strategies for meta-GGA exchange-correlation functionals | We explore the simplification of widely used meta-generalized-gradient
approximation (mGGA) exchange-correlation functionals to the Laplacian level of
refinement by use of approximate kinetic energy density functionals (KEDFs).
Such deorbitalization is motivated by the prospect of reducing computational
cost while recovering a strictly Kohn-Sham local potential framework (rather
than the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been
rather successful in solid simulations proves to be inadequate for
deorbitalization but we produce other forms which, with parametrization to
Kohn-Sham results (not experimental data) on a small training set, yield rather
good results on standard molecular test sets when used to deorbitalize the
meta-GGA made very simple, TPSS, and SCAN functionals. We also study the
difference between high-fidelity and best-performing deorbitalizations and
discuss possible implications for use in ab initio molecular dynamics
simulations of complicated condensed phase systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast counting of medium-sized rooted subgraphs | We prove that counting copies of any graph $F$ in another graph $G$ can be
achieved using basic matrix operations on the adjacency matrix of $G$.
Moreover, the resulting algorithm is competitive for medium-sized $F$: our
algorithm recovers the best known complexity for rooted 6-clique counting and
improves on the best known for 9-cycle counting. Underpinning our proofs is the
new result that, for a general class of graph operators, matrix operations are
homomorphisms for operations on rooted graphs.
| 1 | 0 | 1 | 0 | 0 | 0 |
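The simplest instance of counting subgraphs via matrix operations on the adjacency matrix is the triangle count from the trace of $A^3$; the sketch below shows only this textbook special case, not the paper's algorithms for larger rooted subgraphs.

```python
import numpy as np

def count_triangles(adj):
    """tr(A^3)/6 counts triangles: every triangle yields six closed walks of
    length three (three starting vertices times two orientations)."""
    A = np.asarray(adj, dtype=np.int64)
    return int(np.trace(A @ A @ A)) // 6

# 4-cycle plus one chord -> exactly two triangles
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(count_triangles(A))  # 2
```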
First-principles investigation of graphitic carbon nitride monolayer with embedded Fe atom | Density-functional theory calculations with the spin-polarized generalized
gradient approximation and a Hubbard $U$ correction are carried out to
investigate the mechanical, structural, electronic and magnetic properties of
graphitic heptazine with an embedded $\mathrm{Fe}$ atom under bi-axial tensile
strain and an applied perpendicular electric field. It is found that the
binding energy of the heptazine system with an embedded $\mathrm{Fe}$ atom
decreases as more tensile strain is applied and increases with increasing
electric field strength. Our calculations also predict a band gap at a peak
value of 5% tensile strain, but at the expense of the structural stability of
the system. The band gap opening at 5% tensile strain is due to distortion in
the structure caused by the repulsive effect in the cavity between the lone
pairs of the edge nitrogen atoms and the
$\mathrm{d}_{xy}/\mathrm{d}_{x^2-y^2}$ orbital of the Fe atom; hence the
unoccupied $\mathrm{p}_z$ orbital is forced to shift towards higher energy. The
electronic and magnetic properties of the heptazine system with embedded
$\mathrm{Fe}$ under a perpendicular electric field up to a peak value of 10
$\mathrm{V/nm}$ are also well preserved despite an obviously buckled structure.
Such properties may be desirable for diluted magnetic semiconductors,
spintronics, and sensing devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Courant's Nodal Domain Theorem for Positivity Preserving Forms | We introduce a notion of nodal domains for positivity preserving forms. This
notion generalizes the classical ones for Laplacians on domains and on graphs.
We prove the Courant nodal domain theorem in this generalized setting using
purely analytical methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
Intelligent Notification Systems: A Survey of the State of the Art and Research Challenges | Notifications provide a unique mechanism for increasing the effectiveness of
real-time information delivery systems. However, notifications that demand
users' attention at inopportune moments are more likely to have adverse effects
and might become a cause of disruption rather than proving beneficial to
users. In order to address these challenges, a variety of intelligent
notification mechanisms based on monitoring and learning users' behavior have
been proposed. The goal of such mechanisms is to maximize users' receptivity to
the delivered information by automatically inferring the right time and the
right context for sending a certain type of information.
This article provides an overview of the current state of the art in the area
of intelligent notification mechanisms that rely on awareness of users'
context and preferences. More specifically, we first present a survey of
studies focusing on understanding and modeling users' interruptibility and
receptivity to notifications from desktops and mobile devices. Then, we discuss
the existing challenges and opportunities in developing mechanisms for
intelligent notification systems in a variety of application scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Using Matching to Detect Infeasibility of Some Integer Programs | A novel matching-based heuristic algorithm designed to detect specially
formulated infeasible zero-one IPs is presented. The algorithm's input is a set
of nested doubly stochastic subsystems and a set E of instance-defining
variables set at zero level. The algorithm deduces additional variables at zero
level until either a constraint is violated (the IP is infeasible) or no more
variables can be deduced to be zero (the IP is undecided). All feasible IPs,
and all infeasible IPs not detected as infeasible, are undecided. We
successfully apply the algorithm to a small set of specially formulated
infeasible zero-one IP instances of the Hamilton cycle decision problem. We
show how to model both the graph and subgraph isomorphism decision problems for
input to the algorithm. Increased levels of nested doubly stochastic subsystems
can be implemented dynamically. The algorithm is designed for parallel
processing and for the inclusion of techniques in addition to matching.
| 1 | 0 | 0 | 0 | 0 | 0 |
α7 nicotinic acetylcholine receptor signaling modulates ovine fetal brain astrocytes transcriptome in response to endotoxin: comparison to microglia, implications for prenatal stress and development of autism spectrum disorder | Neuroinflammation in utero may result in lifelong neurological disabilities.
Astrocytes play a pivotal role, but the mechanisms are poorly understood. No
early postnatal treatment strategies exist to enhance neuroprotective potential
of astrocytes. We hypothesized that agonism on {\alpha}7 nicotinic
acetylcholine receptor ({\alpha}7nAChR) in fetal astrocytes will augment their
neuroprotective transcriptome profile, while the antagonistic stimulation of
{\alpha}7nAChR will achieve the opposite. Using an in vivo-in vitro model of
developmental programming of neuroinflammation induced by lipopolysaccharide
(LPS), we validated this hypothesis in primary fetal sheep astrocyte cultures
re-exposed to LPS in the presence of a selective {\alpha}7nAChR agonist or
antagonist. Our RNAseq findings show that a pro-inflammatory astrocyte
transcriptome phenotype acquired in vitro by LPS stimulation is reversed with
{\alpha}7nAChR agonistic stimulation. Conversely, antagonistic {\alpha}7nAChR
stimulation potentiates the pro-inflammatory astrocytic transcriptome
phenotype. Furthermore, we conduct a secondary transcriptome analysis against
the identical {\alpha}7nAChR experiments in fetal sheep primary microglia
cultures and against the Simons Simplex Collection for autism spectrum disorder
and discuss the implications.
| 0 | 0 | 0 | 0 | 1 | 0 |
Face centered cubic and hexagonal close packed skyrmion crystals in centro-symmetric magnets | Skyrmions are disk-like objects that typically form triangular crystals in
two dimensional systems. This situation is analogous to the so-called "pancake
vortices" of quasi-two dimensional superconductors. The way in which skyrmion
disks or pancake skyrmions pile up in layered centro-symmetric materials is
dictated by the inter-layer exchange. Unbiased Monte Carlo simulations and
simple stabilization arguments reveal face centered cubic and hexagonal close
packed skyrmion crystals for different choices of the inter-layer exchange, in
addition to the conventional triangular crystal of skyrmion lines. Moreover, an
inhomogeneous current induces sliding motion of pancake skyrmions, indicating
that they behave as effective mesoscale particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search | Deep convolutional neural networks demonstrate impressive results in the
super-resolution domain. A series of studies concentrate on improving peak
signal-to-noise ratio (PSNR) by using much deeper layers, which are not
friendly to constrained resources. Pursuing a trade-off between restoration
capacity and model simplicity is still non-trivial. Recent contributions
struggle to maximize this balance manually, while our work achieves the same
goal automatically with neural architecture search. Specifically, we handle
super-resolution with a multi-objective approach. We also propose an elastic
search tactic at both the micro and macro levels, based on a hybrid controller
that profits from evolutionary computation and reinforcement learning.
Quantitative experiments lead us to conclude that our generated models dominate
most of the state-of-the-art methods with respect to their individual FLOPS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robbins-Monro conditions for persistent exploration learning strategies | We formulate simple assumptions implying the Robbins-Monro conditions for
the $Q$-learning algorithm with a local learning rate that depends on the
number of visits to a particular state-action pair (local clock) and the number
of iterations (global clock). It is assumed that the Markov decision process is
communicating and that the learning policy ensures persistent exploration. The
restrictions are imposed on the functional dependence of the learning rate on
the local and global clocks. The result partially confirms a conjecture of
Bradtke (1994).
| 0 | 0 | 0 | 1 | 0 | 0 |
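A minimal sketch of the setting the abstract studies: tabular Q-learning with a step size driven by the local clock. The environment interface is assumed Gymnasium-style, and the specific constants are illustrative.

```python
import numpy as np
from collections import defaultdict

def q_learning_local_clock(env, episodes=500, gamma=0.95, omega=0.7, eps=0.1):
    """Tabular Q-learning with local-clock learning rate a = 1 / n(s, a)^omega.

    For 0.5 < omega <= 1 each per-pair step size satisfies the Robbins-Monro
    conditions (sum a = inf, sum a^2 < inf), provided every state-action pair
    is visited infinitely often, i.e. exploration is persistent.
    """
    Q = defaultdict(float)   # Q[(state, action)] -> value estimate
    n = defaultdict(int)     # n[(state, action)] -> visit count (local clock)
    n_actions = env.action_space.n
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy choice keeps exploration persistent
            if rng.random() < eps:
                a = int(rng.integers(n_actions))
            else:
                a = max(range(n_actions), key=lambda b: Q[(s, b)])
            s2, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            n[(s, a)] += 1
            alpha = 1.0 / n[(s, a)] ** omega   # local-clock learning rate
            best = 0.0 if terminated else max(Q[(s2, b)] for b in range(n_actions))
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = s2
    return Q
```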
Localization in the Disordered Holstein model | The Holstein model describes the motion of a tight-binding tracer particle
interacting with a field of quantum harmonic oscillators. We consider this
model with an on-site random potential. Provided the hopping amplitude for the
particle is small, we prove localization for matrix elements of the resolvent,
in particle position and in the field Fock space. These bounds imply a form of
dynamical localization for the particle position that leaves open the
possibility of resonant tunneling in Fock space between equivalent field
configurations.
| 0 | 1 | 1 | 0 | 0 | 0 |
Universal Planning Networks | A key challenge in complex visuomotor control is learning abstract
representations that are effective for specifying goals, planning, and
generalization. To this end, we introduce universal planning networks (UPN).
UPNs embed differentiable planning within a goal-directed policy. This planning
computation unrolls a forward model in a latent space and infers an optimal
action plan through gradient descent trajectory optimization. The
plan-by-gradient-descent process and its underlying representations are learned
end-to-end to directly optimize a supervised imitation learning objective. We
find that the representations learned are not only effective for goal-directed
visual imitation via gradient-based trajectory optimization, but can also
provide a metric for specifying goals using images. The learned representations
can be leveraged to specify distance-based rewards to reach new target states
for model-free reinforcement learning, resulting in substantially more
effective learning when solving new tasks described via image-based goals. We
were able to achieve successful transfer of visuomotor planning strategies
across robots with significantly different morphologies and actuation
capabilities.
| 1 | 0 | 0 | 1 | 0 | 0 |
Adversarial Removal of Demographic Attributes from Text Data | Recent advances in Representation Learning and Adversarial Training seem to
succeed in removing unwanted features from the learned representation. We show
that demographic information of authors is encoded in -- and can be recovered
from -- the intermediate representations learned by text-based neural
classifiers. The implication is that decisions of classifiers trained on
textual data are not agnostic to -- and likely condition on -- demographic
attributes. When attempting to remove such demographic information using
adversarial training, we find that while the adversarial component achieves
chance-level development-set accuracy during training, a post-hoc classifier,
trained on the encoded sentences from the first part, still manages to reach
substantially higher classification accuracies on the same data. This behavior
is consistent across several tasks, demographic properties and datasets. We
explore several techniques to improve the effectiveness of the adversarial
component. Our main conclusion is a cautionary one: do not rely on the
adversarial training to achieve invariant representation to sensitive features.
| 0 | 0 | 0 | 1 | 0 | 0 |
Superradiance phase transition in the presence of parameter fluctuations | We theoretically analyze the effect of parameter fluctuations on the
superradiance phase transition in a setup where a large number of
superconducting qubits are coupled to a single cavity. We include parameter
fluctuations that are typical of superconducting architectures, such as
fluctuations in qubit gaps, bias points and qubit-cavity coupling strengths. We
find that the phase transition should occur in this case, although it manifests
itself somewhat differently from the case with no fluctuations. We also find
that fluctuations in the qubit gaps and qubit-cavity coupling strengths do not
necessarily make it more difficult to reach the transition point. Fluctuations
in the bias points, however, increase the coupling strength required to reach
the quantum phase transition point and enter the superradiant phase. Similarly,
these fluctuations lower the critical temperature for the thermal phase
transition.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Decision Support Method for Recommending Degrees of Exploration in Exploratory Testing | Exploratory testing is neither black nor white; rather, a continuum of
exploration exists. In this research, we propose a decision support approach
that helps practitioners distribute time between different degrees of
exploratory testing on that continuum. To make the continuum manageable, five
levels have been defined: freestyle testing; high, medium and low degrees of
exploration; and scripted testing. The decision support approach is based on
the repertory grid technique and has been used in one company, with focus
groups as the method for data collection. The results showed that the proposed
approach aids practitioners in reflecting on which exploratory testing levels
to use, and aligns their understanding of the priorities of decision criteria
and of the performance of exploratory testing levels in their contexts. The
findings also showed that the participating company, which currently conducts
mostly scripted testing, should spend more time testing with higher degrees of
exploration relative to scripted testing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantum Field Theory and Coalgebraic Logic in Theoretical Computer Science | In this paper we suggest that, in the framework of category theory, it is
possible to demonstrate the mathematical and logical \textit{dual equivalence}
between the category of $q$-deformed Hopf coalgebras and the category of
$q$-deformed Hopf algebras in QFT, interpreted as a thermal field theory. Each
algebra-coalgebra pair characterizes a QFT system and its mirroring thermal
bath, respectively, so as to model dissipative quantum systems persistently in
far-from-equilibrium conditions, with evident significance also for the
biological sciences. The $q$-deformed Hopf coalgebras and the $q$-deformed
Hopf algebras constitute two dual categories because they are characterized by
the same functor $T$, related to the Bogoliubov transform, and by its
contravariant application $T^{op}$, respectively. The \textit{q}-deformation
parameter is indeed related to the Bogoliubov angle, and it is effectively a
thermal parameter. Therefore, the different values of $q$ univocally identify,
and hence label, the vacua appearing in the foliation process of the quantum
vacuum. This means that, in the framework of Universal Coalgebra, as a general
theory of dynamic and computing systems ("labelled state-transition systems"),
the so-labelled infinitely many quantum vacua can be interpreted as the Final
Coalgebra of an "Infinite State Black-Box Machine". All this opens the way to
the possibility of designing a new class of universal quantum computing
architectures based on this coalgebraic formulation of QFT, as demonstrated by
its ability to naturally generate a Fibonacci progression.
| 1 | 0 | 1 | 0 | 0 | 0 |
Spatiotemporal Prediction of Ambulance Demand using Gaussian Process Regression | Accurately predicting when and where ambulance call-outs occur can reduce
response times and ensure the patient receives urgent care sooner. Here we
present a novel method for ambulance demand prediction using Gaussian process
regression (GPR) in time and geographic space. The method exhibits accuracy
superior to that of MEDIC, an approach used in industry. The use of GPR has
additional benefits such as the quantification of uncertainty with each
prediction, the choice of kernel functions to encode prior knowledge and the
ability to capture spatial correlation. Measures to increase the utility of GPR
in the current context, with large training sets and a Poisson-distributed
output, are outlined.
| 0 | 0 | 0 | 1 | 0 | 0 |
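A toy illustration of GPR-based demand prediction with scikit-learn follows; the data are synthetic, the kernel choice is illustrative, and (as the abstract notes) a Poisson-distributed output would require measures beyond this Gaussian-likelihood sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy spatiotemporal demand: inputs are (time, x, y), target is call volume.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = np.sin(6 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)

# One length scale per input dimension encodes the prior knowledge that
# temporal and spatial correlations decay at different rates.
kernel = RBF(length_scale=[0.2, 0.3, 0.3]) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.uniform(0, 1, size=(5, 3))
mean, std = gpr.predict(X_new, return_std=True)
print(mean, std)   # every forecast comes with an uncertainty estimate
```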
ArchiveWeb: collaboratively extending and exploring web archive collections - How would you like to work with your collections? | Curated web archive collections contain focused digital content which is
collected by archiving organizations, groups, and individuals to provide a
representative sample covering specific topics and events to preserve them for
future exploration and analysis. In this paper, we discuss how to best support
collaborative construction and exploration of these collections through the
ArchiveWeb system. ArchiveWeb has been developed using an iterative
evaluation-driven design-based research approach, with considerable user
feedback at all stages. The first part of this paper describes the important
insights we gained from our initial requirements engineering phase during the
first year of the project and the main functionalities of the current
ArchiveWeb system for searching, constructing, exploring, and discussing web
archive collections. The second part summarizes the feedback we received on
this version from archiving organizations and libraries, as well as our
corresponding plans for improving and extending the system for the next
release.
| 1 | 0 | 0 | 0 | 0 | 0 |
gl2vec: Learning Feature Representation Using Graphlets for Directed Networks | Learning network representations has a variety of applications, such as
network classification. Most existing work in this area focuses on static
undirected networks and does not account for the presence of directed edges or
temporal changes. Furthermore, most work focuses on node representations that
do poorly on tasks like network classification. In this paper, we propose a
novel, flexible and scalable network embedding methodology, \emph{gl2vec}, for
network classification in both static and temporal directed networks.
\emph{gl2vec} constructs vectors for feature representation using static or
temporal network graphlet distributions and a null model for comparing them
against random graphs. We argue that \emph{gl2vec} can be used to classify and
compare networks of varying sizes and time periods with high accuracy. We
demonstrate the efficacy and usability of \emph{gl2vec} over existing
state-of-the-art methods on network classification tasks such as network type
classification and subgraph identification in several real-world static and
temporal directed networks. Experimental results further show that
\emph{gl2vec}, concatenated with a wide range of state-of-the-art methods,
improves classification accuracy by up to $10\%$ in real-world applications
such as detecting departments for subgraphs in an email network or identifying
mobile users given their app-switching behaviors represented as static or
temporal directed networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
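A hedged sketch of the kind of graphlet-distribution feature that gl2vec builds on, using NetworkX's directed triad census; the actual method additionally normalizes against null-model random graphs and handles temporal networks, which this sketch does not.

```python
import networkx as nx
import numpy as np

def triad_feature_vector(G):
    """Summarize a directed network by its normalized triad census: a
    fixed-length vector of 3-node graphlet frequencies that can be compared
    across networks of different sizes."""
    census = nx.triadic_census(G)            # counts for the 16 triad types
    keys = sorted(census)                    # fixed ordering of triad codes
    v = np.array([census[k] for k in keys], dtype=float)
    total = v.sum()
    return v / total if total > 0 else v

G = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)
print(triad_feature_vector(G))
```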
Evidence of s-wave superconductivity in the noncentrosymmetric La$_7$Ir$_3$ | Superconductivity in noncentrosymmetric compounds has attracted sustained
interest in recent decades. Here we present a detailed study of the transport
and thermodynamic properties and the band structure of the noncentrosymmetric
superconductor La$_7$Ir$_3$ ($T_c$ $\sim$2.3 K), which was recently proposed to
break time-reversal symmetry. It is found that
La$_7$Ir$_3$ displays a moderately large electronic heat capacity (Sommerfeld
coefficient $\gamma_n$ $\sim$ 53.1 mJ/mol $\text{K}^2$) and a significantly
enhanced Kadowaki-Woods ratio (KWR $\sim$ 32 $\mu\Omega$ cm mol$^2$ K$^2$
J$^{-2}$) that is greater than the typical value ($\sim$ 10 $\mu\Omega$ cm
mol$^2$ K$^2$ J$^{-2}$) for strongly correlated electron systems. The upper
critical field $H_{c2}$ was seen to be nicely described by the single-band
Werthamer-Helfand-Hohenberg model down to very low temperatures. The
hydrostatic pressure effects on the superconductivity were also investigated.
The heat capacity below $T_c$ reveals a dominant s-wave gap with the magnitude
close to the BCS value. The first-principles calculations yield the
electron-phonon coupling constant $\lambda$ = 0.81 and the logarithmically
averaged frequency $\omega_{ln}$ = 78.5 K, resulting in a theoretical $T_c$ =
2.5 K, close to the experimental value. Our calculations suggest that the
enhanced electronic heat capacity is more likely due to electron-phonon
coupling, rather than the electron-electron correlation effects. Collectively,
these results place severe constraints on any theory of exotic
superconductivity in this system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Essential Dimension of Generic Symbols in Characteristic p | In this article the $p$-essential dimension of generic symbols over fields of
characteristic $p$ is studied. In particular, the $p$-essential dimension of
the length $\ell$ generic $p$-symbol of degree $n+1$ is bounded below by
$n+\ell$ when the base field is algebraically closed of characteristic $p$. The
proof uses new techniques for working with residues in Milne-Kato
$p$-cohomology and builds on work of Babic and Chernousov in the Witt group in
characteristic 2. Two corollaries on $p$-symbol algebras (i.e., degree 2
symbols) result from this work. The generic $p$-symbol algebra of length $\ell$
is shown to have $p$-essential dimension equal to $\ell+1$ as a $p$-torsion
Brauer class. The second is a lower bound of $\ell+1$ on the $p$-essential
dimension of the functor $\mathrm{Alg}_{p^\ell,p}$. Roughly speaking, this says
that at least $\ell+1$ independent parameters are needed to specify any given
algebra of degree $p^{\ell}$ and exponent $p$ over a field of characteristic
$p$, improving on the previously established lower bound of 3.
| 0 | 0 | 1 | 0 | 0 | 0 |
Ask less - Scale Market Research without Annoying Your Customers | Market research is generally performed by surveying a representative sample
of customers with questions that include contexts such as psychographics,
demographics, attitude and product preferences. Survey responses are used to
segment the customers into various groups that are useful for targeted
marketing and communication. Reducing the number of questions asked of the
customer has utility for businesses scaling market research to a large number
of customers. In this work, we model this task using Bayesian networks and
demonstrate the effectiveness of our approach using an example market
segmentation of broadband customers.
| 1 | 0 | 0 | 1 | 0 | 0 |
Adaptive Regularized Newton Method for Riemannian Optimization | Optimization on Riemannian manifolds arises widely in eigenvalue computation,
density functional theory, Bose-Einstein condensates, low-rank nearest
correlation, image registration, signal processing, etc. We propose an
adaptive regularized Newton method which approximates the original objective
function by its second-order Taylor expansion in Euclidean space but keeps the
Riemannian manifold constraints. The regularization term in the objective
function of the subproblem enables us to establish a Cauchy-point-like
condition, as in the standard trust-region method, for proving global
convergence. The subproblem can be solved inexactly either by first-order
methods or by a modified Riemannian Newton method. In the latter case, it can
further take advantage of negative curvature directions. Both global
convergence and superlinear local convergence are guaranteed under mild
conditions. Extensive computational experiments and comparisons with other
state-of-the-art methods indicate that the proposed algorithm is very
promising.
| 0 | 0 | 1 | 0 | 0 | 0 |
MPC meets SNA: A Privacy Preserving Analysis of Distributed Sensitive Social Networks | In this paper, we formalize the notion of distributed sensitive social
networks (DSSNs), which encompasses networks like enmity networks, financial
transaction networks, supply chain networks and sexual relationship networks.
Compared to well-studied traditional social networks, DSSNs are often more
challenging to study, given the privacy concerns of the individuals on whom the
network is knit. In the current work, we envision the use of secure multiparty
computation tools and techniques for performing privacy-preserving social
network analysis over DSSNs. As a step towards realizing this, we design
efficient data-oblivious algorithms for computing the K-shell decomposition and
the PageRank centrality measure for a given DSSN. The designed data-oblivious
algorithms can be translated into equivalent secure computation protocols. We
also list a string of challenges that need to be addressed before secure
computation protocols can serve as a practical solution for studying DSSNs.
| 1 | 0 | 0 | 0 | 0 | 0 |
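For orientation, here are the two measures named in the abstract in their ordinary, non-private form with NetworkX; the paper's contribution is to compute these obliviously, which this baseline sketch does not attempt.

```python
import networkx as nx

# Plaintext versions of the two measures the paper computes obliviously;
# a secure protocol must reproduce these outputs without revealing the edges.
G = nx.karate_club_graph()
kshell = nx.core_number(G)       # node -> k-shell (k-core) index
pagerank = nx.pagerank(G)        # node -> PageRank centrality
print(max(kshell.values()), max(pagerank, key=pagerank.get))
```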
Lossy Image Compression with Compressive Autoencoders | We propose a new approach to the problem of optimizing autoencoders for lossy
image compression. New media formats, changing hardware technology, as well as
diverse requirements and content types create a need for compression algorithms
which are more flexible than existing codecs. Autoencoders have the potential
to address this need, but are difficult to optimize directly due to the
inherent non-differentiability of the compression loss. Here we show that
minimal changes to the loss are sufficient to train deep autoencoders
competitive with JPEG 2000 and outperforming recently proposed approaches based
on RNNs. Our network is furthermore computationally efficient thanks to a
sub-pixel architecture, which makes it suitable for high-resolution images.
This is in contrast to previous work on autoencoders for compression using
coarser approximations, shallower architectures, computationally expensive
methods, or focusing on small images.
| 1 | 0 | 0 | 1 | 0 | 0 |
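The sub-pixel architecture mentioned above can be sketched with PyTorch's built-in PixelShuffle; the layer sizes here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Sub-pixel upsampling block: a convolution produces r^2 times the channels,
# which PixelShuffle rearranges into an r-times larger feature map; this is
# cheaper than transposed convolutions at high resolution.
class SubPixelUpsample(nn.Module):
    def __init__(self, channels, r=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * r * r, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(r)

    def forward(self, x):
        return self.shuffle(self.conv(x))

x = torch.randn(1, 64, 32, 32)
print(SubPixelUpsample(64)(x).shape)  # torch.Size([1, 64, 64, 64])
```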
A cost effective and reliable environment monitoring system for HPC applications | We present a slow control system that gathers all the environmental
information necessary to run an HPC (High Performance Computing) system
effectively and reliably at a high value-to-price ratio. The scalable and
reliable overall concept is presented, as well as a newly developed hardware
device for sensor read-out. This device incorporates a Raspberry Pi, an Arduino
and PoE (Power over Ethernet) functionality in a compact form factor. The
system is in use at the 2 PFLOPS cluster of the Johannes Gutenberg-University
and Helmholtz-Institute in Mainz.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quenching the Kitaev honeycomb model | I studied the non-equilibrium response of an initial Néel state under
time evolution with the Kitaev honeycomb model. This time evolution can be
computed using a random sampling over all relevant flux configurations. With
isotropic interactions the system quickly equilibrates into a steady state
valence bond solid. Anisotropy induces an exponentially long prethermal regime
whose dynamics are governed by an effective toric code. Signatures of topology
are absent, however, due to the high energy density nature of the initial
state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Privacy-Preserving Adversarial Networks | We propose a data-driven framework for optimizing privacy-preserving data
release mechanisms toward the information-theoretically optimal tradeoff
between minimizing distortion of useful data and concealing sensitive
information. Our approach employs adversarially-trained neural networks to
implement randomized mechanisms and to perform a variational approximation of
mutual information privacy. We empirically validate our Privacy-Preserving
Adversarial Networks (PPAN) framework with experiments conducted on discrete
and continuous synthetic data, as well as the MNIST handwritten digits dataset.
With the synthetic data, we find that our model-agnostic PPAN approach achieves
tradeoff points very close to the optimal tradeoffs that are
analytically-derived from model knowledge. In experiments with the MNIST data,
we visually demonstrate a learned tradeoff between minimizing the pixel-level
distortion versus concealing the written digit.
| 1 | 0 | 0 | 1 | 0 | 0 |
Antiferromagnetic structure and electronic properties of BaCr2As2 and BaCrFeAs2 | The chromium arsenides BaCr2As2 and BaCrFeAs2 with ThCr2Si2 type structure
(space group I4/mmm; also adopted by '122' iron arsenide superconductors) have
been suggested as mother compounds for possible new superconductors. DFT-based
calculations of the electronic structure evidence metallic antiferromagnetic
ground states for both compounds. By powder neutron diffraction we confirm for
BaCr2As2 a robust ordering in the antiferromagnetic G-type structure at T_N =
580 K with mu_Cr = 1.9 mu_B at T = 2 K. Anomalies in the lattice parameters
point to magneto-structural coupling effects. In BaCrFeAs2 the Cr and Fe atoms
randomly occupy the transition-metal site and G-type order is found below 265 K
with mu_Cr/Fe = 1.1 mu_B. 57Fe Moessbauer spectroscopy demonstrates that only a
small ordered moment is associated with the Fe atoms, in agreement with
electronic structure calculations with mu_Fe ~ 0. The temperature dependence of
the hyperfine field does not follow that of the total moments. Both compounds
are metallic but show large enhancements of the linear specific heat
coefficient gamma with respect to the band structure values. The metallic state
and the electrical transport in BaCrFeAs2 is dominated by the atomic disorder
of Cr and Fe and partial magnetic disorder of Fe. Our results indicate that
Néel-type order is unfavorable for the Fe moments and thus it is destabilized
with increasing iron content.
| 0 | 1 | 0 | 0 | 0 | 0 |
Molecular Modeling of the Microstructure Evolution during the Carbonization of PAN-Based Carbon Fibers | Development of high strength carbon fibers (CFs) requires an understanding of
the relationship between the processing conditions, microstructure and
resulting properties. We developed a molecular model that combines kinetic
Monte Carlo (KMC) and molecular dynamics (MD) techniques to predict the
microstructure evolution during the carbonization process of carbon fiber
manufacturing. The model accurately predicts the cross-sectional microstructure
of carbon fibers, predicting features such as graphitic sheets and hairpin
structures that have been observed experimentally. We predict the transverse
modulus of the resulting fibers and find that the modulus is slightly lower
than experimental values, but is up to an order of magnitude lower than ideal
graphite. We attribute this to the perfect longitudinal texture of our
simulated structures, as well as the chain sliding mechanism that governs the
deformation of the fibers, rather than the van der Waals interaction that
governs the modulus for graphite. We also observe that high reaction rates
result in porous structures that have lower moduli.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hyperinflation | A model of cosmological inflation is proposed in which field space is a
hyperbolic plane. The inflaton never slow-rolls, and instead orbits the bottom
of the potential, buoyed by a centrifugal force. Though initial velocities
redshift away during inflation, in negatively curved spaces angular momentum
naturally starts exponentially large and remains relevant throughout. Quantum
fluctuations produce perturbations that are adiabatic and approximately scale
invariant; strikingly, in a certain parameter regime the perturbations can grow
double-exponentially during horizon crossing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improving on Q & A Recurrent Neural Networks Using Noun-Tagging | Often, more time is spent finding a model that works well than on tuning the
model and working directly with the dataset. Our research began as an attempt
to improve upon a simple Recurrent Neural Network for answering "simple"
first-order questions (QA-RNN), developed by Ferhan Ture and Oliver Jojic of
Comcast Labs, using the SimpleQuestions dataset. Their baseline models, a
bidirectional 2-layer LSTM RNN and a GRU RNN, have accuracies of 0.94 and 0.90
for entity detection and relation prediction, respectively. We fine-tuned these
models through substantial hyper-parameter tuning, obtaining accuracies of 0.70
and 0.80 for entity detection and relation prediction, respectively. An
accuracy of 0.984 was obtained on entity detection using a 1-layer LSTM, where
preprocessing removed from the question all words not part of a noun chunk.
100% of the dataset was available for relation prediction, but only 20% was
available for entity detection, which we believe accounts for much of our
initial difficulty in replicating their results, despite the fact that we were
able to improve on their entity detection results.
| 0 | 0 | 0 | 1 | 0 | 0 |
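The noun-chunk preprocessing described above can be reproduced, under the assumption of a spaCy-style chunker (the abstract does not specify which chunker was used), roughly as follows.

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def keep_noun_chunks(question):
    """Drop every token that is not part of a noun chunk, mirroring the
    preprocessing described for entity detection."""
    doc = nlp(question)
    keep = {tok.i for chunk in doc.noun_chunks for tok in chunk}
    return " ".join(tok.text for tok in doc if tok.i in keep)

print(keep_noun_chunks("what film did the great gatsby's director direct"))
```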
Injectivity and weak*-to-weak continuity suffice for convergence rates in $\ell^1$-regularization | We show that the convergence rate of $\ell^1$-regularization for linear
ill-posed equations is always $O(\delta)$ if the exact solution is sparse and
if the considered operator is injective and weak*-to-weak continuous. Under the
same assumptions convergence rates in case of non-sparse solutions are proven.
The results are based on the fact that certain source-type conditions used in
the literature for proving convergence rates are automatically satisfied.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Panel Prototype for the Mu2e Straw Tube Tracker at Fermilab | The Mu2e experiment will search for coherent, neutrino-less conversion of
muons into electrons in the Coulomb field of an aluminum nucleus with a
sensitivity of four orders of magnitude better than previous experiments. The
signature of this process is an electron with energy nearly equal to the muon
mass. Mu2e relies on a precision (0.1%) measurement of the outgoing electron
momentum to separate signal from background. In order to achieve this goal,
Mu2e has chosen a very low-mass straw tracker, made of 20,736 5 mm diameter
thin-walled (15 $\mu$m) Mylar straws, held under tension to avoid the need for
supports within the active volume, and arranged in an approximately 3 m long by
0.7 m radius cylinder, operated in vacuum and a 1 T magnetic field. Groups of
96 straws are assembled into modules, called panels. We present the prototype
and the assembly procedure for a Mu2e tracker panel built at Fermilab.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Review on Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano Things (IoNT) | The current prominence and future promises of the Internet of Things (IoT),
Internet of Everything (IoE) and Internet of Nano Things (IoNT) are extensively
reviewed, and a summary survey report is presented. The analysis clearly
distinguishes between IoT and IoE, which are wrongly considered to be the same
by many people. Upon examining the current advancements in the fields of IoT,
IoE and IoNT, the paper presents scenarios for the possible future expansion of
their applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ensemble Adversarial Training: Attacks and Defenses | Adversarial examples are perturbed inputs designed to fool machine learning
models. Adversarial training injects such examples into training data to
increase robustness. To scale this technique to large datasets, perturbations
are crafted using fast single-step methods that maximize a linear approximation
of the model's loss. We show that this form of adversarial training converges
to a degenerate global minimum, wherein small curvature artifacts near the data
points obfuscate a linear approximation of the loss. The model thus learns to
generate weak perturbations, rather than defend against strong ones. As a
result, we find that adversarial training remains vulnerable to black-box
attacks, where we transfer perturbations computed on undefended models, as well
as to a powerful novel single-step attack that escapes the non-smooth vicinity
of the input data via a small random step. We further introduce Ensemble
Adversarial Training, a technique that augments training data with
perturbations transferred from other models. On ImageNet, Ensemble Adversarial
Training yields models with strong robustness to black-box attacks. In
particular, our most robust model won the first round of the NIPS 2017
competition on Defenses against Adversarial Attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
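A minimal PyTorch sketch of the single-step attack with a small random step described in the abstract; the step sizes are illustrative, and this is not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def r_fgsm(model, x, y, eps=8/255, alpha=4/255):
    """Single-step attack preceded by a small random step, which escapes the
    non-smooth vicinity of the data point before linearizing the loss."""
    x_rand = x + alpha * torch.sign(torch.randn_like(x))
    x_rand = x_rand.clamp(0, 1).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    loss.backward()
    x_adv = x_rand + (eps - alpha) * x_rand.grad.sign()
    return x_adv.clamp(0, 1).detach()

# toy usage on random "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(2, 3, 8, 8)
y = torch.tensor([1, 7])
x_adv = r_fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays bounded by eps
```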
X-Ray bright optically faint active galactic nuclei in the Subaru Hyper Suprime-Cam wide survey | We construct a sample of X-ray bright, optically faint active galactic nuclei
by combining Subaru Hyper Suprime-Cam, XMM-Newton, and infrared source
catalogs. 53 X-ray sources with i-band magnitudes fainter than 23.5 mag and
more than 70 X-ray counts with the EPIC-PN detector are selected from 9.1
deg^2, and their spectral energy distributions (SEDs) and X-ray spectra are
analyzed. 44 objects with an X-ray to i-band flux ratio F_X/F_i>10 are
classified as extreme X-ray-to-optical flux sources. The SEDs of 48 of the 53
objects are represented by templates of type 2 AGNs or star-forming galaxies
and show signatures of stellar emission from host galaxies in the optical in
the source rest frame. The infrared/optical SEDs indicate a significant
contribution of dust emission to the infrared fluxes and that the central AGN
is dust obscured. Photometric redshifts determined from the SEDs are in the
range of 0.6-2.5. The X-ray spectra are fitted by an absorbed power-law model,
and the intrinsic absorption column densities are modest (best-fit log N_H =
20.5-23.5 cm^-2 in most cases). The absorption-corrected X-ray luminosities
are in the range of 6x10^42 - 2x10^45 erg s^-1. 20 objects are classified as
type 2 quasars based on X-ray luminosity and N_H. The optical faintness is
explained by a combination of redshifts (mostly z>1.0), strong dust
extinction, and in part a large dust-to-gas ratio.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Goldman symplectic form on the PSL(V)-Hitchin component | This article is the second of a pair of articles about the Goldman symplectic
form on the PSL(V )-Hitchin component. We show that any ideal triangulation on
a closed connected surface of genus at least 2, and any compatible bridge
system determine a symplectic trivialization of the tangent bundle to the
Hitchin component. Using this, we prove that a large class of flows defined in
the companion paper [SWZ17] are Hamiltonian. We also construct an explicit
collection of Hamiltonian vector fields on the Hitchin component that give a
symplectic basis at every point. These are used in the companion paper to
compute explicit global Darboux coordinates for the Hitchin component.
| 0 | 0 | 1 | 0 | 0 | 0 |
Early Results from TUS, the First Orbital Detector of Extreme Energy Cosmic Rays | TUS is the world's first orbital detector of extreme energy cosmic rays
(EECRs), which has been operating as part of the scientific payload of the
Lomonosov satellite since May 19, 2016. TUS employs the nocturnal atmosphere of
the Earth
to register ultraviolet (UV) fluorescence and Cherenkov radiation from
extensive air showers generated by EECRs as well as UV radiation from lightning
strikes and transient luminous events, micro-meteors and space debris. The
first months of its operation in orbit have demonstrated an unexpectedly rich
variety of UV radiation in the atmosphere. We briefly review the design of TUS
and present a few examples of events recorded in a mode dedicated to
registering EECRs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Network-based methods for outcome prediction in the "sample space" | In this thesis we present the novel semi-supervised network-based algorithm
P-Net, which is able to rank and classify patients with respect to a specific
phenotype or clinical outcome under study. The peculiar and innovative
characteristic of this method is that it builds a network of samples/patients,
where the nodes represent the samples and the edges are functional or genetic
relationships between individuals (e.g. similarity of expression profiles), to
predict the phenotype under study. In other words, it constructs the network in
the "sample space" and not in the "biomarker space" (where nodes represent
biomolecules (e.g. genes, proteins) and edges represent functional or genetic
relationships between nodes), as usual in state-of-the-art methods. To assess
the performance of P-Net, we apply it to three different publicly available
datasets of patients afflicted with a specific type of tumor: a pancreatic
cancer, a melanoma and an ovarian cancer dataset, using the data and following
the experimental set-up proposed in two recently published papers [Barter et
al., 2014, Winter et al., 2012]. We show that network-based methods in the
"sample space" can achieve results competitive with classical supervised
inductive systems. Moreover, the graph representation of the samples can be
easily visualized through networks and can be used to gain visual clues about
the relationships between samples, taking into account the phenotype associated
or predicted for each sample. To our knowledge, this is one of the first works
to propose graph-based algorithms working in the "sample space" of the
biomolecular profiles of the patients to predict their phenotype or outcome,
thus contributing to a novel research line in the framework of Network
Medicine.
| 1 | 0 | 0 | 1 | 0 | 0 |
The geometrical origins of some distributions and the complete concentration of measure phenomenon for mean-values of functionals | We naturally derive some important distributions, such as high-order normal
distributions, high-order exponential distributions and the Gamma
distribution, in a geometrical way. Further, we obtain the exact mean values of
integral-form functionals on balls of the space of continuous functions with
the $p$-norm, and show the complete concentration of measure phenomenon, which
means that a functional takes its average on a ball with probability 1; from
this we obtain a nonlinear exchange formula of expectation.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Signs in Elliptic Nets | We give a generalization of a theorem of Silverman and Stephens regarding the
signs in an elliptic divisibility sequence to the case of an elliptic net. We
also describe applications of this theorem to the study of the distribution of
signs in elliptic nets and to generating elliptic nets using the denominators
of linear combinations of points on elliptic curves.
| 0 | 0 | 1 | 0 | 0 | 0 |