title (string, 7 to 239 chars) | abstract (string, 7 to 2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Simulated JWST/NIRISS Transit Spectroscopy of Anticipated TESS Planets Compared to Select Discoveries from Space-Based and Ground-Based Surveys | The Transiting Exoplanet Survey Satellite (TESS) will embark in 2018 on a
2-year wide-field survey mission, discovering over a thousand terrestrial,
super-Earth and sub-Neptune-sized exoplanets potentially suitable for follow-up
observations using the James Webb Space Telescope (JWST). This work aims to
understand the suitability of anticipated TESS planet discoveries for
atmospheric characterization by JWST's Near InfraRed Imager and Slitless
Spectrograph (NIRISS) by employing a simulation tool to estimate the
signal-to-noise (S/N) achievable in transmission spectroscopy. We applied this
tool to Monte Carlo predictions of the TESS expected planet yield and then
compared the S/N for anticipated TESS discoveries to our estimates of S/N for
18 known exoplanets. We analyzed the sensitivity of our results to planetary
composition, cloud cover, and presence of an observational noise floor. We
found that several hundred anticipated TESS discoveries with radii from 1.5 to
2.5 times the Earth's radius will produce S/N higher than currently known
exoplanets in this radius regime, such as K2-3b or K2-3c. In the terrestrial
planet regime, we found that only a few anticipated TESS discoveries will
result in higher S/N than currently known exoplanets, such as the TRAPPIST-1
planets, GJ1132b, and LHS1140b. However, we emphasize that this outcome is
based upon Kepler-derived occurrence rates, and that co-planar compact
multi-planet systems (e.g., TRAPPIST-1) may be under-represented in the
predicted TESS planet yield. Finally, we apply our calculations to estimate the
required magnitude of a JWST follow-up program devoted to mapping the
transition region between hydrogen-dominated and high molecular weight
atmospheres. We find that a modest observing program of 60 to 100 hours
of charged JWST time can define the nature of that transition (e.g., step
function versus a power law).
| 0 | 1 | 0 | 0 | 0 | 0 |
The general linear 2-groupoid | We deal with the symmetries of a (2-term) graded vector space or bundle. Our
first theorem shows that they define a (strict) Lie 2-groupoid in a natural
way. Our second theorem explores the construction of nerves for Lie
2-categories, showing that it yields simplicial manifolds if the 2-cells are
invertible. Finally, our third and main theorem shows that smooth
pseudofunctors into our general linear 2-groupoid classify 2-term
representations up to homotopy of Lie groupoids.
| 0 | 0 | 1 | 0 | 0 | 0 |
DeepTFP: Mobile Time Series Data Analytics based Traffic Flow Prediction | Traffic flow prediction is an important research issue for avoiding traffic
congestion in transportation systems. Congestion can be avoided by knowing
traffic flow and then conducting transportation planning accordingly.
Achieving traffic flow prediction is challenging as the prediction is affected
by many complex factors such as inter-region traffic, vehicles' relations, and
sudden events. However, as the mobile data of vehicles has been widely
collected by sensor-embedded devices in transportation systems, it is possible
to predict the traffic flow by analysing mobile data. This study proposes a
deep learning based prediction algorithm, DeepTFP, to collectively predict the
traffic flow on every road of a city. This algorithm uses
three deep residual neural networks to model temporal closeness, period, and
trend properties of traffic flow. Each residual neural network consists of a
branch of residual convolutional units. DeepTFP aggregates the outputs of the
three residual neural networks to optimize the parameters of a time series
prediction model. Contrast experiments on mobile time series data from the
transportation system of England demonstrate that the proposed DeepTFP
outperforms the Long Short-Term Memory (LSTM) architecture based method in
prediction accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-equilibrium Optical Conductivity: General Theory and Application to Transient Phases | A non-equilibrium theory of optical conductivity of dirty-limit
superconductors and commensurate charge density waves is presented. We discuss
the current response to different experimentally relevant light-field probe
pulses and show that a single frequency definition of the optical conductivity
$\sigma(\omega)\equiv j(\omega)/E(\omega)$ is difficult to interpret outside the
adiabatic limit. We identify characteristic time domain signatures
distinguishing between superconducting, normal metal and charge density wave
states. We also suggest a route to directly address the instantaneous
superfluid stiffness of a superconductor by shaping the probe light field.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Momentum Distribution of Liquid $^4$He | We report high-resolution neutron Compton scattering measurements of liquid
$^4$He under saturated vapor pressure. There is excellent agreement between the
observed scattering and ab initio predictions of its lineshape. Quantum Monte
Carlo calculations predict that the Bose condensate fraction is zero in the
normal fluid, builds up rapidly just below the superfluid transition
temperature, and reaches a value of approximately $7.5\%$ below 1 K. We also
used model fit functions to obtain from the scattering data empirical estimates
for the average atomic kinetic energy and Bose condensate fraction. These
quantities are also in excellent agreement with ab initio calculations. The
convergence between the scattering data and Quantum Monte Carlo calculations is
strong evidence for a Bose broken symmetry in superfluid $^4$He.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semantic Code Repair using Neuro-Symbolic Transformation Networks | We study the problem of semantic code repair, which can be broadly defined as
automatically fixing non-syntactic bugs in source code. The majority of past
work in semantic code repair assumed access to unit tests against which
candidate repairs could be validated. In contrast, the goal here is to develop
a strong statistical model to accurately predict both bug locations and exact
fixes without access to information about the intended correct behavior of the
program. Achieving such a goal requires a robust contextual repair model, which
we train on a large corpus of real-world source code that has been augmented
with synthetically injected bugs. Our framework adopts a two-stage approach
where first a large set of repair candidates are generated by rule-based
processors, and then these candidates are scored by a statistical model using a
novel neural network architecture which we refer to as Share, Specialize, and
Compete. Specifically, the architecture (1) generates a shared encoding of the
source code using an RNN over the abstract syntax tree, (2) scores each
candidate repair using specialized network modules, and (3) then normalizes
these scores together so they can compete against one another in comparable
probability space. We evaluate our model on a real-world test set gathered from
GitHub containing four common categories of bugs. Our model is able to predict
the exact correct repair 41\% of the time with a single guess, compared to 13\%
accuracy for an attentional sequence-to-sequence model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hints on the gradual re-sizing of the torus in AGN by decomposing IRS/Spitzer spectra | Several authors have claimed that the less luminous active galactic nuclei
(AGN) are not capable of sustaining the dusty torus structure. Thus, a gradual
re-sizing of the torus is expected when the AGN luminosity decreases. Our aim
is to confront mid-infrared observations of local AGN of different luminosities
with this scenario. We decomposed ~100 IRS/Spitzer spectra of LLAGN and
powerful Seyferts in order to decontaminate the torus component from other
contributors. We have used the affinity propagation (AP) method to cluster the
data into five groups within the sample according to torus contribution to the
5-15 um range (Ctorus) and bolometric luminosity. The AP groups show a
progressively higher torus contribution and an increase of the bolometric
luminosity, from Group 1 (Ctorus ~0% and log(Lbol) ~41) up to Group 5
(Ctorus ~80% and log(Lbol) ~44). We have fitted the average spectra of each of
the AP groups to clumpy models. The torus is no longer present in Group 1,
supporting the disappearance at low-luminosities. We were able to fit the
average spectra for the torus component in Groups 3 (Ctorus~ 40% and log(Lbol)~
42.6), 4 (Ctorus~ 60% and log(Lbol)~ 43.7), and 5 to Clumpy torus models. We
did not obtain a good fit to Clumpy torus models for Group 2 (Ctorus ~18% and
log(Lbol) ~42). This might suggest a different configuration and/or composition
of the clouds for Group 2, which is consistent with a different gas content
seen in Groups 1, 2, and 3, according to the detections of H2 molecular lines.
Groups 3, 4, and 5 show a trend toward a decreasing torus width (which likely
implies a decrease of the geometrical covering factor), although we
cannot confirm it with the present data. Finally, Groups 3, 4, and 5 show an
increase of the outer radius of the torus at higher luminosities, consistent
with a re-sizing of the torus according to the AGN luminosity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Self-similar minimizers of a branched transport functional | We completely solve an irrigation problem from a Dirac mass to the
Lebesgue measure. The functional we consider is a two-dimensional analog of a
functional previously derived in the study of branched patterns in type-I
superconductors. The minimizer we obtain is a self-similar tree.
| 0 | 0 | 1 | 0 | 0 | 0 |
S-OHEM: Stratified Online Hard Example Mining for Object Detection | One of the major challenges in object detection is to propose detectors with
highly accurate localization of objects. The online sampling of high-loss
region proposals (hard examples) uses the multitask loss with equal weight
settings across all loss types (e.g., classification and localization, rigid and
non-rigid categories) and ignores the influence of different loss distributions
throughout the training process, which we find essential to the training
efficacy. In this paper, we present the Stratified Online Hard Example Mining
(S-OHEM) algorithm for training higher efficiency and accuracy detectors.
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling
technique, to choose the training examples according to this influence during
hard example mining, and thus enhance the performance of object detectors. We
show through systematic experiments that S-OHEM yields an average precision
(AP) improvement of 0.5% on rigid categories of PASCAL VOC 2007 for both the
IoU thresholds of 0.6 and 0.7. For KITTI 2012, the improvement is 1.6% at both
thresholds. Regarding the mean average precision (mAP), a relative increase of
0.3% and 0.5% (1% and 0.5%) is observed for VOC07 (KITTI12) using the same set
of IoU thresholds. Also, S-OHEM is easy to integrate with existing region-based
detectors and is capable of acting with post-recognition level regressors.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time | Dynamic topic modeling facilitates the identification of topical trends over
time in temporal collections of unstructured documents. We introduce a novel
unsupervised neural dynamic topic model, the Recurrent Neural
Network-Replicated Softmax Model (RNN-RSM), where the discovered topics at each
time influence the topic discovery in the subsequent time steps. We account for
the temporal ordering of documents by explicitly modeling a joint distribution
of latent topical dependencies over time, using distributional estimators with
temporal recurrent connections. Applying RNN-RSM to 19 years of articles on NLP
research, we demonstrate that, compared to state-of-the-art topic models, RNN-RSM
shows better generalization, topic interpretation, evolution and trends. We
also introduce a metric (SPAN) to quantify the capability of a dynamic
topic model to capture word evolution in topics over time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions | Embedding complex objects as vectors in low dimensional spaces is a
longstanding problem in machine learning. We propose in this work an extension
of that approach, which consists in embedding objects as elliptical probability
distributions, namely distributions whose densities have elliptical level sets.
We endow these measures with the 2-Wasserstein metric, with two important
benefits: (i) For such measures, the squared 2-Wasserstein metric has a closed
form, equal to a weighted sum of the squared Euclidean distance between means
and the squared Bures metric between covariance matrices. The latter is a
Riemannian metric between positive semi-definite matrices, which turns out to
be Euclidean on a suitable factor representation of such matrices, which is
valid on the entire geodesic between these matrices. (ii) The 2-Wasserstein
distance boils down to the usual Euclidean metric when comparing Diracs, and
therefore provides a natural framework to extend point embeddings. We show that
for these reasons Wasserstein elliptical embeddings are more intuitive and
yield tools that are better behaved numerically than the alternative choice of
Gaussian embeddings with the Kullback-Leibler divergence. In particular, and
unlike previous work based on the KL geometry, we learn elliptical
distributions that are not necessarily diagonal. We demonstrate the advantages
of elliptical embeddings by using them for visualization, to compute embeddings
of words, and to reflect entailment or hypernymy.
| 0 | 0 | 0 | 1 | 0 | 0 |
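The closed form mentioned in the abstract above (squared Euclidean distance between means plus the squared Bures metric between covariance matrices) is the standard Gaussian/elliptical 2-Wasserstein formula; it is written out here for reference, with symbols $\mathbf{m}_i$, $\Sigma_i$ chosen by the editor rather than taken from the paper:

```latex
W_2^2(\mu_1,\mu_2)
  = \|\mathbf{m}_1 - \mathbf{m}_2\|_2^2
  + \mathfrak{B}^2(\Sigma_1,\Sigma_2),
\qquad
\mathfrak{B}^2(\Sigma_1,\Sigma_2)
  = \operatorname{tr}\!\Big(\Sigma_1 + \Sigma_2
  - 2\big(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big),
```

where $\mu_i$ are elliptical measures with means $\mathbf{m}_i$ and scale matrices $\Sigma_i$ from the same elliptical family. When $\Sigma_1 = \Sigma_2 = 0$ (Diracs), only the squared Euclidean term survives, which is the point-embedding limit the abstract refers to.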
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines | This paper describes our participation in Task 5 track 2 of SemEval 2017 to
predict the sentiment of financial news headlines for a specific company on a
continuous scale between -1 and 1. We tackled the problem using a number of
approaches, utilising a Support Vector Regression (SVR) and a Bidirectional
Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the BLSTM
model over the SVR and came fourth in the track. We report a number of
different evaluations using a finance specific word embedding model and reflect
on the effects of using different evaluation metrics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bootstrapping for multivariate linear regression models | The multivariate linear regression model is an important tool for
investigating relationships between several response variables and several
predictor variables. The primary interest is in inference about the unknown
regression coefficient matrix. We propose multivariate bootstrap techniques as
a means for making inferences about the unknown regression coefficient matrix.
These bootstrapping techniques are extensions of those developed in Freedman
(1981), which are only appropriate for univariate responses. Extensions to the
multivariate linear regression model have previously been made without proof. We formalize this
extension and prove its validity. A real data example and two simulated data
examples which offer some finite sample verification of our theoretical results
are provided.
| 0 | 0 | 1 | 1 | 0 | 0 |
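A minimal sketch of the row-wise residual bootstrap for a multivariate linear regression, in the spirit of the Freedman (1981) extension that the abstract above formalizes. The simulated data, dimensions, and variable names are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multivariate regression Y = X B + E with q = 2 response variables.
n, p, q = 200, 3, 2
X = rng.normal(size=(n, p))
B_true = np.array([[1.0, -1.0], [0.5, 2.0], [0.0, 1.5]])
Y = X @ B_true + rng.normal(scale=0.3, size=(n, q))

# OLS estimate of the coefficient matrix and the residual matrix.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B_hat

# Residual bootstrap: resample residual *rows* jointly, so the dependence
# between responses is preserved, then refit on each pseudo-dataset.
n_boot = 500
boots = np.empty((n_boot, p, q))
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    Y_star = X @ B_hat + resid[idx]
    boots[b], *_ = np.linalg.lstsq(X, Y_star, rcond=None)

# Elementwise bootstrap standard errors for the coefficient matrix.
se = boots.std(axis=0)
print(B_hat.shape, se.shape)
```

Resampling whole residual rows (rather than each response independently) is what carries the method from the univariate to the multivariate setting.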
Long coherence times for edge spins | We show that in certain one-dimensional spin chains with open boundary
conditions, the edge spins retain memory of their initial state for very long
times. The long coherence times do not require disorder, only an ordered phase.
In the integrable Ising and XYZ chains, the presence of a strong zero mode
means the coherence time is infinite, even at infinite temperature. When Ising
is perturbed by interactions breaking the integrability, the coherence time
remains exponentially long in the perturbing couplings. We show that this is a
consequence of an edge "almost" strong zero mode that almost commutes with the
Hamiltonian. We compute this operator explicitly, allowing us to estimate
accurately the plateau value of the edge-spin autocorrelator.
| 0 | 1 | 1 | 0 | 0 | 0 |
Exploring light mediators with low-threshold direct detection experiments | We explore the potential of future cryogenic direct detection experiments to
determine the properties of the mediator that communicates the interactions
between dark matter and nuclei. Due to their low thresholds and large
exposures, experiments like CRESST-III, SuperCDMS SNOLAB and EDELWEISS-III will
have excellent capability to reconstruct mediator masses in the MeV range for a
large class of models. Combining the information from several experiments
further improves the parameter reconstruction, even when taking into account
additional nuisance parameters related to background uncertainties and the dark
matter velocity distribution. These observations may offer the intriguing
possibility of studying dark matter self-interactions with direct detection
experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
DR/DZ equivalence conjecture and tautological relations | In this paper we present a family of conjectural relations in the
tautological ring of the moduli spaces of stable curves which implies the
strong double ramification/Dubrovin-Zhang equivalence conjecture. Our
tautological relations have the form of an equality between two different
families of tautological classes, only one of which involves the double
ramification cycle. We prove that both families behave the same way upon
pullback and pushforward with respect to forgetting a marked point. We also
prove that our conjectural relations are true in genus $0$ and $1$ and also
when first pushed forward from $\overline{\mathcal{M}}_{g,n+m}$ to
$\overline{\mathcal{M}}_{g,n}$ and then restricted to $\mathcal{M}_{g,n}$, for
any $g,n,m\geq 0$. Finally we show that, for semisimple CohFTs, the DR/DZ
equivalence only depends on a subset of our relations, finite in each genus,
which we prove for $g\leq 2$. As an application we find a new formula for the
class $\lambda_g$ as a linear combination of dual trees intersected with kappa
and psi classes, and we check it for $g \leq 3$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Surface Networks | We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely Graph Neural Networks (GNNs) and their spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing their modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions --- this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Forward Flux Sampling Calculation of Homogeneous Nucleation Rates from Aqueous NaCl Solutions | We used molecular dynamics simulations and the path sampling technique known
as forward flux sampling to study homogeneous nucleation of NaCl crystals from
supersaturated aqueous solutions at 298 K and 1 bar. Nucleation rates were
obtained for a range of salt concentrations for the Joung-Cheatham NaCl force
field combined with the SPC/E water model. The calculated nucleation rates are
significantly lower than available experimental measurements. The estimates for
the nucleation rates in this work do not rely on classical nucleation theory,
but the pathways observed in the simulations suggest that the nucleation
process is better described by classical nucleation theory than an alternative
interpretation based on Ostwald's step rule, in contrast to some prior
simulations of related models. In addition to the size of the NaCl nucleus, we find
that the crystallinity of a nascent cluster plays an important role in the
nucleation process. Nuclei with high crystallinity were found to have higher
growth probability and longer lifetimes, possibly because they are less exposed
to hydration water.
| 0 | 1 | 0 | 0 | 0 | 0 |
Driving an Ornstein--Uhlenbeck Process to Desired First-Passage Time Statistics | First-passage time (FPT) of an Ornstein-Uhlenbeck (OU) process is of immense
interest in a variety of contexts. This paper considers an OU process with two
boundaries, one of which is absorbing while the other one could be either
reflecting or absorbing, and studies the control strategies that can lead to
desired FPT moments. Our analysis shows that the FPT distribution of an OU
process is scale invariant with respect to the drift parameter, i.e., the drift
parameter just controls the mean FPT and doesn't affect the shape of the
distribution. This allows one to independently control the mean and coefficient of
variation (CV) of the FPT. We show that increasing the threshold may
increase or decrease the CV of the FPT, depending upon whether or not one of the
thresholds is reflecting. We also explore the effect of control parameters on
the FPT distribution, and find parameters that minimize the distance between
the FPT distribution and a desired distribution.
| 0 | 0 | 1 | 0 | 0 | 0 |
The self-consistent Dyson equation and self-energy functionals: failure or new opportunities? | Perturbation theory using self-consistent Green's functions is one of the
most widely used approaches to study many-body effects in condensed matter. On
the basis of general considerations and by performing analytical calculations
for the specific example of the Hubbard atom, we discuss some key features of
this approach. We show that when the domain of the functionals that are used to
realize the map between the non-interacting and the interacting Green's
functions is properly defined, there exists a class of self-energy functionals
for which the self-consistent Dyson equation has only one solution, which is
the physical one. We also show that manipulation of the perturbative expansion
of the interacting Green's function may lead to a wrong self-energy as
functional of the interacting Green's function, at least for some regions of
the parameter space. These findings confirm and explain numerical results of
Kozik et al. for the widely used skeleton series of Luttinger and Ward [Phys.
Rev. Lett. 114, 156402]. Our study shows that it is important to distinguish
between the maps between sets of functions and the functionals that realize
those maps. We demonstrate that the self-consistent Green's functions approach
itself is not problematic, whereas the functionals that are widely used may
have a limited range of validity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Statistical Implications of the Revenue Transfer Methodology in the Affordable Care Act | The Affordable Care Act (ACA) includes a permanent revenue transfer
methodology which provides financial incentives to health insurance plans that
have higher than average actuarial risk. In this paper, we derive some
statistical implications of the revenue transfer methodology in the ACA. We
treat as random variables the revenue transfers between individual insurance
plans in a given marketplace, where each plan's revenue transfer amount is
measured as a percentage of the plan's total premium. We analyze the means and
variances of those random variables, and deduce from the zero sum nature of the
revenue transfers that there is no limit to the magnitude of revenue transfer
payments relative to plans' total premiums. Using data provided by the American
Academy of Actuaries and by the Centers for Medicare and Medicaid Services, we
obtain an explanation for the empirical phenomenon that revenue transfers are more
variable and can be substantially greater for insurance plans with smaller
market shares. We show that it is often the case that an insurer which has
decreasing market share will also have increased volatility in its revenue
transfers.
| 0 | 0 | 0 | 1 | 0 | 0 |
Chance-Constrained Combinatorial Optimization with a Probability Oracle and Its Application to Probabilistic Partial Set Covering | We investigate a class of chance-constrained combinatorial optimization
problems. Given a pre-specified risk level $\epsilon \in [0,1]$, the
chance-constrained program aims to find the minimum cost selection of a vector
of binary decisions $x$ such that a desirable event $\mathcal{B}(x)$ occurs
with probability at least $ 1-\epsilon$. In this paper, we assume that we have
an oracle that computes $\mathbb{P}(\mathcal{B}(x))$ exactly. Using this
oracle, we propose a general exact method for solving the chance-constrained
problem. In addition, we show that if the chance-constrained program is solved
approximately by a sampling-based approach, then the oracle can be used as a
tool for checking and fixing the feasibility of the optimal solution given by
this approach. We demonstrate the effectiveness of our proposed methods on a
variant of the probabilistic set covering problem (PSC), which admits an
efficient probability oracle. We give a compact mixed-integer program that
solves PSC optimally (without sampling) for a special case. For large-scale
instances for which the exact methods exhibit slow convergence, we propose a
sampling-based approach that exploits the special structure of PSC. In
particular, we introduce a new class of facet-defining inequalities for a
submodular substructure of PSC, and show that a sampling-based algorithm
coupled with the probability oracle solves the large-scale test instances
effectively.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal Input Design for Affine Model Discrimination with Applications in Intention-Aware Vehicles | This paper considers the optimal design of input signals for the purpose of
discriminating among a finite number of affine models with uncontrolled inputs
and noise. Each affine model represents a different system operating mode,
corresponding to unobserved intents of other drivers or robots, or to fault
types or attack strategies, etc. The input design problem aims to find optimal
separating/discriminating (controlled) inputs such that the output trajectories
of all the affine models are guaranteed to be distinguishable from each other,
despite uncertainty in the initial condition and uncontrolled inputs as well as
the presence of process and measurement noise. We propose a novel formulation
to solve this problem, with an emphasis on guarantees for model discrimination
and optimality, in contrast to a previously proposed conservative formulation
using robust optimization. This new formulation can be recast as a bilevel
optimization problem and further reformulated as a mixed-integer linear program
(MILP). Moreover, our fairly general problem setting allows the incorporation
of objectives and/or responsibilities among rational agents. For instance, each
driver has to obey traffic rules, while simultaneously optimizing for safety,
comfort and energy efficiency. Finally, we demonstrate the effectiveness of our
approach for identifying the intention of other vehicles in several driving
scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stable Unitary Integrators for the Numerical Implementation of Continuous Unitary Transformations | The technique of continuous unitary transformations has recently been used to
provide physical insight into a diverse array of quantum mechanical systems.
However, the question of how to best numerically implement the flow equations
has received little attention. The most immediately apparent approach, using
standard Runge-Kutta numerical integration algorithms, suffers from both severe
inefficiency due to stiffness and the loss of unitarity. After reviewing the
formalism of continuous unitary transformations and Wegner's original choice
for the infinitesimal generator of the flow, we present a number of approaches
to resolving these issues including a choice of generator which induces what we
call the "uniform tangent decay flow" and three numerical integrators
specifically designed to perform continuous unitary transformations efficiently
while preserving the unitarity of the flow. We conclude by applying one of the flow
algorithms to a simple calculation that visually demonstrates the many-body
localization transition.
| 1 | 1 | 0 | 0 | 0 | 0 |
Sparse and Smooth Prior for Bayesian Linear Regression with Application to ETEX Data | Sparsity of the solution of a linear regression model is a common
requirement, and many prior distributions have been designed for this purpose.
A combination of the sparsity requirement with smoothness of the solution is
also common in application, however, with considerably fewer existing prior
models. In this paper, we compare two prior structures, the Bayesian fused
lasso (BFL) and least-squares with adaptive prior covariance matrix (LS-APC).
Since only a variational solution was published for the latter, we derive a Gibbs
sampling algorithm for its inference and Bayesian model selection. The method
is designed for high dimensional problems, therefore, we discuss numerical
issues associated with evaluation of the posterior. In simulation, we show that
the LS-APC prior achieves results comparable to those of the BFL
for piecewise constant parameters and outperforms the BFL for parameters
of more general shapes. Another advantage of the LS-APC prior is revealed in a
real application, the estimation of the release profile of the European Tracer
Experiment (ETEX). Specifically, the LS-APC model provides more conservative
uncertainty bounds when the regressor matrix is not informative.
| 0 | 0 | 0 | 1 | 0 | 0 |
Deep Learning: Generalization Requires Deep Compositional Feature Space Design | Generalization error defines the discriminability and the representation
power of a deep model. In this work, we claim that feature space design using
deep compositional function plays a significant role in generalization along
with explicit and implicit regularizations. Our claims are established
through several image classification experiments. We show that the information
loss due to convolution and max pooling can be marginalized with the
compositional design, improving generalization performance. Also, we show
that learning rate decay acts as an implicit regularizer in deep model
training.
| 1 | 0 | 0 | 1 | 0 | 0 |
First detection of sign-reversed linear polarization from the forbidden [O I] 630.03 nm line | We report on the detection of linear polarization of the forbidden [O I]
630.03 nm spectral line. The observations were carried out in the broader
context of the determination of the solar oxygen abundance, an important
problem in astrophysics that still remains unresolved. We obtained
spectro-polarimetric data of the forbidden [O I] line at 630.03 nm as well as
other neighboring permitted lines with the Solar Optical Telescope of the
Hinode satellite. A novel averaging technique was used, yielding very high
signal-to-noise ratios in excess of $10^5$. We confirm that the linear
polarization is sign-reversed compared to permitted lines as a result of the
line being dominated by a magnetic dipole transition. Our observations open a
new window for solar oxygen abundance studies, offering an alternative method
to disentangle the Ni i blend from the [O i] line at 630.03 nm that has the
advantage of simple LTE formation physics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Certifying Some Distributional Robustness with Principled Adversarial Training | Neural networks are vulnerable to adversarial examples and researchers have
proposed many heuristic attack and defense mechanisms. We address this problem
through the principled lens of distributionally robust optimization, which
guarantees performance under adversarial input perturbations. By considering a
Lagrangian penalty formulation of perturbing the underlying data distribution
in a Wasserstein ball, we provide a training procedure that augments model
parameter updates with worst-case perturbations of training data. For smooth
losses, our procedure provably achieves moderate levels of robustness with
little computational or statistical cost relative to empirical risk
minimization. Furthermore, our statistical guarantees allow us to efficiently
certify robustness for the population loss. For imperceptible perturbations,
our method matches or outperforms heuristic approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
The minimal hidden computer needed to implement a visible computation | Master equations are commonly used to model the dynamics of physical systems.
Surprisingly, many deterministic maps $x \rightarrow f(x)$ cannot be
implemented by any master equation, even approximately. This raises the
question of how they arise in real-world systems like digital computers. We
show that any deterministic map over some "visible" states can be implemented
with a master equation--but only if additional "hidden" states are dynamically
coupled to those visible states. We also show that any master equation
implementing a given map can be decomposed into a sequence of "hidden"
timesteps, demarcated by changes in what transitions are allowed under the rate
matrix. Often there is a real-world cost for each additional hidden state, and
for each additional hidden timestep. We derive the associated "space/time"
tradeoff between the numbers of hidden states and of hidden timesteps needed to
implement any given $f(x)$.
| 1 | 1 | 0 | 0 | 0 | 0 |
A multi-task convolutional neural network for mega-city analysis using very high resolution satellite imagery and geospatial data | Mega-city analysis with very high resolution (VHR) satellite images has been
drawing increasing interest in the fields of city planning and social
investigation. It is known that accurate land-use, urban density, and
population distribution information is the key to mega-city monitoring and
environmental studies. Therefore, how to generate land-use, urban density, and
population distribution maps at a fine scale using VHR satellite images has
become a hot topic. Previous studies have focused solely on individual tasks
with elaborate hand-crafted features and have ignored the relationship between
different tasks. In this study, we aim to propose a universal framework which
can: 1) automatically learn the internal feature representation from the raw
image data; and 2) simultaneously produce fine-scale land-use, urban density,
and population distribution maps. For the first target, a deep convolutional
neural network (CNN) is applied to learn the hierarchical feature
representation from the raw image data. For the second target, a novel
CNN-based universal framework is proposed to process the VHR satellite images
and generate the land-use, urban density, and population distribution maps. To
the best of our knowledge, this is the first CNN-based mega-city analysis
method which can process a VHR remote sensing image with such a large data
volume. A VHR satellite image (1.2 m spatial resolution) of the center of Wuhan
covering an area of 2606 km2 was used to evaluate the proposed method. The
experimental results confirm that the proposed method can achieve a promising
accuracy for land-use, urban density, and population distribution maps.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exploiting Multi-layer Graph Factorization for Multi-attributed Graph Matching | Multi-attributed graph matching is a problem of finding correspondences
between two sets of data while considering their complex properties described
in multiple attributes. However, the information of multiple attributes is
likely to be oversimplified when combined into a single integrated attribute,
and this degrades the matching accuracy. For that reason, a
multi-layer graph structure-based algorithm has been proposed recently. It can
effectively avoid the problem by separating attributes into multiple layers.
Nonetheless, there are several remaining issues such as a scalability problem
caused by the huge matrix to describe the multi-layer structure and a
back-projection problem caused by the continuous relaxation of the quadratic
assignment problem. In this work, we propose a novel multi-attributed graph
matching algorithm based on the multi-layer graph factorization. We reformulate
the problem to be solved with several small matrices that are obtained by
factorizing the multi-layer structure. Then, we solve the problem using a
convex-concave relaxation procedure for the multi-layer structure. The proposed
algorithm exhibits better performance than state-of-the-art algorithms based on
the single-layer structure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Secure Search on the Cloud via Coresets and Sketches | \emph{Secure Search} is the problem of retrieving from a database table (or
any unsorted array) the records matching specified attributes, as in SQL SELECT
queries, but where the database and the query are encrypted. Secure search has
been the leading example for practical applications of Fully Homomorphic
Encryption (FHE) starting in Gentry's seminal work; however, to the best of our
knowledge all state-of-the-art secure search algorithms to date are realized by
a polynomial of degree $\Omega(m)$ for $m$ the number of records, which is
typically too slow in practice even for moderate size $m$.
In this work we present the first algorithm for secure search that is
realized by a polynomial of degree polynomial in $\log m$. We implemented our
algorithm in an open source library based on HELib implementation for the
Brakerski-Gentry-Vaikuntanthan's FHE scheme, and ran experiments on Amazon's
EC2 cloud. Our experiments show that we can retrieve the first match in a
database of millions of entries in less than an hour using a single machine;
the time reduced almost linearly with the number of machines.
Our result utilizes a new paradigm of employing coresets and sketches, which
are modern data summarization techniques common in computational geometry and
machine learning, to enhance the efficiency of homomorphic encryption. As a
central tool we design a novel sketch that returns the first positive entry in
a (not necessarily sparse) array; this sketch may be of independent interest.
| 1 | 0 | 0 | 0 | 0 | 0 |
LATTES: a novel detector concept for a gamma-ray experiment in the Southern hemisphere | The Large Array Telescope for Tracking Energetic Sources (LATTES) is a novel
concept for an array of hybrid EAS detectors, each composed of a Resistive
Plate Counter array coupled to a Water Cherenkov Detector, planned to cover
gamma rays from less than 100 GeV up to 100 TeV. This experiment, to be
installed at high altitude in South America, could cover the existing gap in
sensitivity between satellite and ground arrays.
The low energy threshold, large duty cycle, and wide field of view of LATTES
make it a powerful tool to detect transient phenomena and perform long-term
observations of variable sources. Moreover, given its characteristics, it would
be fully complementary to the planned Cherenkov Telescope Array (CTA), as it
would be able to issue alerts.
In this talk, a description of its main features and capabilities, as well as
results on its expected performance and sensitivity, will be presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
A general model for plane-based clustering with loss function | In this paper, we propose a general model for plane-based clustering. The
general model contains many existing plane-based clustering methods, e.g.,
k-plane clustering (kPC), proximal plane clustering (PPC), twin support vector
clustering (TWSVC) and its extensions. Under this general model, one may obtain
an appropriate clustering method for a specific purpose. The general model is a
procedure corresponding to an optimization problem that minimizes the total
loss of the samples. Therein, the loss of a sample derives from both
within-cluster and between-cluster terms. In theory, the
termination conditions are discussed, and we prove that the general model
terminates in a finite number of steps at a local or weak local optimal point.
Furthermore, based on this general model, we propose a plane-based clustering
method by introducing a new loss function to capture the data distribution
precisely. Experimental results on artificial and publicly available datasets
verify the effectiveness of the proposed method.
| 1 | 0 | 0 | 1 | 0 | 0 |
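The simplest member of the family covered by this general model is k-plane clustering (kPC). The following minimal Python sketch is illustrative only: the function name, the random initialization, and the eigen-decomposition plane fit are our assumptions, not the paper's implementation.

```python
import numpy as np

def k_plane_clustering(X, k, iters=50, seed=0):
    """Minimal kPC loop: alternately assign each point to its nearest
    plane and refit each cluster's plane {x : w.x + b = 0} with ||w|| = 1."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    W = rng.normal(size=(k, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    b = rng.normal(size=k)
    for _ in range(iters):
        for j in range(k):
            Xj = X[labels == j]
            if len(Xj) == 0:          # an empty cluster keeps its previous plane
                continue
            C = Xj - Xj.mean(axis=0)
            # best-fit plane normal: eigenvector of the scatter matrix with the
            # smallest eigenvalue (np.linalg.eigh sorts eigenvalues ascending)
            _, vecs = np.linalg.eigh(C.T @ C)
            W[j] = vecs[:, 0]
            b[j] = -W[j] @ Xj.mean(axis=0)
        new = np.argmin(np.abs(X @ W.T + b), axis=1)  # point-to-plane distances
        if np.array_equal(new, labels):
            break
        labels = new
    return labels, W, b
```

For points lying exactly on a plane, the fitted residuals $|w^\top x + b|$ vanish, matching the within-cluster loss term of the general model.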
Renormalization of quasiparticle band gap in doped two-dimensional materials from many-body calculations | Doped free carriers can substantially renormalize electronic self-energy and
quasiparticle band gaps of two-dimensional (2D) materials. However, it is still
challenging to quantitatively calculate this many-electron effect, particularly
at the low doping density that is most relevant to realistic experiments and
devices. Here we develop a first-principles-based effective-mass model within
the GW approximation and show a dramatic band gap renormalization of a few
hundred meV for typical 2D semiconductors. Moreover, we reveal the roles of
different many-electron interactions: The Coulomb-hole contribution is dominant
for low doping densities while the screened-exchange contribution is dominant
for high doping densities. Three prototypical 2D materials are studied by this
method: h-BN, MoS2, and black phosphorus, covering insulators to
semiconductors. In particular, anisotropic black phosphorus exhibits a
surprisingly large band gap renormalization because of its smaller density of
states, which enhances the screened-exchange interactions. Our work
demonstrates an efficient way to accurately calculate band gap renormalization
and provides quantitative understanding of doping-dependent many-electron
physics of general 2D semiconductors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hyperbolicity as an obstruction to smoothability for one-dimensional actions | Ghys and Sergiescu proved in the 1980s that Thompson's group $T$, and hence
$F$, admits actions by $C^{\infty}$ diffeomorphisms of the circle. They proved
that the standard actions of these groups are topologically conjugate to a
group of $C^\infty$ diffeomorphisms. Monod defined a family of groups of
piecewise projective homeomorphisms, and Lodha-Moore defined finitely
presentable groups of piecewise projective homeomorphisms. These groups are of
particular interest because they are nonamenable and contain no free subgroup.
In contrast to the result of Ghys-Sergiescu, we prove that the groups of Monod
and Lodha-Moore are not topologically conjugate to a group of $C^1$
diffeomorphisms.
Furthermore, we show that the group of Lodha-Moore has no nonabelian $C^1$
action on the interval. We also show that many of Monod's groups $H(A)$, for
instance when $A$ is such that $\mathsf{PSL}(2,A)$ contains a rational
homothety $x\mapsto \tfrac{p}{q}x$, do not admit a $C^1$ action on the
interval. The obstruction comes from the existence of hyperbolic fixed points
for $C^1$ actions. With slightly different techniques, we also show that some
groups of piecewise affine homeomorphisms of the interval or the circle are not
smoothable.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lasso ANOVA Decompositions for Matrix and Tensor Data | Consider the problem of estimating the entries of an unknown mean matrix or
tensor given a single noisy realization. In the matrix case, this problem can
be addressed by decomposing the mean matrix into a component that is additive
in the rows and columns, i.e.\ the additive ANOVA decomposition of the mean
matrix, plus a matrix of elementwise effects, and assuming that the elementwise
effects may be sparse. Accordingly, the mean matrix can be estimated by solving
a penalized regression problem, applying a lasso penalty to the elementwise
effects. Although solving this penalized regression problem is straightforward,
specifying appropriate values of the penalty parameters is not. Leveraging the
posterior mode interpretation of the penalized regression problem, moment-based
empirical Bayes estimators of the penalty parameters can be defined. Estimation
of the mean matrix using these moment-based empirical Bayes estimators
can be called LANOVA penalization, and the corresponding estimate of the mean
matrix can be called the LANOVA estimate. The empirical Bayes estimators are
shown to be consistent. Additionally, LANOVA penalization is extended to
accommodate sparsity of row and column effects and to estimate an unknown mean
tensor. The behavior of the LANOVA estimate is examined under misspecification
of the distribution of the elementwise effects, and LANOVA penalization is
applied to several datasets, including a matrix of microarray data, a three-way
tensor of fMRI data and a three-way tensor of wheat infection data.
| 0 | 0 | 0 | 1 | 0 | 0 |
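A minimal numerical sketch of the additive-plus-sparse decomposition described above. This is not the paper's LANOVA estimator: the penalty here is a fixed soft-threshold `lam` rather than the moment-based empirical Bayes choice, and the single-pass fit is our simplification.

```python
import numpy as np

def lanova_sketch(Y, lam):
    """One-pass additive-plus-sparse fit for a matrix Y: additive ANOVA main
    effects, then lasso-style soft-thresholding of the elementwise residuals."""
    mu = Y.mean()                                      # grand mean
    a = Y.mean(axis=1, keepdims=True) - mu             # row effects
    b = Y.mean(axis=0, keepdims=True) - mu             # column effects
    R = Y - (mu + a + b)                               # elementwise residuals
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse elementwise effects
    return mu + a + b + S
```

When the mean matrix is exactly additive in rows and columns, the residuals vanish and the fit reproduces the data for any threshold; a large threshold shrinks all elementwise effects to zero, leaving the purely additive ANOVA fit.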
NetSciEd: Network Science and Education for the Interconnected World | This short article presents a summary of the NetSciEd (Network Science and
Education) initiative that aims to address the need for curricula, resources,
accessible materials, and tools for introducing K-12 students and the general
public to the concept of networks, a crucial framework in understanding
complexity. NetSciEd activities include (1) the NetSci High educational
outreach program (since 2010), which connects high school students and their
teachers with regional university research labs and provides them with the
opportunity to work on network science research projects; (2) the NetSciEd
symposium series (since 2012), which brings network science researchers and
educators together to discuss how network science can help and be integrated
into formal and informal education; and (3) the Network Literacy: Essential
Concepts and Core Ideas booklet (since 2014), which was created collaboratively
and subsequently translated into 18 languages by an extensive group of network
science researchers and educators worldwide.
| 1 | 1 | 0 | 0 | 0 | 0 |
A cup product lemma for continuous plurisubharmonic functions | A version of Gromov's cup product lemma in which one factor is the (1,0)-part
of the differential of a continuous plurisubharmonic function is obtained. As
an application, it is shown that a connected noncompact complete Kaehler
manifold that has exactly one end and admits a continuous plurisubharmonic
function that is strictly plurisubharmonic along some germ of a 2-dimensional
complex analytic set at some point has the Bochner-Hartogs property; that is,
the first compactly supported cohomology with values in the structure sheaf
vanishes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Near-perfect spin filtering and negative differential resistance in an Fe(II)S complex | Density functional theory and nonequilibrium Green's function calculations
have been used to explore spin-resolved transport through the high-spin state
of an iron(II) sulfur single-molecule magnet. Our results show that this
molecule exhibits near-perfect spin filtering, where the spin-filtering
efficiency is above 99%, as well as significant negative differential
resistance centered at a low bias voltage. The rise in the spin-up conductivity
up to the bias voltage of 0.4 V is dominated by a conductive lowest unoccupied
molecular orbital, and this is accompanied by a slight increase in the magnetic
moment of the Fe atom. The subsequent drop in the spin-up conductivity is
because the conductive channel moves to the highest occupied molecular orbital
which has a lower conductance contribution. This is accompanied by a drop in
the magnetic moment of the Fe atom. These two exceptional properties, and the
fact that the onset of negative differential resistance occurs at low bias
voltage, suggest the potential of the molecule in nanoelectronic and
nanospintronic applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Search for magnetic inelastic dark matter with XENON100 | We present the first search for dark matter-induced delayed coincidence
signals in a dual-phase xenon time projection chamber, using the 224.6 live
days of the XENON100 science run II. This very distinct signature is predicted
in the framework of magnetic inelastic dark matter which has been proposed to
reconcile the modulation signal reported by the DAMA/LIBRA collaboration with
the null results from other direct detection experiments. No candidate event
has been found in the region of interest and upper limits on the WIMP's
magnetic dipole moment are derived. The scenarios proposed to explain the
DAMA/LIBRA modulation signal by magnetic inelastic dark matter interactions of
WIMPs with masses of 58.0 GeV/c$^2$ and 122.7 GeV/c$^2$ are excluded at 3.3
$\sigma$ and 9.3 $\sigma$, respectively.
| 0 | 1 | 0 | 0 | 0 | 0 |
Structural, elastic, electronic, and bonding properties of intermetallic Nb3Pt and Nb3Os compounds: a DFT study | Theoretical investigation of structural, elastic, electronic and bonding
properties of A-15 Nb-based intermetallic compounds Nb3B (B = Pt, Os) have been
performed using first principles calculations based on the density functional
theory (DFT). Optimized cell parameters are found to be in good agreement with
available experimental and theoretical results. The elastic constants at zero
pressure and temperature are calculated and the anisotropic behaviors of the
compounds are studied. Both the compounds are mechanically stable and ductile
in nature. Other elastic properties, such as Pugh's ratio, Cauchy pressure, and
the machinability index, are derived for the first time. Nb3Os is expected to have
good lubricating properties compared to Nb3Pt. The electronic band structure
and energy density of states (DOS) have been studied with and without
spin-orbit coupling (SOC). The band structures of both the compounds are spin
symmetric. Electronic band structure and DOS reveal that both the compounds are
metallic and that the conductivity mainly arises from the Nb 4d states. The Fermi
surface features have been studied for the first time. The Fermi surfaces of
Nb3B contain both hole- and electron-like sheets which change as one replaces
Pt with Os. The electronic charge density distribution shows that Nb3Pt and
Nb3Os both have a mixture of ionic and covalent bonding. The charge transfer
between atomic species in these compounds has been explained by the Mulliken
bond population analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
Clustering and Model Selection via Penalized Likelihood for Different-sized Categorical Data Vectors | In this study, we consider unsupervised clustering of categorical vectors
that can be of different sizes, using mixture models. We use likelihood maximization to
estimate the parameters of the underlying mixture model and a penalization
technique to select the number of mixture components. Regardless of the true
distribution that generated the data, we show that an explicit penalty, known
up to a multiplicative constant, leads to a non-asymptotic oracle inequality
with the Kullback-Leibler divergence on the two sides of the inequality. This
theoretical result is illustrated by a document clustering application. To this
aim, a novel robust expectation-maximization algorithm is proposed to estimate
the mixture parameters that best represent the different topics. Slope
heuristics are used to calibrate the penalty and to select a number of
clusters.
| 0 | 0 | 1 | 1 | 0 | 0 |
Topology reveals universal features for network comparison | The topology of any complex system is key to understanding its structure and
function. Fundamentally, algebraic topology guarantees that any system
represented by a network can be understood through its closed paths. The length
of each path provides a notion of scale, which is vitally important in
characterizing dominant modes of system behavior. Here, by combining topology
with scale, we prove the existence of universal features which reveal the
dominant scales of any network. We use these features to compare several
canonical network types in the context of a social media discussion which
evolves through the sharing of rumors, leaks and other news. Our analysis
enables for the first time a universal understanding of the balance between
loops and tree-like structure across network scales, and an assessment of how
this balance interacts with the spreading of information online. Crucially, our
results allow networks to be quantified and compared in a purely model-free way
that is theoretically sound, fully automated, and inherently scalable.
| 1 | 0 | 1 | 1 | 0 | 0 |
Gated Recurrent Networks for Seizure Detection | Recurrent Neural Networks (RNNs) with sophisticated units that implement a
gating mechanism have emerged as a powerful technique for modeling sequential
signals such as speech or electroencephalography (EEG). The latter is the focus
of this paper. A significant big data resource, known as the TUH EEG Corpus
(TUEEG), has recently become available for EEG research, creating a unique
opportunity to evaluate these recurrent units on the task of seizure detection.
In this study, we compare two types of recurrent units: long short-term memory
units (LSTM) and gated recurrent units (GRU). These are evaluated using a
state-of-the-art hybrid architecture that integrates Convolutional Neural Networks
(CNNs) with RNNs. We also investigate a variety of initialization methods and
show that initialization is crucial since poorly initialized networks cannot be
trained. Furthermore, we explore regularization of these convolutional gated
recurrent networks to address the problem of overfitting. Our experiments
revealed that convolutional LSTM networks can achieve significantly better
performance than convolutional GRU networks. The convolutional LSTM
architecture with proper initialization and regularization delivers 30%
sensitivity at 6 false alarms per 24 hours.
| 0 | 0 | 0 | 1 | 0 | 0 |
Non-convex Conditional Gradient Sliding | We investigate a projection-free method, namely conditional gradient sliding
(CGS), on batched, stochastic, and finite-sum non-convex problems. CGS is a
smart combination of Nesterov's accelerated gradient method and the Frank-Wolfe
(FW) method, and outperforms FW in the convex setting by saving gradient
computations. However, the study of CGS in the non-convex setting is limited.
In this paper, we propose non-convex conditional gradient sliding (NCGS),
which surpasses the non-convex Frank-Wolfe method in the batched, stochastic,
and finite-sum settings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multinomial Sum Formulas of Multiple Zeta Values | For a pair of positive integers $n,k$ with $n\geq 2$, in this paper we prove
that $$ \sum_{r=1}^k\sum_{|\boldsymbol\alpha|=k}{k\choose\boldsymbol\alpha}
\zeta(n\boldsymbol\alpha)=\zeta(n)^k =\sum^k_{r=1}\sum_{|\boldsymbol\alpha|=k}
{k\choose\boldsymbol\alpha}(-1)^{k-r}\zeta^\star(n\boldsymbol\alpha), $$ where
$\boldsymbol\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_r)$ is an $r$-tuple of positive
integers. Moreover, we give an application to combinatorics and obtain the
following identity: $$ \sum^{2k}_{r=1}r!{2k\brace
r}=\sum^k_{p=1}\sum^k_{q=1}{k\brace p}{k\brace q} p!q!D(p,q), $$ where
${k\brace p}$ denotes the Stirling numbers of the second kind and $D(p,q)$ the
Delannoy numbers.
| 0 | 0 | 1 | 0 | 0 | 0 |
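The combinatorial identity above can be checked numerically. The following Python snippet, using the standard recurrences for the Stirling numbers of the second kind and the Delannoy numbers, is our illustrative verification and not part of the paper.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind, {n brace k}."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def delannoy(p, q):
    """Delannoy numbers D(p, q)."""
    if p == 0 or q == 0:
        return 1
    return delannoy(p - 1, q) + delannoy(p, q - 1) + delannoy(p - 1, q - 1)

def lhs(k):
    # sum_{r=1}^{2k} r! {2k brace r}: ordered set partitions of a 2k-set
    return sum(factorial(r) * stirling2(2 * k, r) for r in range(1, 2 * k + 1))

def rhs(k):
    # sum_{p,q=1}^{k} {k brace p}{k brace q} p! q! D(p, q)
    return sum(stirling2(k, p) * stirling2(k, q) * factorial(p) * factorial(q)
               * delannoy(p, q)
               for p in range(1, k + 1) for q in range(1, k + 1))

for k in range(1, 8):
    assert lhs(k) == rhs(k)
```

For instance, at $k=1$ both sides equal $3$, and at $k=2$ both sides equal $75$.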
Copolar convexity | We introduce a new operation, copolar addition, on unbounded convex subsets
of the positive orthant of real Euclidean space and establish convexity of the
covolumes of the corresponding convex combinations. The proof is based on a
technique of geodesics of plurisubharmonic functions. As an application, we
show that there are no relative extremal functions inside a non-constant
geodesic curve between two toric relative extremal functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Go with the Flow: Compositional Abstractions for Concurrent Data Structures (Extended Version) | Concurrent separation logics have helped to significantly simplify
correctness proofs for concurrent data structures. However, a recurring problem
in such proofs is that data structure abstractions that work well in the
sequential setting are much harder to reason about in a concurrent setting due
to complex sharing and overlays. To solve this problem, we propose a novel
approach to abstracting regions in the heap by encoding the data structure
invariant into a local condition on each individual node. This condition may
depend on a quantity associated with the node that is computed as a fixpoint
over the entire heap graph. We refer to this quantity as a flow. Flows can
encode both structural properties of the heap (e.g. the reachable nodes from
the root form a tree) as well as data invariants (e.g. sortedness). We then
introduce the notion of a flow interface, which expresses the relies and
guarantees that a heap region imposes on its context to maintain the local flow
invariant with respect to the global heap. Our main technical result is that
this notion leads to a new semantic model of separation logic. In this model,
flow interfaces provide a general abstraction mechanism for describing complex
data structures. This abstraction mechanism admits proof rules that generalize
over a wide variety of data structures. To demonstrate the versatility of our
approach, we show how to extend the logic RGSep with flow interfaces. We have
used this new logic to prove linearizability and memory safety of nontrivial
concurrent data structures. In particular, we obtain parametric linearizability
proofs for concurrent dictionary algorithms that abstract from the details of
the underlying data structure representation. These proofs cannot be easily
expressed using the abstraction mechanisms provided by existing separation
logics.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Liouville Theorem for Mean Curvature Flow | Ancient solutions arise in the study of parabolic blow-ups. If we can
categorize ancient solutions, we can better understand blow-up limits. Based on
an argument of Giga and Kohn, we give a Liouville-type theorem restricting
ancient, type-I, non-collapsing two-dimensional mean curvature flows to either
spheres or cylinders.
| 0 | 0 | 1 | 0 | 0 | 0 |
FeSe(en)0.3 - Separated FeSe layers with stripe-type crystal structure by intercalation of neutral spacer molecules | Solvothermal intercalation of ethylenediamine molecules into FeSe separates
the layers by 1078 pm and creates a different stacking. FeSe(en)0.3 is not
superconducting although each layer exhibits the stripe-type crystal structure
and the Fermi surface topology of superconducting FeSe. FeSe(en)0.3 requires
electron-doping for high-Tc similar to monolayers of FeSe@SrTiO3, whose much
higher Tc may arise from the proximity of the oxide surface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Coexistence of quantum and classical flows in quantum turbulence in the $T=0$ limit | Tangles of quantized vortex line of initial density ${\cal L}(0) \sim 6\times
10^3$\,cm$^{-2}$ and variable amplitude of fluctuations of flow velocity $U(0)$
at the largest length scale were generated in superfluid $^4$He at $T=0.17$\,K,
and their free decay ${\cal L}(t)$ was measured. If $U(0)$ is small, the excess
random component of vortex line length first decays as ${\cal L} \propto
t^{-1}$ until it becomes comparable with the structured component responsible
for the classical velocity field, and the decay changes to ${\cal L} \propto
t^{-3/2}$. The latter regime always ultimately prevails, provided the classical
description of $U$ holds. A quantitative model of coexisting cascades of
quantum and classical energies describes all regimes of the decay.
| 0 | 1 | 0 | 0 | 0 | 0 |
Four-dimensional Lens Space Index from Two-dimensional Chiral Algebra | We study the supersymmetric partition function on $S^1 \times L(r, 1)$, or
the lens space index of four-dimensional $\mathcal{N}=2$ superconformal field
theories and their connection to two-dimensional chiral algebras. We primarily
focus on free theories as well as Argyres-Douglas theories of type $(A_1, A_k)$
and $(A_1, D_k)$. We observe that in specific limits, the lens space index is
reproduced in terms of the (refined) character of an appropriately twisted
module of the associated two-dimensional chiral algebra or a generalized vertex
operator algebra. The particular twisted module is determined by the choice of
discrete holonomies for the flavor symmetry in four-dimensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lions' formula for RKHSs of real harmonic functions on Lipschitz domains | Let $\Omega$ be a bounded Lipschitz domain of $\mathbb{R}^{d}$. The purpose
of this paper is to establish Lions' formula for reproducing kernel Hilbert
spaces $\mathcal H^s(\Omega)$ of real harmonic functions that are elements of the usual
Sobolev space $H^s(\Omega)$ for $s\geq 0.$ To this end, we provide a functional
characterization of $\mathcal H^s(\Omega)$ via some new families of positive
self-adjoint operators, describe their trace data and discuss the values of $s$
for which they are RKHSs. Also a construction of an orthonormal basis of
$\mathcal H^s(\Omega)$ is established.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimization and Performance of Bifacial Solar Modules: A Global Perspective | With the rapidly growing interest in bifacial photovoltaics (PV), a worldwide
map of their potential performance can help assess and accelerate the global
deployment of this emerging technology. However, the existing literature only
highlights optimized bifacial PV for a few geographic locations or develops
worldwide performance maps for very specific configurations, such as the
vertical installation. It is still difficult to translate these location- and
configuration-specific conclusions to a general optimized performance of this
technology. In this paper, we present a global study and optimization of
bifacial solar modules using a rigorous and comprehensive modeling framework.
Our results demonstrate that with a low albedo of 0.25, the bifacial gain of
ground-mounted bifacial modules is less than 10% worldwide. However, increasing
the albedo to 0.5 and elevating modules 1 m above the ground can boost the
bifacial gain to 30%. Moreover, we derive a set of empirical design rules that
optimize bifacial solar modules across the world and provide the groundwork
for rapid assessment of the location-specific performance. We find
that ground-mounted, vertical, east-west-facing bifacial modules will
outperform their south-north-facing, optimally tilted counterparts by up to 15%
below the latitude of 30 degrees, for an albedo of 0.5. The relative energy
output is the reverse of this in latitudes above 30 degrees. A detailed and
systematic comparison with experimental data from Asia, Europe, and North
America validates the model presented in this paper. An online simulation tool
(this https URL) based on the model developed in this paper is
also available for a user to predict and optimize bifacial modules in any
arbitrary location across the globe.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wave propagation modelling in various microearthquake environments using a spectral-element method | Simulation of wave propagation in a microearthquake environment is often
challenging due to small-scale structural and material heterogeneities. We
simulate wave propagation in three different real microearthquake environments
using a spectral-element method. In the first example, we compute the full
wavefield in 2D and 3D models of an underground ore mine, namely the Pyhaesalmi
mine in Finland. In the second example, we simulate wave propagation in a
homogeneous velocity model including the actual topography of an unstable rock
slope at Aaknes in western Norway. Finally, we compute the full wavefield for a
weakly anisotropic cylindrical sample at laboratory scale, which was used for
an acoustic emission experiment under triaxial loading. We investigate the
characteristic features of wave propagation in those models and compare
synthetic waveforms with observed waveforms wherever possible. We illustrate
the challenges associated with the spectral-element simulation in those models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast Snapshottable Concurrent Braun Heaps | This paper proposes a new concurrent heap algorithm, based on a stateless
shape property, which efficiently maintains balance during insert and removeMin
operations implemented with hand-over-hand locking. It also provides a O(1)
linearizable snapshot operation based on lazy copy-on-write semantics. Such
snapshots can be used to provide consistent views of the heap during iteration,
as well as to make speculative updates (which can later be dropped).
The simplicity of the algorithm allows it to be easily proven correct, and
the choice of shape property provides priority queue performance which is
competitive with highly optimized skiplist implementations (and has stronger
bounds on worst-case time complexity).
A Scala reference implementation is provided.
| 1 | 0 | 0 | 0 | 0 | 0 |
GuideR: a guided separate-and-conquer rule learning in classification, regression, and survival settings | This article presents GuideR, a user-guided rule induction algorithm, which
overcomes the largest limitation of existing methods: the inability to
incorporate the user's preferences or domain knowledge into the rule
learning process. Automatic selection of attributes and attribute ranges often
leads to the situation in which resulting rules do not contain interesting
information. We propose an induction algorithm which takes into account user's
requirements. Our method uses the sequential covering approach and is suitable
for classification, regression, and survival analysis problems. The
effectiveness of the algorithm in all these tasks has been verified
experimentally, confirming guided rule induction to be a powerful data analysis
tool.
| 0 | 0 | 0 | 1 | 0 | 0 |
Frequency analysis and the representation of slowly diffusing planetary solutions | Over short time intervals planetary ephemerides have been traditionally
represented in analytical form as finite sums of periodic terms or sums of
Poisson terms that are periodic terms with polynomial amplitudes. Nevertheless,
this representation is not well adapted for the evolution of the planetary
orbits in the solar system over millions of years, as they exhibit drifts in
their main frequencies due to the chaotic nature of their dynamics. The aim of
the present paper is to develop a numerical algorithm for slowly diffusing
solutions of a perturbed integrable Hamiltonian system that will apply to the
representation of the chaotic planetary motions with varying frequencies. By
simple analytical considerations, we first argue that it is possible to recover
exactly a single varying frequency. Then, a function basis involving
time-dependent fundamental frequencies is formulated in a semi-analytical way.
Finally, starting from a numerical solution, a recursive algorithm is used to
numerically decompose the solution on the significant elements of the function
basis. Simple examples show that this algorithm can be used to give compact
representations of different types of slowly diffusing solutions. As a test
example, we show how this algorithm can be successfully applied to obtain a
very compact approximation of the La2004 solution of the orbital motion of the
Earth over 40 Myr ([-35Myr,5Myr]). This example has been chosen as this
solution is widely used for the reconstruction of the climates of the past.
| 0 | 1 | 0 | 0 | 0 | 0 |
Geometric clustering in normed planes | Given two sets of points $A$ and $B$ in a normed plane, we prove that there
are two linearly separable sets $A'$ and $B'$ such that $\mathrm{diam}(A')\leq
\mathrm{diam}(A)$, $\mathrm{diam}(B')\leq \mathrm{diam}(B)$, and $A'\cup
B'=A\cup B.$ This extends a result for the Euclidean distance to symmetric
convex distance functions. As a consequence, some Euclidean $k$-clustering
algorithms are adapted to normed planes, for instance, those that minimize the
maximum, the sum, or the sum of squares of the $k$ cluster diameters. The
2-clustering problem in which two different bounds are imposed on the diameters
is also solved. The Hershberger-Suri data structure for managing ball hulls can
be useful in this context.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spectrum Sharing for LTE-A Network in TV White Space | Rural areas in developing countries are predominantly devoid of Internet
access as it is not viable for operators to provide broadband service in these
areas. To solve this problem, we propose a middle mile Long Term Evolution
Advanced (LTE-A) network operating in TV white space to connect villages to an
optical Point of Presence (PoP) located in the vicinity of a rural area. We
study the problem of spectrum sharing for the middle mile networks deployed by
multiple operators. A graph theory based Fairness Constrained Channel
Allocation (FCCA) algorithm is proposed, employing Carrier Aggregation (CA) and
Listen Before Talk (LBT) features of LTE-A. We perform extensive system level
simulations to demonstrate that FCCA not only increases spectral efficiency but
also improves system fairness.
| 1 | 0 | 0 | 0 | 0 | 0 |
Instantons for 4-manifolds with periodic ends and an obstruction to embeddings of 3-manifolds | We construct an obstruction to the existence of embeddings of a homology
$3$-sphere into a homology $S^3\times S^1$ under some cohomological condition.
The obstruction is defined as an element in the filtered version of the
instanton Floer cohomology due to R. Fintushel and R. Stern. We make use of the
$\mathbb{Z}$-fold covering space of homology $S^3\times S^1$ and the instantons
on it.
| 0 | 0 | 1 | 0 | 0 | 0 |
Laplacian networks: growth, local symmetry and shape optimization | Inspired by river networks and other structures formed by Laplacian growth,
we use the Loewner equation to investigate the growth of a network of thin
fingers in a diffusion field. We first review previous contributions to
illustrate how this formalism reduces the network's expansion to three rules,
which respectively govern the velocity, the direction, and the nucleation of
its growing branches. This framework allows us to establish the mathematical
equivalence between three formulations of the direction rule, namely geodesic
growth, growth that maintains local symmetry and growth that maximizes flux
into tips for a given amount of growth. Surprisingly, we find that this growth
rule may result in a network different from the static configuration that
optimizes flux into tips.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dropping Convexity for More Efficient and Scalable Online Multiview Learning | Multiview representation learning is very popular for latent factor analysis.
It naturally arises in many data analysis, machine learning, and information
retrieval applications to model dependent structures among multiple data
sources. For computational convenience, existing approaches usually formulate
the multiview representation learning as convex optimization problems, where
global optima can be obtained by certain algorithms in polynomial time.
However, many pieces of evidence have corroborated that heuristic nonconvex
approaches also have good empirical computational performance and convergence
to the global optima, although there is a lack of theoretical justification.
Such a gap between theory and practice motivates us to study a nonconvex
formulation for multiview representation learning, which can be efficiently
solved by a simple stochastic gradient descent (SGD) algorithm. We first
illustrate the geometry of the nonconvex formulation; then, we establish
asymptotic global rates of convergence to the global optima by diffusion
approximations. Numerical experiments are provided to support our theory.
| 0 | 0 | 1 | 1 | 0 | 0 |
Automatic Vector-based Road Structure Mapping Using Multi-beam LiDAR | In this paper, we studied a SLAM method for vector-based road structure
mapping using multi-beam LiDAR. We propose to use the polyline as the primary
mapping element instead of grid cell or point cloud, because the vector-based
representation is precise and lightweight, and it can directly generate
vector-based High-Definition (HD) driving map as demanded by autonomous driving
systems. We explored: 1) the extraction and vectorization of road structures
based on local probabilistic fusion; 2) efficient vector-based matching between
frames of road structures; and 3) loop closure and optimization based on the
pose graph. In this study, we took a specific road structure, the road
boundary, as an example. We applied the proposed matching method in three
different scenes and achieved an average absolute matching error of 0.07. We
further applied the mapping system to an urban road with a length of 860
meters and achieved an average global accuracy of 0.466 m without the help of
high precision GPS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Schwarzian derivatives, projective structures, and the Weil-Petersson gradient flow for renormalized volume | To a complex projective structure $\Sigma$ on a surface, Thurston associates
a locally convex pleated surface. We derive bounds on the geometry of both in
terms of the norms $\|\phi_\Sigma\|_\infty$ and $\|\phi_\Sigma\|_2$ of the
quadratic differential $\phi_\Sigma$ of $\Sigma$ given by the Schwarzian
derivative of the associated locally univalent map. We show that these give a
unifying approach that generalizes a number of important, well known results
for convex cocompact hyperbolic structures on 3-manifolds, including bounds on
the Lipschitz constant for the nearest-point retraction and the length of the
bending lamination. We then use these bounds to begin a study of the
Weil-Petersson gradient flow of renormalized volume on the space $CC(N)$ of
convex cocompact hyperbolic structures on a compact manifold $N$ with
incompressible boundary, leading to a proof of the conjecture that the
renormalized volume has infimum given by one-half the simplicial volume of
$DN$, the double of $N$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Deep Network Model for Paraphrase Detection in Short Text Messages | This paper is concerned with paraphrase detection. The ability to detect
similar sentences written in natural language is crucial for several
applications, such as text mining, text summarization, plagiarism detection,
authorship authentication and question answering. Given two sentences, the
objective is to detect whether they are semantically identical. An important
insight from this work is that existing paraphrase systems perform well when
applied on clean texts, but they do not necessarily deliver good performance
against noisy texts. Challenges with paraphrase detection on user generated
short texts, such as Twitter, include language irregularity and noise. To cope
with these challenges, we propose a novel deep neural network-based approach
that relies on coarse-grained sentence modeling using a convolutional neural
network and a long short-term memory model, combined with a specific
fine-grained word-level similarity matching model. Our experimental results
show that the proposed approach outperforms existing state-of-the-art
approaches on user-generated noisy social media data, such as Twitter texts,
and achieves highly competitive performance on a cleaner corpus.
| 1 | 0 | 0 | 0 | 0 | 0 |
Organic-inorganic Copper(II)-based Material: a Low-Toxic, Highly Stable Light Absorber beyond Organolead Perovskites | Lead halide perovskite solar cells have recently emerged as a very promising
photovoltaic technology due to their excellent power conversion efficiencies;
however, the toxicity of lead and the poor stability of perovskite materials
remain two main challenges that need to be addressed. Here, for the first time,
we report a lead-free, highly stable C6H4NH2CuBr2I compound. The C6H4NH2CuBr2I
films exhibit extraordinary hydrophobic behavior with a contact angle of
approximately 90 degrees, and their X-ray diffraction patterns remain unchanged
even after four hours of water immersion. The UV-Vis absorption spectrum shows
that the C6H4NH2CuBr2I compound has excellent optical absorption over the
entire visible spectrum. We applied this copper-based light absorber in a
printable mesoscopic solar cell as an initial trial and achieved a power conversion
efficiency of 0.5%. Our study represents an alternative pathway to develop
low-toxic and highly stable organic-inorganic hybrid materials for photovoltaic
application.
| 0 | 1 | 0 | 0 | 0 | 0 |
Acyclic cluster algebras, reflection groups, and curves on a punctured disc | We establish a bijective correspondence between certain non-self-intersecting
curves in an $n$-punctured disc and positive ${\mathbf c}$-vectors of acyclic
cluster algebras whose quivers have multiple arrows between every pair of
vertices. As a corollary, we obtain a proof of a conjecture by K.-H. Lee and K.
Lee (arXiv:1703.09113) on the combinatorial description of real Schur roots for
acyclic quivers with multiple arrows, and give a combinatorial characterization
of seeds in terms of curves in an $n$-punctured disc.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inferring Structural Characteristics of Networks with Strong and Weak Ties from Fixed-Choice Surveys | Knowing the structure of an offline social network facilitates a variety of
analyses, including studying the rate at which infectious diseases may spread
and identifying a subset of actors to immunize in order to reduce, as much as
possible, the rate of spread. Offline social network topologies are typically
estimated by surveying actors and asking them to list their neighbours. While
identifying close friends and family (i.e., strong ties) can typically be done
reliably, listing all of one's acquaintances (i.e., weak ties) is subject to
error due to respondent fatigue. This issue is commonly circumvented through
the use of so-called "fixed choice" surveys where respondents are asked to name
a fixed, small number of their weak ties (e.g., two or ten). Of course, the
resulting crude observed network will omit many ties, and using this crude
network to infer properties of the network, such as its degree distribution or
clustering coefficient, will lead to biased estimates. This paper develops
estimators, based on the method of moments, for a number of network
characteristics including those related to the first and second moments of the
degree distribution as well as the network size, using fixed-choice survey
data. Experiments with simulated data illustrate that the proposed estimators
perform well across a variety of network topologies and measurement scenarios,
and the resulting estimates are significantly more accurate than those obtained
directly using the crude observed network, which are commonly used in the
literature. We also describe a variation of the Jackknife procedure that can be
used to obtain an estimate of the estimator variance.
| 1 | 1 | 0 | 0 | 0 | 0 |
A Method Of Detecting Gravitational Wave Based On Time-frequency Analysis And Convolutional Neural Networks | This work investigated the detection of gravitational wave (GW) from
simulated damped sinusoid signals contaminated with Gaussian noise. We propose
to treat it as a classification problem in which one class receives special
attention. The two successive steps of the proposed scheme are as follows:
first, decompose the data using a wavelet packet and represent the GW signal
and noise using the derived decomposition coefficients; second, detect the
existence of GW using a convolutional neural network (CNN). To reflect our
special attention on searching GW signals, the performance is evaluated using
not only the traditional classification accuracy (correct ratio) but also the
receiver operating characteristic (ROC) curve, and experiments show excellent
performance on both evaluation measures. The generalization of the proposed
search scheme over GW model parameters, and its possible extensions to other
data analysis tasks, are crucial for a machine-learning-based approach. On this
aspect, experiments show that the identification performance of our proposed
scheme does not differ significantly across GW model parameters. Therefore, the
proposed scheme generalizes well and could be used to search for untrained and
unknown GW signals or glitches in the future GW astronomy era.
| 0 | 1 | 0 | 0 | 0 | 0 |
The connection between zero chromaticity and long in-plane polarization lifetime in a magnetic storage ring | In this paper, we demonstrate the connection between a magnetic storage ring
with additional sextupole fields set so that the x and y chromaticities vanish
and the maximizing of the lifetime of in-plane polarization (IPP) for a
0.97-GeV/c deuteron beam. The IPP magnitude was measured by continuously
monitoring the down-up scattering asymmetry (sensitive to sideways
polarization) in an in-beam, carbon-target polarimeter and unfolding the
precession of the IPP due to the magnetic anomaly of the deuteron. The optimum
operating conditions for a long IPP lifetime were found by scanning the field of
the storage ring sextupole magnet families while observing the rate of IPP loss
during storage of the beam. The beam was bunched and electron cooled. The IPP
losses appear to arise from the change of the orbit circumference, and
consequently the particle speed and spin tune, due to the transverse betatron
oscillations of individual particles in the beam. The effects of these changes
are canceled by an appropriate sextupole field setting.
| 0 | 1 | 0 | 0 | 0 | 0 |
Phonemic and Graphemic Multilingual CTC Based Speech Recognition | Training automatic speech recognition (ASR) systems requires large amounts of
data in the target language in order to achieve good performance. Whereas large
training corpora are readily available for languages like English, there exists
a long tail of languages which suffer from a lack of resources. One method
to handle data sparsity is to use data from additional source languages and
build a multilingual system. Recently, ASR systems based on recurrent neural
networks (RNNs) trained with connectionist temporal classification (CTC) have
gained substantial research interest. In this work, we extended our previous
approach towards training CTC-based systems multilingually. Our systems feature
a global phone set, based on the joint phone sets of each source language. We
evaluated the use of different language combinations as well as the addition of
Language Feature Vectors (LFVs). As a contrastive experiment, we built systems
based on graphemes as well. Systems having a multilingual phone set are known
to suffer in performance compared to their monolingual counterparts. With our
proposed approach, we could reduce the gap between these mono- and multilingual
setups, using either graphemes or phonemes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Model-Based Clustering of Time-Evolving Networks through Temporal Exponential-Family Random Graph Models | Dynamic networks are a general language for describing time-evolving complex
systems, and discrete time network models provide an emerging statistical
technique for various applications. It is a fundamental research question to
detect the community structure in time-evolving networks. However, due to
significant computational challenges and difficulties in modeling communities
of time-evolving networks, little progress has been made in the current
literature on effectively finding communities in time-evolving networks. In
this work, we
propose a novel model-based clustering framework for time-evolving networks
based on discrete time exponential-family random graph models. To choose the
number of communities, we use conditional likelihood to construct an effective
model selection criterion. Furthermore, we propose an efficient variational
expectation-maximization (EM) algorithm to find approximate maximum likelihood
estimates of network parameters and mixing proportions. By using variational
methods and minorization-maximization (MM) techniques, our method has appealing
scalability for large-scale time-evolving networks. The power of our method is
demonstrated in simulation studies and empirical applications to international
trade networks and the collaboration networks of a large American research
university.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multi-agent Time-based Decision-making for the Search and Action Problem | Many robotic applications, such as search-and-rescue, require multiple agents
to search for and perform actions on targets. However, such missions present
several challenges, including cooperative exploration, task selection and
allocation, time limitations, and computational complexity. To address this, we
propose a decentralized multi-agent decision-making framework for the search
and action problem with time constraints. The main idea is to treat time as an
allocated budget in a setting where each agent action incurs a time cost and
yields a certain reward. Our approach leverages probabilistic reasoning to make
near-optimal decisions leading to maximized reward. We evaluate our method in
the search, pick, and place scenario of the Mohamed Bin Zayed International
Robotics Challenge (MBZIRC), by using a probability density map and reward
prediction function to assess actions. Extensive simulations show that our
algorithm outperforms benchmark strategies, and we demonstrate system
integration in a Gazebo-based environment, validating the framework's readiness
for field application.
| 1 | 0 | 0 | 0 | 0 | 0 |
Anisotropic twicing for single particle reconstruction using autocorrelation analysis | The missing phase problem in X-ray crystallography is commonly solved using
the technique of molecular replacement, which borrows phases from a previously
solved homologous structure, and appends them to the measured Fourier
magnitudes of the diffraction patterns of the unknown structure. More recently,
molecular replacement has been proposed for solving the missing orthogonal
matrices problem arising in Kam's autocorrelation analysis for single particle
reconstruction using X-ray free electron lasers and cryo-EM. In classical
molecular replacement, it is common to estimate the magnitudes of the unknown
structure as twice the measured magnitudes minus the magnitudes of the
homologous structure, a procedure known as `twicing'. Mathematically, this is
equivalent to finding an unbiased estimator for a complex-valued scalar. We
generalize this scheme for the case of estimating real or complex valued
matrices arising in single particle autocorrelation analysis. We name this
approach "Anisotropic Twicing" because unlike the scalar case, the unbiased
estimator is not obtained by a simple isotropic magnitude correction. We
compare the performance of the least squares, twicing and anisotropic twicing
estimators on synthetic and experimental datasets. We demonstrate 3D homology
modeling in cryo-EM directly from experimental data without iterative
refinement or class averaging, for the first time.
| 1 | 0 | 0 | 1 | 0 | 0 |
Epi-two-dimensional fluid flow: a new topological paradigm for dimensionality | While a variety of fundamental differences are known to separate
two-dimensional (2D) and three-dimensional (3D) fluid flows, it is not well
understood how they are related. Conventionally, dimensional reduction is
justified by an \emph{a priori} geometrical framework; i.e., 2D flows occur
under some geometrical constraint such as shallowness. However, deeper inquiry
into 3D flow often finds the presence of local 2D-like structures without such
a constraint, where 2D-like behavior may be identified by the integrability of
vortex lines or vanishing local helicity. Here we propose a new paradigm of
flow structure by introducing an intermediate class, termed epi-2-dimensional
flow, and thereby build a topological bridge between 2D and 3D flows. The
epi-2D property is local, and is preserved in fluid elements obeying ideal
(inviscid and barotropic) mechanics; a local epi-2D flow may be regarded as a
`particle' carrying a generalized enstrophy as its charge. A finite viscosity
may cause `fusion' of two epi-2D particles, generating helicity from their
charges and giving rise to 3D flow.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dimensional reduction and its breakdown in the driven random field O(N) model | The critical behavior of the random field $O(N)$ model driven at a uniform
velocity is investigated at zero temperature. From naive phenomenological
arguments, we introduce a dimensional reduction property, which relates the
large-scale behavior of the $D$-dimensional driven random field $O(N)$ model to
that of the $(D-1)$-dimensional pure $O(N)$ model. This is an analogue of the
dimensional reduction property in equilibrium cases, which states that the
large-scale behavior of $D$-dimensional random field models is identical to
that of $(D-2)$-dimensional pure models. However, the dimensional reduction
property breaks down in low enough dimensions due to the presence of multiple
meta-stable states. By employing the non-perturbative renormalization group
approach, we calculate the critical exponents of the driven random field $O(N)$
model near three-dimensions and determine the range of $N$ in which the
dimensional reduction breaks down.
| 0 | 1 | 0 | 0 | 0 | 0 |
Statistical Properties of Loss Rate Estimators in Tree Topology (2) | Four types of explicit estimators are proposed here to estimate the loss
rates of the links in a network with the tree topology, all of which are
derived from the maximum likelihood principle. One of the four is developed from
an estimator that was used but neglected because it was suspected to have a
higher variance. All of the estimators are proved to be either unbiased or
asymptotically unbiased. In addition, a set of formulae is derived to compute the
efficiencies and variances of the estimates obtained by the estimators. One of
the formulae shows that if a path is divided into two segments, the variance of
the estimates obtained for the pass rate of a segment is equal to the variance
of the pass rate of the path divided by the square of the pass rate of the
other segment. A number of theorems and corollaries are derived from the
formulae that can be used to evaluate the performance of an estimator. Using
the theorems and corollaries, we find that the estimators derived from the
neglected one are the best estimators for networks with the tree topology in
terms of efficiency and computational complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
On The Communication Complexity of High-Dimensional Permutations | We study the multiparty communication complexity of high dimensional
permutations, in the Number On the Forehead (NOF) model. This model is due to
Chandra, Furst and Lipton (CFL) who also gave a nontrivial protocol for the
Exactly-n problem where three players receive integer inputs and need to decide
if their inputs sum to a given integer $n$. There is a considerable body of
literature dealing with the same problem, where $(\mathbb{N},+)$ is replaced by
some other abelian group. Our work can be viewed as a far-reaching extension of
this line of work.
We show that the known lower bounds for that group-theoretic problem apply to
all high dimensional permutations. We introduce new proof techniques that
appeal to recent advances in Additive Combinatorics and Ramsey theory. We
reveal new and unexpected connections between the NOF communication complexity
of high dimensional permutations and a variety of well known and thoroughly
studied problems in combinatorics.
Previous protocols for Exactly-n all rely on the construction of large sets
of integers without a 3-term arithmetic progression. No direct algorithmic
protocol was previously known for the problem, and we provide the first such
algorithm. This suggests new ways to significantly improve the CFL protocol.
Many new open questions are presented throughout.
| 1 | 0 | 0 | 0 | 0 | 0 |
Moonshine: Distilling with Cheap Convolutions | Many engineers wish to deploy modern neural networks in memory-limited
settings; but the development of flexible methods for reducing memory use is in
its infancy, and there is little knowledge of the resulting cost-benefit. We
propose structural model distillation for memory reduction using a strategy
that produces a student architecture that is a simple transformation of the
teacher architecture: no redesign is needed, and the same hyperparameters can
be used. Using attention transfer, we provide Pareto curves/tables for
distillation of residual networks with four benchmark datasets, indicating the
memory versus accuracy payoff. We show that substantial memory savings are
possible with very little loss of accuracy, and confirm that distillation
provides student network performance that is better than training that student
architecture directly on data.
| 1 | 0 | 0 | 1 | 0 | 0 |
A New Wiretap Channel Model and its Strong Secrecy Capacity | In this paper, a new wiretap channel model is proposed, where the legitimate
transmitter and receiver communicate over a discrete memoryless channel. The
wiretapper has perfect access to a fixed-length subset of the transmitted
codeword symbols of her choosing. Additionally, she observes the remainder of
the transmitted symbols through a discrete memoryless channel. This new model
subsumes the classical wiretap channel and wiretap channel II with noisy main
channel as its special cases. The strong secrecy capacity of the proposed
channel model is identified. Achievability is established by solving a dual
secret key agreement problem in the source model, and converting the solution
to the original channel model using probability distribution approximation
arguments. In the dual problem, a source encoder and decoder, who observe
random sequences independent and identically distributed according to the input
and output distributions of the legitimate channel in the original problem,
communicate a confidential key over a public error-free channel using a single
forward transmission, in the presence of a compound wiretapping source who has
perfect access to the public discussion. The security of the key is guaranteed
for the exponentially many possible subsets chosen by the wiretapper, by
deriving a lemma which provides a doubly-exponential convergence rate for the
probability that, for a fixed choice of the subset, the key is uniform and
independent from the public discussion and the wiretapping source's
observation. The converse is derived by using Sanov's theorem to upper bound
the secrecy capacity of the new wiretap channel model by the secrecy capacity
when the tapped subset is randomly chosen by nature.
| 1 | 0 | 0 | 0 | 0 | 0 |
One-to-One Matching of RTT and Path Changes | Route selection based on performance measurements is an essential task in
inter-domain Traffic Engineering. It can benefit from the detection of
significant changes in RTT measurements and the understanding on potential
causes of change. Among the extensive works on change detection methods and
their applications in various domains, few focus on RTT measurements. It is
thus unclear which approach works the best on such data.
In this paper, we present an evaluation framework for change detection on RTT
times series, consisting of: 1) a carefully labelled 34,008-hour RTT dataset as
ground truth; 2) a scoring method specifically tailored for RTT measurements.
Furthermore, we propose a data transformation that improves the detection
performance of existing methods. Path changes are addressed as well. We fix
shortcomings of previous works by distinguishing path changes due to routing
protocols (IGP and BGP) from those caused by load balancing.
Finally, we apply our change detection methods to a large set of measurements
from RIPE Atlas. The characteristics of both RTT and path changes are analyzed;
the correlation between the two is also illustrated. We identify extremely
frequent AS path changes that nevertheless have few consequences on RTT, a
finding that has not been reported before.
| 1 | 0 | 0 | 0 | 0 | 0 |
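A minimal example of the kind of change-detection method such a framework evaluates on RTT series is a two-sided CUSUM detector. This is a generic textbook sketch with hypothetical parameter values, not the algorithm or data transformation proposed in the abstract above:

```python
def cusum(series, target, drift=0.5, threshold=5.0):
    """Flag upward or downward mean shifts in a series (e.g. RTTs in ms).

    target: expected baseline value; drift and threshold are tuning
    parameters (hypothetical defaults chosen for illustration).
    Returns the indices at which a change is flagged.
    """
    up = down = 0.0
    alarms = []
    for i, x in enumerate(series):
        up = max(0.0, up + (x - target) - drift)      # accumulates upward shifts
        down = max(0.0, down - (x - target) - drift)  # accumulates downward shifts
        if up > threshold or down > threshold:
            alarms.append(i)
            up = down = 0.0  # restart the statistics after an alarm
    return alarms
```

For example, on a series of 10 ms RTTs that jumps to 20 ms, this detector fires at the first post-jump sample.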
Infinite horizon asymptotic average optimality for large-scale parallel server networks | We study infinite-horizon asymptotic average optimality for parallel server
networks with multiple classes of jobs and multiple server pools in the
Halfin-Whitt regime. Three control formulations are considered: 1) minimizing
the queueing and idleness cost, 2) minimizing the queueing cost under a
constraint on idleness at each server pool, and 3) fairly allocating the idle
servers among different server pools. For the third problem, we consider a
class of bounded-queue, bounded-state (BQBS) stable networks, in which any
moment of the state is bounded by that of the queue only (for both the limiting
diffusion and diffusion-scaled state processes). We show that the optimal
values for the diffusion-scaled state processes converge to the corresponding
values of the ergodic control problems for the limiting diffusion. We present a
family of state-dependent Markov balanced saturation policies (BSPs) that
stabilize the controlled diffusion-scaled state processes. It is shown that
under these policies, the diffusion-scaled state process is exponentially
ergodic, provided that at least one class of jobs has a positive abandonment
rate. We also establish useful moment bounds, and study the ergodic properties
of the diffusion-scaled state processes, which play a crucial role in proving
the asymptotic optimality.
| 1 | 0 | 1 | 0 | 0 | 0 |
Energy fluxes and spectra for turbulent and laminar flows | Two well-known turbulence models to describe the inertial and dissipative
ranges simultaneously are by Pao~[Phys. Fluids {\bf 8}, 1063 (1965)] and
Pope~[{\em Turbulent Flows.} Cambridge University Press, 2000]. In this paper,
we compute energy spectrum $E(k)$ and energy flux $\Pi(k)$ using spectral
simulations on grids up to $4096^3$, and show consistency between the numerical
results and predictions by the aforementioned models. We also construct a model
for laminar flows that predicts $E(k)$ and $\Pi(k)$ to be of the form
$\exp(-k)$, and verify the model predictions using numerical simulations. The
shell-to-shell energy transfers for the turbulent flows are {\em forward and
local} for both inertial and dissipative range, but those for the laminar flows
are {\em forward and nonlocal}.
| 0 | 1 | 0 | 0 | 0 | 0 |
Understanding low-temperature bulk transport in samarium hexaboride without relying on in-gap bulk states | We present a new model to explain the difference between the transport and
spectroscopy gaps in samarium hexaboride (SmB$_6$), which has been a mystery
for some time. We propose that SmB$_6$ can be modeled as an intrinsic
semiconductor with a depletion length that diverges at cryogenic temperatures.
In this model, we find a self-consistent solution to Poisson's equation in the
bulk, with boundary conditions based on Fermi energy pinning due to surface
charges. The solution yields band bending in the bulk; this explains the
difference between the two gaps because spectroscopic methods measure the gap
near the surface, while transport measures the average over the bulk. We also
connect the model to transport parameters, including the Hall coefficient and
thermopower, using semiclassical transport theory. The divergence of the
depletion length additionally explains the 10-12 K feature in data for these
parameters, demonstrating a crossover from bulk-dominated transport above this
temperature to surface-dominated transport below this temperature. We find good
agreement between our model and a collection of transport data from 4-40 K.
This model can also be generalized to materials with similar band structure.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Optimal Strategy for Adaptive Probing in Incomplete Networks | We investigate a graph probing problem in which an agent has only an
incomplete view $G' \subsetneq G$ of the network and wishes to explore the
network with least effort. In each step, the agent selects a node $u$ in $G'$
to probe. After probing $u$, the agent gains the information about $u$ and its
neighbors. All the neighbors of $u$ become \emph{observed} and are
\emph{probeable} in the subsequent steps (if they have not been probed). What is
the best probing strategy to maximize the number of nodes explored in $k$
probes? This problem serves as a fundamental component for other
decision-making problems in incomplete networks such as information harvesting
in social networks, network crawling, network security, and viral marketing
with incomplete information.
While there are a few methods proposed for the problem, none can perform
consistently well across different network types. In this paper, we establish a
strong inapproximability result for the problem, proving that no algorithm can
guarantee a finite approximation ratio unless P=NP. On the bright side, we
design learning frameworks to capture the best probing strategies for
individual networks. Our extensive experiments suggest that our framework can
learn efficient probing strategies that \emph{consistently} outperform previous
heuristics and metric-based approaches.
| 1 | 1 | 0 | 0 | 0 | 0 |
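A simple baseline against which such learned probing strategies are typically compared is greedy probing: always probe the observed node that reveals the most unseen neighbors. The sketch below assumes, purely for illustration, oracle access to the full adjacency; a real agent would have to score candidates from its partial view $G'$:

```python
def greedy_probe(G, observed, k):
    """Greedily probe up to k nodes, maximizing newly observed nodes per probe.

    G: dict mapping node -> set of neighbours (full graph; oracle assumption).
    observed: iterable of initially visible nodes.
    Returns the (observed, probed) sets after at most k probes.
    """
    observed = set(observed)
    probed = set()
    for _ in range(k):
        candidates = observed - probed
        if not candidates:
            break
        # probe the candidate whose neighbourhood adds the most unseen nodes
        u = max(candidates, key=lambda v: len(G[v] - observed))
        probed.add(u)
        observed |= G[u]
    return observed, probed
```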
Generalised Discount Functions applied to a Monte-Carlo AImu Implementation | In recent years, work has been done to develop the theory of General
Reinforcement Learning (GRL). However, there are few examples demonstrating
these results in a concrete way. In particular, there are no examples
demonstrating the known results regarding generalised discounting. We have
added to the GRL simulation platform AIXIjs the functionality to assign an
agent arbitrary discount functions, and an environment which can be used to
determine the effect of discounting on an agent's policy. Using this, we
investigate how geometric, hyperbolic and power discounting affect an informed
agent in a simple MDP. We experimentally reproduce a number of theoretical
results, and discuss some related subtleties. It was found that the agent's
behaviour followed what is expected theoretically, assuming appropriate
parameters were chosen for the Monte-Carlo Tree Search (MCTS) planning
algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
One year of monitoring the Vela pulsar using a Phased Array Feed | We have observed the Vela pulsar for one year using a Phased Array Feed (PAF)
receiver on the 12-metre antenna of the Parkes Test-Bed Facility. These
observations have allowed us to investigate the stability of the PAF
beam-weights over time, to demonstrate that pulsars can be timed over long
periods using PAF technology and to detect and study the most recent glitch
event that occurred on 12 December 2016. The beam-weights are shown to be
stable to within 1% on time scales of the order of three weeks. We discuss the
implications of this for monitoring pulsars using PAFs on single dish
telescopes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Using Multiple Seasonal Holt-Winters Exponential Smoothing to Predict Cloud Resource Provisioning | Elasticity is one of the key features of cloud computing that attracts many
SaaS providers to minimize their services' cost. Cost is minimized by
automatically provisioning and releasing computational resources depending on
actual computational needs. However, the delay in starting up new virtual
resources can cause Service Level Agreement violations. Consequently,
predicting cloud resource provisioning has gained a lot of attention as a way
to scale computational resources in advance. However, most current approaches
do not consider multi-seasonality in cloud workloads. This paper proposes a
cloud resource provisioning prediction algorithm based on the Holt-Winters
exponential smoothing method. The proposed algorithm extends the Holt-Winters
exponential smoothing
method to model cloud workload with multi-seasonal cycles. Prediction accuracy
of the proposed algorithm has been improved by employing the Artificial Bee Colony
algorithm to optimize its parameters. Performance of the proposed algorithm has
been evaluated and compared with double and triple exponential smoothing
methods. Our results have shown that the proposed algorithm outperforms other
methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
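For context, the single-seasonal additive Holt-Winters method that the paper extends can be sketched as follows. The smoothing constants here are illustrative defaults; the actual multi-seasonal extension and its Artificial-Bee-Colony parameter tuning are not reproduced:

```python
def holt_winters_additive(y, season_len, alpha=0.5, beta=0.1, gamma=0.1):
    """One-step-ahead forecasts via additive triple exponential smoothing.

    Requires len(y) >= 2 * season_len to initialise level, trend and season.
    """
    # initialise level, trend and seasonal components from the first two cycles
    level = sum(y[:season_len]) / season_len
    trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
    season = [y[i] - level for i in range(season_len)]
    forecasts = []
    for i, x in enumerate(y):
        s = season[i % season_len]
        forecasts.append(level + trend + s)  # forecast made before seeing x
        prev_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[i % season_len] = gamma * (x - level) + (1 - gamma) * s
    return forecasts
```

On a perfectly periodic workload the one-step forecasts match the series exactly; a multi-seasonal extension adds further seasonal components (e.g. daily plus weekly cycles) in the same recursive form.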
Possible evidence for spin-transfer torque induced by spin-triplet supercurrent | Cooper pairs in superconductors are normally spin singlet. Nevertheless,
recent studies suggest that spin-triplet Cooper pairs can be created at
carefully engineered superconductor-ferromagnet interfaces. If Cooper pairs are
spin-polarized, they would transport not only charge but also a net spin
component without dissipation, and therefore minimize the heating effects
associated with spintronic devices. Although it is now established that triplet
supercurrents exist, their most interesting property - spin - is only inferred
indirectly from transport measurements. In conventional spintronics, it is well
known that spin currents generate spin-transfer torques that alter
magnetization dynamics and switch magnetic moments. The observation of similar
effects due to spin-triplet supercurrents would not only confirm the net spin
of triplet pairs but also pave the way for applications of superconducting
spintronics. Here, we present possible evidence for spin-transfer torques
induced by triplet supercurrents in superconductor/ferromagnet/superconductor
(S/F/S) Josephson junctions. Below the superconducting transition temperature
T_c, the ferromagnetic resonance (FMR) field at X-band (~ 9.0 GHz) shifts
rapidly to a lower field with decreasing temperature due to the spin-transfer
torques induced by triplet supercurrents. In contrast, this phenomenon is
absent in ferromagnet/superconductor (F/S) bilayers and
superconductor/insulator/ferromagnet/superconductor (S/I/F/S) multilayers where
no supercurrents pass through the ferromagnetic layer. These experimental
observations are discussed with theoretical predictions for ferromagnetic
Josephson junctions with precessing magnetization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Robust Guaranteed-Cost Adaptive Quantum Phase Estimation | Quantum parameter estimation plays a key role in many fields like quantum
computation, communication and metrology. Optimal estimation allows one to
achieve the most precise parameter estimates, but requires accurate knowledge
of the model. Any inevitable uncertainty in the model parameters may heavily
degrade the quality of the estimate. It is therefore desired to make the
estimation process robust to such uncertainties. Robust estimation was
previously studied for a varying phase, where the goal was to estimate the
phase at some time in the past, using the measurement results from both before
and after that time within a fixed time interval up to the current time. Here, we
consider a robust guaranteed-cost filter yielding robust estimates of a varying
phase in real time, where the current phase is estimated using only past
measurements. Our filter minimizes the largest (worst-case) variance in the
allowable range of the uncertain model parameter(s) and this determines its
guaranteed cost. In the worst case, it outperforms the optimal Kalman filter
designed for the model with no uncertainty, which corresponds to the center of
the possible range of the uncertain parameter(s). Moreover, unlike the Kalman
filter, our filter in the worst case always performs better than the best
achievable variance for heterodyne measurements, which we consider as the
tolerable threshold for our system. Furthermore, we consider effective quantum
efficiency and effective noise power, and show that our filter provides the
best results by these measures in the worst case.
| 1 | 0 | 1 | 0 | 0 | 0 |
End-to-End Multi-View Networks for Text Classification | We propose a multi-view network for text classification. Our method
automatically creates various views of its input text, each taking the form of
soft attention weights that distribute the classifier's focus among a set of
base features. For a bag-of-words representation, each view focuses on a
different subset of the text's words. Aggregating many such views results in a
more discriminative and robust representation. Through a novel architecture
that both stacks and concatenates views, we produce a network that emphasizes
both depth and width, allowing training to converge quickly. Using our
multi-view architecture, we establish new state-of-the-art accuracies on two
benchmark tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
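To make the idea concrete, a single soft-attention "view" over bag-of-words base features can be sketched as below. This is a hypothetical minimal illustration only: in the paper the scoring weights are learned, and many such views are stacked and concatenated:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_view(features, scores):
    """One view: a softmax-weighted combination of base feature values,
    focusing the classifier on a soft subset of the words."""
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, features))
```

With uniform scores a view averages its features; a strongly peaked score concentrates the view on a single word's feature.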
Collaborative similarity analysis of multilayer developer-project bipartite network | To understand the multiple relations between developers and projects on
GitHub as a whole, we model them as a multilayer bipartite network and analyze
the degree distributions, the nearest neighbors' degree distributions and their
correlations with degree, and the collaborative similarity distributions and
their correlations with degree. Our results show that all degree distributions
have a power-law form; notably, the degree distribution of projects in the
watching layer has a double power-law form. Negative correlations between nearest
neighbors' degree and degree for both developers and projects are observed in
both layers, exhibiting a disassortative mixing pattern. The collaborative
similarity of both developers and projects correlates negatively with degree in
the watching layer, while a positive correlation is observed for developers in
the forking layer and no obvious correlation is observed for projects in the
forking layer.
| 1 | 1 | 0 | 0 | 0 | 0 |
Evaluation of equity-based debt obligations | We consider a class of participation rights, i.e. obligations issued by a
company to investors who are interested in performance-based compensation.
Albeit having desirable economic properties, equity-based debt obligations
(EbDO) pose challenges in accounting and contract pricing. We formulate and
solve the associated mathematical problem in a discrete time as well as a
continuous time setting. In the latter case the problem is reduced to a
forward-backward stochastic differential equation (FBSDE) and solved using the
method of decoupling fields.
| 0 | 0 | 0 | 0 | 0 | 1 |
Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP | Online sparse linear regression is an online problem where an algorithm
repeatedly chooses a subset of coordinates to observe in an adversarially
chosen feature vector, makes a real-valued prediction, receives the true label,
and incurs the squared loss. The goal is to design an online learning algorithm
with sublinear regret to the best sparse linear predictor in hindsight. Without
any assumptions, this problem is known to be computationally intractable. In
this paper, we make the assumption that data matrix satisfies restricted
isometry property, and show that this assumption leads to computationally
efficient algorithms with sublinear regret for two variants of the problem. In
the first variant, the true label is generated according to a sparse linear
model with additive Gaussian noise. In the second, the true label is chosen
adversarially.
| 1 | 0 | 0 | 0 | 0 | 0 |
Siamese Networks with Location Prior for Landmark Tracking in Liver Ultrasound Sequences | Image-guided radiation therapy can benefit from accurate motion tracking by
ultrasound imaging, in order to minimize treatment margins and radiate moving
anatomical targets, e.g., due to breathing. One way to formulate this tracking
problem is the automatic localization of given tracked anatomical landmarks
throughout a temporal ultrasound sequence. For this, we herein propose a
fully-convolutional Siamese network that learns the similarity between pairs of
image regions containing the same landmark. Accordingly, it learns to localize
and thus track arbitrary image features, not only predefined anatomical
structures. We employ a temporal consistency model as a location prior, which
we combine with the network-predicted location probability map to track a
target iteratively in ultrasound sequences. We applied this method on the
dataset of the Challenge on Liver Ultrasound Tracking (CLUST) with competitive
results, where our work is the first to effectively apply CNNs to this tracking
problem, thanks to our temporal regularization.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single-Shot 3D Diffractive Imaging of Core-Shell Nanoparticles with Elemental Specificity | We report 3D coherent diffractive imaging of Au/Pd core-shell nanoparticles
with 6 nm resolution on 5-6 femtosecond timescales. We measured single-shot
diffraction patterns of core-shell nanoparticles using very intense and short
x-ray free electron laser pulses. By taking advantage of the curvature of the
Ewald sphere and the symmetry of the nanoparticle, we reconstructed the 3D
electron density of 34 core-shell structures from single-shot diffraction
patterns. We determined the size of the Au core and the thickness of the Pd
shell to be 65.0 +/- 1.0 nm and 4.0 +/- 0.5 nm, respectively, and identified
the 3D elemental distribution inside the nanoparticles with an accuracy better
than 2%. We anticipate this method can be used for quantitative 3D imaging of
symmetrical nanostructures and virus particles.
| 0 | 1 | 0 | 0 | 0 | 0 |
SEPIA - a new single pixel receiver at the APEX Telescope | Context: We describe the new SEPIA (Swedish-ESO PI Instrument for APEX)
receiver, which was designed and built by the Group for Advanced Receiver
Development (GARD), at Onsala Space Observatory (OSO) in collaboration with
ESO. It was installed and commissioned at the APEX telescope during 2015 with
an ALMA Band 5 receiver channel and updated with a new frequency channel (ALMA
Band 9) in February 2016. Aims: This manuscript aims to provide, for observers
who use the SEPIA receiver, a reference in terms of the hardware description,
optics and performance as well as the commissioning results. Methods: Out of
three available receiver cartridge positions in SEPIA, the two current
frequency channels, corresponding to ALMA Band 5, the RF band 158--211 GHz, and
Band 9, the RF band 600--722 GHz, provide state-of-the-art dual polarization
receivers. The Band 5 frequency channel uses 2SB SIS mixers with an average SSB
noise temperature around 45K with an IF (intermediate frequency) band of 4--8
GHz for each sideband, providing a total 4x4 GHz IF band. The Band 9 frequency channel
uses DSB SIS mixers with a noise temperature of 75--125K with IF band 4--12 GHz
for each polarization. Results: Both current SEPIA receiver channels are
available to all APEX observers.
| 0 | 1 | 0 | 0 | 0 | 0 |
Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting | We present an algorithm to identify sparse dependence structure in continuous
and non-Gaussian probability distributions, given a corresponding set of data.
The conditional independence structure of an arbitrary distribution can be
represented as an undirected graph (or Markov random field), but most
algorithms for learning this structure are restricted to the discrete or
Gaussian cases. Our new approach allows for more realistic and accurate
descriptions of the distribution in question, and in turn better estimates of
its sparse Markov structure. Sparsity in the graph is of interest as it can
accelerate inference, improve sampling methods, and reveal important
dependencies between variables. The algorithm relies on exploiting the
connection between the sparsity of the graph and the sparsity of transport
maps, which deterministically couple one probability measure to another.
| 1 | 0 | 0 | 1 | 0 | 0 |