title (string, 7–239 chars) | abstract (string, 7–2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Visible transitions in Ag-like and Cd-like lanthanide ions | We present visible spectra of Ag-like ($4d^{10}4f$) and Cd-like
($4d^{10}4f^2$) ions of Ho (atomic number $Z=67$), Er (68), and Tm (69)
observed with a compact electron beam ion trap. For Ag-like ions, prominent
emission from the M1 transition between the ground-state fine-structure levels
$4f_{5/2}$ and $4f_{7/2}$ is identified. For Cd-like ions,
several M1 transitions in the ground-state configuration are identified. The
transition wavelengths and transition probabilities are calculated with
relativistic many-body perturbation theory and the relativistic CI + all-order
approach. Comparisons between the experiments and the calculations show good
agreement.
| 0 | 1 | 0 | 0 | 0 | 0 |
A gradient flow approach to linear Boltzmann equations | We introduce a gradient flow formulation of linear Boltzmann equations. Under
a diffusive scaling we derive a diffusion equation by using the machinery of
gradient flows.
| 0 | 0 | 1 | 0 | 0 | 0 |
Constraints on neutrino masses from Lyman-alpha forest power spectrum with BOSS and XQ-100 | We present constraints on masses of active and sterile neutrinos. We use the
one-dimensional Ly$\alpha$-forest power spectrum from the Baryon Oscillation
Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS-III) and from
the VLT/XSHOOTER legacy survey (XQ-100). In this paper, we present our own
measurement of the power spectrum with the publicly released XQ-100 quasar
spectra.
Fitting Ly$\alpha$ data alone leads to cosmological parameters in excellent
agreement with the values derived independently from Planck 2015 Cosmic
Microwave Background (CMB) data. Combining BOSS and XQ-100 Ly$\alpha$ power
spectra, we constrain the sum of neutrino masses to $\sum m_\nu < 0.8$ eV (95\%
C.L). With the addition of CMB data, this bound is tightened to $\sum m_\nu <
0.14$ eV (95\% C.L.).
With their sensitivity to small scales, Ly$\alpha$ data are ideal to
constrain $\Lambda$WDM models. Using XQ-100 alone, we derive lower bounds on
the mass of dark matter particles: $m_X \gtrsim 2.08 \: \rm{keV}$ (95\% C.L.) for
early decoupled thermal relics, and $m_s \gtrsim 10.2 \: \rm{keV}$ (95\% C.L.)
for non-resonantly produced right-handed neutrinos. Combining the 1D Ly$\alpha$
forest power spectrum measured by BOSS and XQ-100, we improve the two bounds to
$m_X \gtrsim 4.17 \: \rm{keV}$ and $m_s \gtrsim 25.0 \: \rm{keV}$ (95\% C.L.).
The $3~\sigma$ bound shows a more significant improvement, increasing from $m_X
\gtrsim 2.74 \: \rm{keV}$ for BOSS alone to $m_X \gtrsim 3.10 \: \rm{keV}$ for
the combined BOSS+XQ-100 data set.
Finally, we include in our analysis the first two redshift bins ($z=4.2$ and
$z=4.6$) of the power spectrum measured with the high-resolution HIRES/MIKE
spectrographs. The addition of HIRES/MIKE power spectrum allows us to further
improve the two limits to $m_X \gtrsim 4.65 \: \rm{keV}$ and $m_s \gtrsim 28.8
\: \rm{keV}$ (95\% C.L.).
| 0 | 1 | 0 | 0 | 0 | 0 |
BubbleView: an interface for crowdsourcing image importance maps and tracking visual attention | In this paper, we present BubbleView, an alternative methodology for eye
tracking using discrete mouse clicks to measure which information people
consciously choose to examine. BubbleView is a mouse-contingent, moving-window
interface in which participants are presented with a series of blurred images
and click to reveal "bubbles" - small, circular areas of the image at original
resolution, similar to having a confined area of focus like the eye fovea.
Across 10 experiments with 28 different parameter combinations, we evaluated
BubbleView on a variety of image types: information visualizations, natural
images, static webpages, and graphic designs, and compared the clicks to eye
fixations collected with eye-trackers in controlled lab settings. We found that
BubbleView clicks can both (i) successfully approximate eye fixations on
different images, and (ii) be used to rank image and design elements by
importance. BubbleView is designed to collect clicks on static images, and
works best for defined tasks such as describing the content of an information
visualization or measuring image importance. BubbleView data is cleaner and
more consistent than related methodologies that use continuous mouse movements.
Our analyses validate the use of mouse-contingent, moving-window methodologies
as approximating eye fixations for different image and task types.
| 1 | 0 | 0 | 0 | 0 | 0 |
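The mouse-contingent reveal described in this abstract can be prototyped in a few lines. A minimal sketch, assuming a grayscale NumPy image; the bubble radius and blur width are illustrative parameters, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bubble_view(image, click_xy, radius=30, blur_sigma=10):
    """Reveal a sharp circular 'bubble' of `image` at `click_xy`
    on top of a Gaussian-blurred background (BubbleView-style)."""
    blurred = gaussian_filter(image.astype(float), sigma=blur_sigma)
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = click_xy
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    out = blurred.copy()
    out[mask] = image[mask]  # original resolution inside the bubble
    return out

# Example: reveal a bubble at pixel (120, 80) of a random grayscale image.
img = np.random.rand(200, 300)
revealed = bubble_view(img, click_xy=(120, 80))
```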
Effects of Disorder on the Pressure-Induced Mott Transition in $κ$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl | We present a study of the influence of disorder on the Mott metal-insulator
transition for the organic charge-transfer salt
$\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl. To this end, disorder was introduced
into the system in a controlled way by exposing the single crystals to x-ray
irradiation. The crystals were then fine-tuned across the Mott transition by
the application of continuously controllable He-gas pressure at low
temperatures. Measurements of the thermal expansion and resistance show that
the first-order character of the Mott transition prevails for low irradiation
doses achieved by irradiation times up to 100 h. For these crystals with a
moderate degree of disorder, we find a first-order transition line which ends
in a second-order critical endpoint, akin to the pristine crystals. Compared to
the latter, however, we observe a significant reduction of both the critical
pressure $p_c$ and the critical temperature $T_c$. This result is consistent
with the theoretically-predicted formation of a soft Coulomb gap in the
presence of strong correlations and small disorder. Furthermore, we
demonstrate, similar to the observation for the pristine sample, that the Mott
transition after 50 h of irradiation is accompanied by sizable lattice effects,
the critical behavior of which can be well described by mean-field theory. Our
results demonstrate that the character of the Mott transition remains
essentially unchanged at a low disorder level. However, after an irradiation
time of 150 h, no clear signatures of a discontinuous metal-insulator
transition could be revealed anymore. These results suggest that, above a
certain disorder level, the metal-insulator transition becomes a smeared
first-order transition with some residual hysteresis.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses | We present the methodology for and detail the implementation of the Dark
Energy Survey (DES) Year 1 (Y1) 3x2pt analysis, which combines
configuration-space two-point statistics from three different cosmological
probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data
from the first year of DES observations. We have developed two independent
modeling pipelines and describe the code validation process. We derive
expressions for analytical real-space multi-probe covariances, and describe
their validation with numerical simulations. We stress-test the inference
pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters
plus 20 nuisance parameters and precisely resemble the analysis to be presented
in the DES 3x2pt analysis paper, using a variety of simulated input data
vectors with varying assumptions.
We find that any disagreement between pipelines leads to changes in assigned
likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of
the DES Y1 data vector. We also find that angular binning and survey mask do
not impact our analytic covariance at a significant level. We determine lower
bounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and
galaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling
uncertainties in the non-linear regime is well below statistical errors, and
show that our analysis choices are robust against a variety of systematics.
These tests demonstrate that we have a robust analysis pipeline that yields
unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1
analysis. We emphasize that the level of independent code development and
subsequent code comparison as demonstrated in this paper is necessary to
produce credible constraints from increasingly complex multi-probe analyses of
current data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples | Synthesizing user-intended programs from a small number of input-output
examples is a challenging problem with several important applications like
spreadsheet manipulation, data wrangling and code refactoring. Existing
synthesis systems either completely rely on deductive logic techniques that are
extensively hand-engineered or on purely statistical models that need massive
amounts of data, and in general fail to provide real-time synthesis on
challenging benchmarks. In this work, we propose Neural Guided Deductive Search
(NGDS), a hybrid synthesis technique that combines the best of both symbolic
logic techniques and statistical models. Thus, it produces programs that
satisfy the provided specifications by construction and generalize well on
unseen examples, similar to data-driven systems. Our technique effectively
utilizes the deductive search framework to reduce the learning problem of the
neural component to a simple supervised learning setup. Further, this allows us
to both train on sparingly available real-world data and still leverage
powerful recurrent neural network encoders. We demonstrate the effectiveness of
our method by evaluating it on real-world customer scenarios, synthesizing
accurate programs with up to a 12x speed-up over state-of-the-art systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Coupled Electron-Ion Monte Carlo simulation of hydrogen molecular crystals | We performed simulations for solid molecular hydrogen at high pressures
(250GPa$\leq$P$\leq$500GPa) along two isotherms at T=200 K (phases III and VI)
and at T=414 K (phase IV). At T=200K we considered likely candidates for phase
III, the C2c and Cmca12 structures, while at T=414K in phase IV we studied the
Pc48 structure. We employed both Coupled Electron-Ion Monte Carlo (CEIMC) and
Path Integral Molecular Dynamics (PIMD) based on Density Functional Theory
(DFT) using the vdW-DF approximation. The comparison between the two methods
allows us to address the question of the accuracy of the xc approximation of
DFT for thermal and quantum protons without resorting to perturbation theories.
In general, we find that atomic and molecular fluctuations in PIMD are larger
than in CEIMC which suggests that the potential energy surface from vdW-DF is
less structured than the one from Quantum Monte Carlo. We find qualitatively
different behaviors for systems prepared in the C2c structure for increasing
pressure. Within PIMD the C2c structure is dynamically partially stable for
P$\leq$250GPa only: it retains the symmetry of the molecular centers but not
the molecular orientation; at intermediate pressures it develops layered
structures like Pbcn or Ibam and transforms to the metallic Cmca-4 structure at
P$\geq$450GPa. Instead, within CEIMC, the C2c structure is found to be
dynamically stable at least up to 450GPa; at increasing pressure the molecular
bond length increases and the nuclear correlation decreases. For the other two
structures the two methods are in qualitative agreement although quantitative
differences remain. We discuss various structural properties and the electrical
conductivity. We find these structures become conducting around 350GPa but the
metallic Drude-like behavior is reached only at around 500GPa, consistent with
recent experimental claims.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spline Based Search Method For Unmodeled Transient Gravitational Wave Chirps | A method is described for the detection and estimation of transient chirp
signals that are characterized by smoothly evolving, but otherwise unmodeled,
amplitude envelopes and instantaneous frequencies. Such signals are
particularly relevant for gravitational wave searches, where they may arise in
a wide range of astrophysical scenarios. The method uses splines with
continuously adjustable breakpoints to represent the amplitude envelope and
instantaneous frequency of a signal, and estimates them from noisy data using
penalized least squares and model selection. Simulations based on waveforms
spanning a wide morphological range show that the method performs well in a
signal-to-noise ratio regime where the time-frequency signature of a signal is
highly degraded, thereby extending the coverage of current unmodeled
gravitational wave searches to a wider class of signals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantum mechanics from an epistemic state space | We derive the Hilbert space formalism of quantum mechanics from epistemic
principles. A key assumption is that a physical theory that relies on entities
or distinctions that are unknowable in principle gives rise to wrong
predictions. An epistemic formalism is developed, where concepts like
individual and collective knowledge are used, and knowledge may be actual or
potential. The physical state $S$ corresponds to the collective potential
knowledge. The state $S$ is a subset of a state space $\mathcal{S}=\{Z\}$, such
that $S$ always contains several elements $Z$, which correspond to unattainable
states of complete potential knowledge of the world. The evolution of $S$
cannot be determined in terms of the individual evolution of the elements $Z$,
unlike the evolution of an ensemble in classical phase space. The evolution of
$S$ is described in terms of sequential time $n\in \mathbb{N}$, which
is updated according to $n\rightarrow n+1$ each time potential knowledge
changes. In certain experimental contexts $C$, there is initial knowledge at
time $n$ that a given series of properties $P,P',\ldots$ will be observed
within a given time frame, meaning that a series of values $p,p',\ldots$ of
these properties will become known. At time $n$, it is just known that these
values belong to predefined, finite sets $\{p\},\{p'\},\ldots$. In such a
context $C$, it is possible to define a complex Hilbert space $\mathcal{H}_{C}$
on top of $\mathcal{S}$, in which the elements are contextual state vectors
$\bar{S}_{C}$. Born's rule to calculate the probabilities to find the values
$p,p',\ldots$ is derived as the only generally applicable such rule. Also, we
can associate a self-adjoint operator $\bar{P}$ with eigenvalues $\{p\}$ to
each property $P$ observed within $C$. These operators obey
$[\bar{P},\bar{P}']=0$ if and only if the precise values of $P$ and $P'$ are
simultaneously knowable.
| 0 | 1 | 0 | 0 | 0 | 0 |
Self-duality and scattering map for the hyperbolic van Diejen systems with two coupling parameters (with an appendix by S. Ruijsenaars) | In this paper, we construct global action-angle variables for a certain
two-parameter family of hyperbolic van Diejen systems. Following Ruijsenaars'
ideas on the translation invariant models, the proposed action-angle variables
come from a thorough analysis of the commutation relation obeyed by the Lax
matrix, whereas the proof of their canonicity is based on the study of the
scattering theory. As a consequence, we show that the van Diejen system of our
interest is self-dual with a factorized scattering map. Also, in an appendix by
S. Ruijsenaars, a novel proof of the spectral asymptotics of certain
exponential type matrix flows is presented. This result is of crucial
importance in our scattering-theoretical analysis.
| 0 | 1 | 1 | 0 | 0 | 0 |
Inference for heavy tailed stationary time series based on sliding blocks | The block maxima method in extreme value theory consists of fitting an
extreme value distribution to a sample of block maxima extracted from a time
series. Traditionally, the maxima are taken over disjoint blocks of
observations. Alternatively, the blocks can be chosen to slide through the
observation period, yielding a larger number of overlapping blocks. Inference
based on sliding blocks is found to be more efficient than inference based on
disjoint blocks. The asymptotic variance of the maximum likelihood estimator of
the Fréchet shape parameter is reduced by more than 18%. Interestingly, the
amount of the efficiency gain is the same regardless of the serial dependence of the
underlying time series: as for disjoint blocks, the asymptotic distribution
depends on the serial dependence only through the sequence of scaling
constants. The findings are illustrated by simulation experiments and are
applied to the estimation of high return levels of the daily log-returns of the
Standard & Poor's 500 stock market index.
| 0 | 0 | 1 | 1 | 0 | 0 |
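The disjoint versus sliding block-maxima extraction contrasted in this abstract is simple to reproduce. A minimal NumPy sketch; the block length and toy series are illustrative, and an extreme value distribution would then be fitted to either sample:

```python
import numpy as np

def disjoint_block_maxima(x, b):
    """Maxima over consecutive, non-overlapping blocks of length b."""
    n = (len(x) // b) * b
    return x[:n].reshape(-1, b).max(axis=1)

def sliding_block_maxima(x, b):
    """Maxima over all length-b windows sliding through the series."""
    return np.array([x[i:i + b].max() for i in range(len(x) - b + 1)])

rng = np.random.default_rng(0)
x = rng.standard_cauchy(5000)          # heavy-tailed toy series
dj = disjoint_block_maxima(x, b=100)   # 50 disjoint maxima
sl = sliding_block_maxima(x, b=100)    # 4901 overlapping maxima
```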
Active tuning of high-Q dielectric metasurfaces | We demonstrate the active tuning of all-dielectric metasurfaces exhibiting
high-quality factor (high-Q) resonances. The active control is provided by
embedding the asymmetric silicon meta-atoms in liquid crystals, which allows
the relative index of refraction to be controlled through heating. It is found
that high quality factor resonances ($Q=270\pm30$) can be tuned over more than
three resonance widths. Our results demonstrate the feasibility of using
all-dielectric metasurfaces to construct tunable narrow-band filters.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semi-Supervised QA with Generative Domain-Adaptive Nets | We study the problem of semi-supervised question answering: utilizing
unlabeled text to boost the performance of question answering models. We
propose a novel training framework, the Generative Domain-Adaptive Nets. In
this framework, we train a generative model to generate questions based on the
unlabeled text, and combine model-generated questions with human-generated
questions for training question answering models. We develop novel domain
adaptation algorithms, based on reinforcement learning, to alleviate the
discrepancy between the model-generated data distribution and the
human-generated data distribution. Experiments show that our proposed framework
obtains substantial improvement from unlabeled text.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rationality proofs by curve counting | We propose an approach for showing rationality of an algebraic variety $X$.
We try to cover $X$ by rational curves of certain type and count how many
curves pass through a generic point. If the answer is $1$, then we can
sometimes reduce the question of rationality of $X$ to the question of
rationality of a closed subvariety of $X$. This approach is applied to the case
of the so-called Ueno-Campana manifolds. Our experiments indicate that the
previously open cases $X_{4,6}$ and $X_{5,6}$ are both rational. However, this
result is not rigorously justified and depends on a heuristic argument and a
Monte Carlo type computer simulation. In an unexpected twist, existence of
lattices $D_6$, $E_8$ and $\Lambda_{10}$ turns out to be crucial.
| 0 | 0 | 1 | 0 | 0 | 0 |
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models | With the availability of large databases and recent improvements in deep
learning methodology, the performance of AI systems is reaching or even
exceeding the human level on an increasing number of complex tasks. Impressive
examples of this development can be found in domains such as image
classification, sentiment analysis, speech understanding or strategic game
playing. However, because of their nested non-linear structure, these highly
successful machine learning and artificial intelligence models are usually
applied in a black box manner, i.e., no information is provided about what
exactly makes them arrive at their predictions. Since this lack of transparency
can be a major drawback, e.g., in medical applications, the development of
methods for visualizing, explaining and interpreting deep learning models has
recently attracted increasing attention. This paper summarizes recent
developments in this field and makes a plea for more interpretability in
artificial intelligence. Furthermore, it presents two approaches to explaining
predictions of deep learning models, one method which computes the sensitivity
of the prediction with respect to changes in the input and one approach which
meaningfully decomposes the decision in terms of the input variables. These
methods are evaluated on three classification tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
The study on quantum material WTe2 | WTe2 and its sister alloys have attracted tremendous attention in recent years
due to their large non-saturating magnetoresistance and non-trivial topological
properties. Herein, we briefly review studies of the electrical properties of
this new quantum material.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Social Image Embedding with Deep Multimodal Attention Networks | Learning social media data embedding by deep models has attracted extensive
research interest and enabled a wide range of applications, such as link
prediction, classification, and cross-modal search. However, for social images
which contain both link information and multimodal contents (e.g., text
description, and visual content), simply employing the embedding learnt from
network structure or data content results in sub-optimal social image
representation. In this paper, we propose a novel social image embedding
approach called Deep Multimodal Attention Networks (DMAN), which employs a deep
model to jointly embed multimodal contents and link information. Specifically,
to effectively capture the correlations between multimodal contents, we propose
a multimodal attention network to encode the fine-granularity relation between
image regions and textual words. To leverage the network structure for
embedding learning, a novel Siamese-Triplet neural network is proposed to model
the links among images. With the joint deep model, the learnt embedding can
capture both the multimodal contents and the nonlinear network information.
Extensive experiments are conducted to investigate the effectiveness of our
approach in the applications of multi-label classification and cross-modal
search. Compared to state-of-the-art image embeddings, our proposed DMAN
achieves significant improvement in the tasks of multi-label classification and
cross-modal search.
| 1 | 0 | 0 | 1 | 0 | 0 |
DOC: Deep Open Classification of Text Documents | Traditional supervised learning makes the closed-world assumption that the
classes appearing in the test data must have appeared in training. This also
applies to text learning or text classification. As learning is used
increasingly in dynamic open environments where some new/test documents may not
belong to any of the training classes, identifying these novel documents during
classification presents an important problem. This problem is called open-world
classification or open classification. This paper proposes a novel deep
learning based approach. It outperforms existing state-of-the-art techniques
dramatically.
| 1 | 0 | 0 | 0 | 0 | 0 |
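The abstract leaves the mechanism implicit; a common way to realize open (rejecting) text classification is one-vs-rest sigmoid scores with a per-class rejection threshold. A toy sketch of that idea only; the threshold and scores below are assumptions, not the paper's actual architecture:

```python
import numpy as np

def open_classify(scores, threshold=0.5):
    """Given per-class sigmoid scores for one document, return the
    predicted class index, or -1 ("rejected") if no class is confident
    enough -- the document is treated as belonging to an unseen class."""
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else -1

print(open_classify(np.array([0.1, 0.8, 0.3])))  # -> 1 (known class)
print(open_classify(np.array([0.2, 0.4, 0.3])))  # -> -1 (novel document)
```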
When Is the First Spurious Variable Selected by Sequential Regression Procedures? | Applied statisticians use sequential regression procedures to produce a
ranking of explanatory variables and, in settings of low correlations between
variables and strong true effect sizes, expect that variables at the very top
of this ranking are truly relevant to the response. In a regime of certain
sparsity levels, however, three examples of sequential procedures--forward
stepwise, the lasso, and least angle regression--are shown to include the first
spurious variable unexpectedly early. We derive a rigorous, sharp prediction of
the rank of the first spurious variable for these three procedures,
demonstrating that the first spurious variable occurs earlier and earlier as
the regression coefficients become denser. This counterintuitive phenomenon
persists for statistically independent Gaussian random designs and an
arbitrarily large magnitude of the true effects. We gain a better understanding
of the phenomenon by identifying the underlying cause and then leverage the
insights to introduce a simple visualization tool termed the double-ranking
diagram to improve on sequential methods. As a byproduct of these findings, we
obtain the first provable result certifying the exact equivalence between the
lasso and least angle regression in the early stages of solution paths beyond
orthogonal designs. This equivalence can seamlessly carry over many important
model selection results concerning the lasso to least angle regression.
| 0 | 0 | 1 | 1 | 0 | 0 |
Interpreting Blackbox Models via Model Extraction | Interpretability has become incredibly important as machine learning is
increasingly used to inform consequential decisions. We propose to construct
global explanations of complex, blackbox models in the form of a decision tree
approximating the original model---as long as the decision tree is a good
approximation, then it mirrors the computation performed by the blackbox model.
We devise a novel algorithm for extracting decision tree explanations that
actively samples new training points to avoid overfitting. We evaluate our
algorithm on a random forest to predict diabetes risk and a learned controller
for cart-pole. Compared to several baselines, our decision trees are both
substantially more accurate and equally or more interpretable based on a user
study. Finally, we describe several insights provided by our interpretations,
including a causal issue validated by a physician.
| 1 | 0 | 0 | 0 | 0 | 0 |
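The distillation core of this approach, fitting a decision tree to a blackbox model's predictions, can be sketched with scikit-learn. The paper's active sampling of new training points is omitted, and the dataset and tree depth here are illustrative:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
blackbox = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Distill: train an interpretable tree on the blackbox's own predictions.
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
surrogate.fit(X, blackbox.predict(X))

# Fidelity: how well the tree mirrors the blackbox on the same inputs.
print("fidelity R^2:", surrogate.score(X, blackbox.predict(X)))
```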
Extending holomorphic motions and monodromy | Let $E$ be a closed set in the Riemann sphere $\widehat{\mathbb{C}}$. We
consider a holomorphic motion $\phi$ of $E$ over a complex manifold $M$, that
is, a holomorphic family of injections on $E$ parametrized by $M$. It is known
that if $M$ is the unit disk $\Delta$ in the complex plane, then any
holomorphic motion of $E$ over $\Delta$ can be extended to a holomorphic motion
of the Riemann sphere over $\Delta$. In this paper, we consider conditions
under which a holomorphic motion of $E$ over a non-simply connected Riemann
surface $X$ can be extended to a holomorphic motion of $\widehat{\mathbb{C}}$
over $X$. Our main result shows that a topological condition, the triviality of
the monodromy, gives a necessary and sufficient condition for a holomorphic
motion of $E$ over $X$ to be extended to a holomorphic motion of
$\widehat{\mathbb{C}}$ over $X$. We give topological and geometric conditions
for a holomorphic motion over a Riemann surface to be extended. We also apply
our result to a lifting problem for holomorphic maps to Teichmüller spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
A/D Converter Architectures for Energy-Efficient Vision Processor | AI applications have become pervasive in the modern world. Among AI
applications, computer-vision (CV) related applications have attracted
particular interest. Hardware implementation of CV processors necessitates a
high-performance but low-power image detector. The key to energy efficiency
lies in analog-to-digital conversion, where the output of the imaging detector
is transferred to the digital domain so that CV algorithms can operate on the
data. In this paper, analog-to-digital converter architectures are compared,
and an example ADC design is proposed which achieves both good performance and
low power consumption.
Quotients in monadic programming: Projective algebras are equivalent to coalgebras | In monadic programming, datatypes are presented as free algebras, generated
by data values, and by the algebraic operations and equations capturing some
computational effects. These algebras are free in the sense that they satisfy
just the equations imposed by their algebraic theory, and remain free of any
additional equations. The consequence is that they do not admit quotient types.
This is, of course, often inconvenient. Whenever a computation involves data
with multiple representatives, and they need to be identified according to some
equations that are not satisfied by all data, the monadic programmer has to
leave the universe of free algebras, and resort to explicit destructors. We
characterize the situation when these destructors are preserved under all
operations, and the resulting quotients of free algebras are also their
subalgebras. Such quotients are called *projective*. Although popular in
universal algebra, projective algebras did not attract much attention in the
monadic setting, where they turn out to have a surprising avatar: for any given
monad, a suitable category of projective algebras is equivalent to the
category of coalgebras for the comonad induced by any monad resolution. For a
monadic programmer, this equivalence provides a convenient way to implement
polymorphic quotients as coalgebras. The dual correspondence of injective
coalgebras and all algebras leads to a different family of quotient types,
which seems to have a different family of applications. Both equivalences also
entail several general corollaries concerning monadicity and comonadicity.
| 1 | 0 | 1 | 0 | 0 | 0 |
Density Independent Algorithms for Sparsifying $k$-Step Random Walks | We give faster algorithms for producing sparse approximations of the
transition matrices of $k$-step random walks on undirected, weighted graphs.
These transition matrices also form graphs, and arise as intermediate objects
in a variety of graph algorithms. Our improvements are based on a better
understanding of processes that sample such walks, as well as tighter bounds on
key weights underlying these sampling processes. On a graph with $n$ vertices
and $m$ edges, our algorithm produces a graph with about $n\log{n}$ edges that
approximates the $k$-step random walk graph in about $m + n \log^4{n}$ time. In
order to obtain this runtime bound, we also revisit "density independent"
algorithms for sparsifying graphs whose runtime overhead is expressed only in
terms of the number of vertices.
| 1 | 0 | 0 | 0 | 0 | 0 |
On a generalized $k$-FL sequence and its applications | We introduce a generalized $k$-FL sequence and a special kind of pair of real
numbers related to it, and give an application to the integral
solutions of a certain equation using those pairs. Also, we associate skew
circulant and circulant matrices to each generalized $k$-FL sequence, and study
the determinantal variety of those matrices as an application.
| 0 | 0 | 1 | 0 | 0 | 0 |
Supervised Metric Learning with Generalization Guarantees | The crucial importance of metrics in machine learning algorithms has led to
an increasing interest in optimizing distance and similarity functions, an area
of research known as metric learning. When data consist of feature vectors, a
large body of work has focused on learning a Mahalanobis distance. Less work
has been devoted to metric learning from structured objects (such as strings or
trees), most of it focusing on optimizing a notion of edit distance. We
identify two important limitations of current metric learning approaches.
First, they make it possible to improve the performance of local algorithms such as
k-nearest neighbors, but metric learning for global algorithms (such as linear
classifiers) has not been studied so far. Second, the question of the
generalization ability of metric learning methods has been largely ignored. In
this thesis, we propose theoretical and algorithmic contributions that address
these limitations. Our first contribution is the derivation of a new kernel
function built from learned edit probabilities. Our second contribution is a
novel framework for learning string and tree edit similarities inspired by the
recent theory of (e,g,t)-good similarity functions. Using uniform stability
arguments, we establish theoretical guarantees for the learned similarity that
give a bound on the generalization error of a linear classifier built from that
similarity. In our third contribution, we extend these ideas to metric learning
from feature vectors by proposing a bilinear similarity learning method that
efficiently optimizes the (e,g,t)-goodness. Generalization guarantees are
derived for our approach, highlighting that our method minimizes a tighter
bound on the generalization error of the classifier. Our last contribution is a
framework for establishing generalization bounds for a large class of existing
metric learning algorithms based on a notion of algorithmic robustness.
| 1 | 0 | 0 | 0 | 0 | 0 |
Symbolic dynamics for Kuramoto-Sivashinsky PDE on the line --- a computer-assisted proof | The Kuramoto-Sivashinsky PDE on the line with odd and periodic boundary
conditions and with parameter $\nu=0.1212$ is considered. We give a
computer-assisted proof of the existence of symbolic dynamics and of a countable
infinity of periodic orbits with arbitrarily large periods.
| 0 | 1 | 0 | 0 | 0 | 0 |
Individual Dynamical Masses of Ultracool Dwarfs | We present the full results of our decade-long astrometric monitoring
programs targeting 31 ultracool binaries with component spectral types M7-T5.
Joint analysis of resolved imaging from Keck Observatory and Hubble Space
Telescope and unresolved astrometry from CFHT/WIRCam yields parallactic
distances for all systems, robust orbit determinations for 23 systems, and
photocenter orbits for 19 systems. As a result, we measure 38 precise
individual masses spanning 30-115 $M_{\rm Jup}$. We determine a
model-independent substellar boundary that is $\approx$70 $M_{\rm Jup}$ in mass
($\approx$L4 in spectral type), and we validate Baraffe et al. (2015)
evolutionary model predictions for the lithium-depletion boundary (60 $M_{\rm
Jup}$ at field ages). Assuming each binary is coeval, we test models of the
substellar mass-luminosity relation and find that in the L/T transition, only
the Saumon & Marley (2008) "hybrid" models accounting for cloud clearing match
our data. We derive a precise, mass-calibrated spectral type-effective
temperature relation covering 1100-2800 K. Our masses enable a novel direct
determination of the age distribution of field brown dwarfs spanning L4-T5 and
30-70 $M_{\rm Jup}$. We determine a median age of 1.3 Gyr, and our population
synthesis modeling indicates our sample is consistent with a constant star
formation history modulated by dynamical heating in the Galactic disk. We
discover two triple-brown-dwarf systems, the first with directly measured
masses and eccentricities. We examine the eccentricity distribution, carefully
considering biases and completeness, and find that low-eccentricity orbits are
significantly more common among ultracool binaries than solar-type binaries,
possibly indicating the early influence of long-lived dissipative gas disks.
Overall, this work represents a major advance in the empirical view of very
low-mass stars and brown dwarfs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Beta Dips in the Gaia Era: Simulation Predictions of the Galactic Velocity Anisotropy Parameter for Stellar Halos | The velocity anisotropy parameter, beta, is a measure of the kinematic state
of orbits in the stellar halo which holds promise for constraining the merger
history of the Milky Way (MW). We determine global trends for beta as a
function of radius from three suites of simulations, including accretion only
and cosmological hydrodynamic simulations. We find that both types of
simulations are consistent and predict strong radial anisotropy (<beta>~0.7)
for Galactocentric radii greater than 10 kpc. Previous observations of beta for
the MW's stellar halo claim a detection of an isotropic or tangential "dip" at
r~20 kpc. Using the N-body+SPH simulations, we investigate the temporal
persistence, population origin, and severity of "dips" in beta. We find dips in
the in situ stellar halo are long-lived, while dips in the accreted stellar
halo are short-lived and tied to the recent accretion of satellite material. We
also find that a major merger as early as z~1 can result in a present day low
(isotropic to tangential) value of beta over a wide range of radii and angular
expanse. While all of these mechanisms are plausible drivers for the beta dip
observed in the MW, in the simulations, each mechanism has a unique metallicity
signature associated with it, implying that future spectroscopic surveys could
distinguish between them. Since an accurate knowledge of beta(r) is required
for measuring the mass of the MW halo, we note significant transient dips in
beta could cause an overestimate of the halo's mass when using spherical Jeans
equation modeling.
| 0 | 1 | 0 | 0 | 0 | 0 |
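For reference, the velocity anisotropy parameter discussed in this abstract is conventionally defined from the spherical velocity dispersions as

$$\beta(r) = 1 - \frac{\sigma_\theta^2(r) + \sigma_\phi^2(r)}{2\,\sigma_r^2(r)},$$

so that $\beta = 1$ for purely radial orbits, $\beta = 0$ for isotropy, and $\beta \to -\infty$ for purely tangential orbits; a "dip" is a local excursion of $\beta(r)$ toward isotropic or tangential values.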
Computational Flows in Arithmetic | A computational flow is a pair consisting of a sequence of computational
problems of a certain sort and a sequence of computational reductions among
them. In this paper we will develop a theory for these computational flows and
we will use it to make a sound and complete interpretation for bounded theories
of arithmetic. This property helps us to decompose a first-order arithmetical
proof into a sequence of computational reductions by which we can extract the
computational content of low complexity statements in some bounded theories of
arithmetic such as $I\Delta_0$, $T^k_n$, $I\Delta_0+EXP$ and $PRA$. In the last
section, by generalizing term-length flows to ordinal-length flows, we will
extend our investigation from bounded theories to strong unbounded ones such as
$I\Sigma_n$ and $PA+TI(\alpha)$ and we will capture their total $NP$ search
problems as a consequence.
| 1 | 0 | 1 | 0 | 0 | 0 |
Injective homomorphisms of mapping class groups of non-orientable surfaces | Let $N$ be a compact, connected, non-orientable surface of genus $\rho$ with
$n$ boundary components, with $\rho \ge 5$ and $n \ge 0$, and let $\mathcal{M}
(N)$ be the mapping class group of $N$. We show that, if $\mathcal{G}$ is a
finite index subgroup of $\mathcal{M} (N)$ and $\varphi: \mathcal{G} \to
\mathcal{M} (N)$ is an injective homomorphism, then there exists $f_0 \in
\mathcal{M} (N)$ such that $\varphi (g) = f_0 g f_0^{-1}$ for all $g \in
\mathcal{G}$. We deduce that the abstract commensurator of $\mathcal{M} (N)$
coincides with $\mathcal{M} (N)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Transport in a disordered $ν=2/3$ fractional quantum Hall junction | Electric and thermal transport properties of a $\nu=2/3$ fractional quantum
Hall junction are analyzed. We investigate the evolution of the electric and
thermal two-terminal conductances, $G$ and $G^Q$, with system size $L$ and
temperature $T$. This is done both for the case of strong interaction between
the 1 and 1/3 modes (when the low-temperature physics of the interacting
segment of the device is controlled by the vicinity of the strong-disorder
Kane-Fisher-Polchinski fixed point) and for relatively weak interaction, for
which the disorder is irrelevant at $T=0$ in the renormalization-group sense.
The transport properties in both cases are similar in several respects. In
particular, $G(L)$ is close to 4/3 (in units of $e^2/h$) and $G^Q$ to 2 (in
units of $\pi T / 6 \hbar$) for small $L$, independently of the interaction
strength. For large $L$ the system is in an incoherent regime, with $G$ given
by 2/3 and $G^Q$ showing the Ohmic scaling, $G^Q\propto 1/L$, again for any
interaction strength. The hallmark of the strong-disorder fixed point is the
emergence of an intermediate range of $L$, in which the electric conductance
shows strong mesoscopic fluctuations and the thermal conductance is $G^Q=1$.
The analysis is extended also to a device with floating 1/3 mode, as studied in
a recent experiment [A. Grivnin et al, Phys. Rev. Lett. 113, 266803 (2014)].
| 0 | 1 | 0 | 0 | 0 | 0 |
Giant Field Enhancement in Longitudinal Epsilon Near Zero Films | We report that a longitudinal epsilon-near-zero (LENZ) film leads to giant
field enhancement and strong radiation emission of sources embedded in it, and
that these features are superior to what is found in previous studies of isotropic
ENZ. LENZ films are uniaxially anisotropic films where relative permittivity
along the normal direction to the film is much smaller than unity, while the
permittivity in the transverse plane of the film is not vanishing. It has been
shown previously that realistic isotropic ENZ films do not provide large field
enhancement due to material losses; here, however, we show that the loss effects
can be overcome using LENZ films. We also prove that in comparison to the (isotropic)
ENZ case, the LENZ film field enhancement is not only remarkably larger but it
also occurs for a wider range of angles of incidence. Importantly, the field
enhancement near the interface of the LENZ film is almost independent of the
thickness unlike for the isotropic ENZ case where extremely small thickness is
required. We show that for a LENZ structure consisting of a multilayer of
dysprosium-doped cadmium oxide and silicon accounting for realistic losses,
field intensity enhancement of 30 is obtained, which is almost 10 times larger
than that obtained with realistic ENZ materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analysis and optimal individual pitch control decoupling by inclusion of an azimuth offset in the multi-blade coordinate transformation | With the trend of increasing wind turbine rotor diameters, the mitigation of
blade fatigue loadings is of special interest to extend the turbine lifetime.
Fatigue load reductions can be partly accomplished using Individual Pitch
Control (IPC) facilitated by the so-called Multi-Blade Coordinate (MBC)
transformation. This operation transforms and decouples the blade load signals
in a yaw- and tilt-axis. However, in practical scenarios, the resulting
transformed system still shows coupling between the axes, posing a need for
more advanced Multiple-Input Multiple-Output (MIMO) control architectures. This
paper presents a novel analysis and design framework for decoupling of the
non-rotating axes by the inclusion of an azimuth offset in the reverse MBC
transformation, enabling the application of simple Single-Input Single-Output
(SISO) controllers. A thorough analysis is given by including the azimuth
offset in a frequency-domain representation. The result is evaluated on
simplified blade models, as well as linearizations obtained from the
NREL 5-MW reference wind turbine. A sensitivity and decoupling
assessment justify the application of decentralized SISO control loops for IPC.
Furthermore, closed-loop high-fidelity simulations show beneficial effects on
pitch actuation and blade fatigue load reductions.
| 1 | 0 | 0 | 0 | 0 | 0 |
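A minimal sketch of the forward MBC transformation and a reverse transformation augmented with an azimuth offset, for a three-bladed rotor. The 2/3 scaling follows the common Coleman-transform convention; the function names and the offset value are illustrative, not taken from the paper:

```python
import numpy as np

def mbc_forward(blade_loads, psi):
    """Transform three rotating blade load signals into non-rotating
    tilt/yaw components at rotor azimuth psi (rad)."""
    angles = psi + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    tilt = (2.0 / 3.0) * np.sum(blade_loads * np.cos(angles))
    yaw = (2.0 / 3.0) * np.sum(blade_loads * np.sin(angles))
    return tilt, yaw

def mbc_reverse(tilt_pitch, yaw_pitch, psi, psi_offset=0.0):
    """Map non-rotating pitch demands back to individual blade pitch
    angles; psi_offset is the decoupling azimuth offset discussed above."""
    angles = psi + psi_offset + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    return tilt_pitch * np.cos(angles) + yaw_pitch * np.sin(angles)

tilt, yaw = mbc_forward(np.array([1.2, 0.9, 1.1]), psi=0.3)
pitches = mbc_reverse(tilt, yaw, psi=0.3, psi_offset=np.deg2rad(10.0))
```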
Deep Learning-aided Application Scheduler for Vehicular Safety Communication | 802.11p-based V2X communication uses stochastic medium access control, which
cannot prevent broadcast packet collisions, particularly during high channel
load. Wireless congestion control has been designed to keep the channel load at
an optimal point. However, vehicles' lack of precise and granular knowledge
about true channel activity, in time and space, makes it impossible to fully
avoid packet collisions. In this paper, we propose a machine learning approach
using deep neural network for learning the vehicles' transmit patterns, and as
such predicting future channel activity in space and time. We evaluate the
performance of our proposal via simulation considering multiple safety-related
V2X services involving heterogeneous transmit patterns. Our results show that
predicting channel activity, and transmitting accordingly, reduces collisions
and significantly improves communication performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Convergence rates in the central limit theorem for weighted sums of Bernoulli random fields | We prove moment inequalities for a class of functionals of i.i.d. random
fields. We then derive rates in the central limit theorem for weighted sums of
such random fields via an approximation by $m$-dependent random fields.
| 0 | 0 | 1 | 1 | 0 | 0 |
On Adaptive Estimation for Dynamic Bernoulli Bandits | The multi-armed bandit (MAB) problem is a classic example of the
exploration-exploitation dilemma. It is concerned with maximising the total
rewards for a gambler by sequentially pulling an arm from a multi-armed slot
machine where each arm is associated with a reward distribution. In static
MABs, the reward distributions do not change over time, while in dynamic MABs,
each arm's reward distribution can change, and the optimal arm can switch over
time. Motivated by many real applications where rewards are binary, we focus on
dynamic Bernoulli bandits. Standard methods like $\epsilon$-Greedy and Upper
Confidence Bound (UCB), which rely on the sample mean estimator, often fail to
track changes in the underlying reward for dynamic problems. In this paper, we
overcome the shortcoming of slow response to change by deploying adaptive
estimation in the standard methods and propose a new family of algorithms,
which are adaptive versions of $\epsilon$-Greedy, UCB, and Thompson sampling.
These new methods are simple and easy to implement. Moreover, they do not
require any prior knowledge about the dynamic reward process, which is
important for real applications. We examine the new algorithms numerically in
different scenarios and the results show solid improvements of our algorithms
in dynamic environments.
| 1 | 0 | 0 | 1 | 0 | 0 |
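The adaptive-estimation idea can be sketched for $\epsilon$-greedy by replacing the sample mean with an exponentially weighted mean, so that old rewards are discounted and changes are tracked. The discount factor, $\epsilon$, and switch point below are assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, eps, gamma = 3, 5000, 0.1, 0.99   # arms, horizon, exploration, discount
est = np.full(K, 0.5)                   # adaptive (discounted) mean estimates

def true_p(t):
    """Dynamic Bernoulli rewards: the best arm switches mid-run."""
    return np.array([0.7, 0.4, 0.5]) if t < T // 2 else np.array([0.3, 0.8, 0.5])

total = 0
for t in range(T):
    arm = rng.integers(K) if rng.random() < eps else int(np.argmax(est))
    reward = rng.random() < true_p(t)[arm]
    # Exponentially weighted update: recent rewards dominate the estimate.
    est[arm] = gamma * est[arm] + (1 - gamma) * reward
    total += reward
print("average reward:", total / T)
```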
Near Optimal Sketching of Low-Rank Tensor Regression | We study the least squares regression problem \begin{align*} \min_{\Theta \in
\mathcal{S}_{\odot D,R}} \|A\Theta-b\|_2, \end{align*} where
$\mathcal{S}_{\odot D,R}$ is the set of $\Theta$ for which $\Theta =
\sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors
$\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and
$\circ$ denotes the outer product of vectors. That is, $\Theta$ is a
low-dimensional, low-rank tensor. This is motivated by the fact that the number
of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^D p_d$, which is
significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in
ordinary least squares regression. We consider the above CP decomposition model
of tensors $\Theta$, as well as the Tucker decomposition. For both models we
show how to apply data dimensionality reduction techniques based on {\it
sparse} random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$,
to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A \Theta
- \Phi b\|_2$, for which if $\Theta'$ is a near-optimum to the smaller problem,
then it is also a near optimum to the original problem. We obtain significantly
smaller dimension and sparsity in $\Phi$ than is possible for ordinary least
squares regression, and we also provide a number of numerical simulations
supporting our theory.
| 1 | 0 | 0 | 1 | 0 | 0 |
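The sketch-and-solve step can be illustrated for ordinary least squares with a sparse, CountSketch-style projection $\Phi$ (one signed nonzero per hashed row). The problem sizes are arbitrary, and this toy omits the tensor structure handled in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 10000, 50, 400                 # rows, features, sketch size
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# CountSketch-style sparse projection: each row of A is hashed to one
# of m buckets with a random sign, so applying Phi costs O(nnz(A)).
rows = rng.integers(m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, p)); Sb = np.zeros(m)
np.add.at(SA, rows, signs[:, None] * A)
np.add.at(Sb, rows, signs * b)

theta_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
theta_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error:", np.linalg.norm(theta_sketch - theta_exact)
      / np.linalg.norm(theta_exact))
```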
Fast computation of p-values for the permutation test based on Pearson's correlation coefficient and other statistical tests | Permutation tests are among the simplest and most widely used statistical
tools. Their p-values can be computed by a straightforward sampling of
permutations. However, this way of computing p-values is often so slow that it
is replaced by an approximation, which is accurate only for part of the
interesting range of parameters. Moreover, the accuracy of the approximation
usually cannot be improved by increasing the computation time.
We introduce a new sampling-based algorithm which uses the fast Fourier
transform to compute p-values for the permutation test based on Pearson's
correlation coefficient. The algorithm is practically and asymptotically faster
than straightforward sampling. Typically, its complexity is logarithmic in the
input size, while the complexity of straightforward sampling is linear. The
idea behind the algorithm can also be used to accelerate the computation of
p-values for many other common statistical tests. The algorithm is easy to
implement, but its analysis involves results from the representation theory of
the symmetric group.
| 0 | 0 | 0 | 1 | 0 | 0 |
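For contrast with the FFT-based algorithm, here is the straightforward sampling baseline the abstract refers to: a Monte Carlo permutation p-value for Pearson's correlation. The permutation count and toy data are illustrative parameters:

```python
import numpy as np

def perm_pvalue_pearson(x, y, n_perm=10000, rng=None):
    """Two-sided Monte Carlo permutation p-value for Pearson's r."""
    if rng is None:
        rng = np.random.default_rng()
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_perm):
        r = np.corrcoef(x, rng.permutation(y))[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one to avoid p = 0

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 0.3 * x + rng.standard_normal(100)
print(perm_pvalue_pearson(x, y, rng=rng))
```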
Convergence analysis of the information matrix in Gaussian belief propagation | Gaussian belief propagation (BP) has been widely used for distributed
estimation in large-scale networks such as the smart grid, communication
networks, and social networks, where local measurements/observations are
scattered over a wide geographical area. However, the convergence of Gaussian
BP is still an open issue. In this paper, we consider the convergence of
Gaussian BP, focusing in particular on the convergence of the information
matrix. We show analytically that the exchanged message information matrix
converges for an arbitrary positive semidefinite initial value, and its distance
to the unique positive definite limit matrix decreases exponentially fast.
| 1 | 0 | 0 | 0 | 0 | 0 |
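To make the iterated object concrete: for scalar Gaussian BP on a pairwise model with precision matrix $J$, the message precisions follow the standard fixed-point update sketched below. This is the textbook form, not code from the paper; on this tree-structured example the resulting marginal precisions are exact:

```python
import numpy as np

# Pairwise Gaussian model: precision matrix J (diagonally dominant chain).
J = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
n = J.shape[0]
msg = np.zeros((n, n))  # msg[i, j]: message precision from i to j (PSD init)

for _ in range(50):
    new = np.zeros_like(msg)
    for i in range(n):
        for j in range(n):
            if i != j and J[i, j] != 0.0:
                # Precision at i from the prior plus all incoming messages,
                # excluding the message that came from j itself.
                s = J[i, i] + sum(msg[k, i] for k in range(n) if k not in (i, j))
                new[i, j] = -J[j, i] * (1.0 / s) * J[i, j]
    msg = new

# Marginal precision at each node after convergence.
marg = np.array([J[i, i] + msg[:, i].sum() for i in range(n)])
print(marg)
```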
Vortex pairs in a spin-orbit coupled Bose-Einstein condensate | Static and dynamic properties of vortices in a two-component Bose-Einstein
condensate with Rashba spin-orbit coupling are investigated. The mass current
around a vortex core in the plane-wave phase is found to be deformed by the
spin-orbit coupling, and this makes the dynamics of the vortex pairs quite
different from those in a scalar Bose-Einstein condensate. The velocity of a
vortex-antivortex pair is much smaller than that without spin-orbit coupling,
and there exist stationary states. Two vortices with the same circulation move
away from each other or unite to form a stationary state.
| 0 | 1 | 0 | 0 | 0 | 0 |
GlobeNet: Convolutional Neural Networks for Typhoon Eye Tracking from Remote Sensing Imagery | Advances in remote sensing technologies have made it possible to use
high-resolution visual data for weather observation and forecasting tasks. We
propose the use of multi-layer neural networks for understanding complex
atmospheric dynamics based on multichannel satellite images. The capability of
our model was evaluated by using a linear regression task for single typhoon
coordinates prediction. A specific combination of models and different
activation policies enabled us to obtain an interesting prediction result in
the northeastern hemisphere (ENH).
| 1 | 0 | 0 | 0 | 0 | 0 |
The Incremental Multiresolution Matrix Factorization Algorithm | Multiresolution analysis and matrix factorization are foundational tools in
computer vision. In this work, we study the interface between these two
distinct topics and obtain techniques to uncover hierarchical block structure
in symmetric matrices -- an important aspect in the success of many vision
problems. Our new algorithm, the incremental multiresolution matrix
factorization, uncovers such structure one feature at a time, and hence scales
well to large matrices. We describe how this multiscale analysis goes much
farther than what a direct global factorization of the data can identify. We
evaluate the efficacy of the resulting factorizations for relative leveraging
within regression tasks using medical imaging data. We also use the
factorization on representations learned by popular deep networks, providing
evidence of their ability to infer semantic relationships even when they are
not explicitly trained to do so. We show that this algorithm can be used as an
exploratory tool to improve the network architecture, and within numerous other
settings in vision.
| 1 | 0 | 0 | 1 | 0 | 0 |
Carlsson's rank conjecture and a conjecture on square-zero upper triangular matrices | Let $k$ be an algebraically closed field and $A$ the polynomial algebra in
$r$ variables with coefficients in $k$. In case the characteristic of $k$ is
$2$, Carlsson conjectured that for any $DG$-$A$-module $M$ of dimension $N$ as
a free $A$-module, if the homology of $M$ is nontrivial and finite dimensional
as a $k$-vector space, then $2^r\leq N$. Here we state a stronger conjecture
about varieties of square-zero upper-triangular $N\times N$ matrices with
entries in $A$. Using stratifications of these varieties via Borel orbits, we
show that the stronger conjecture holds when $N < 8$ or $r < 3$ without any
restriction on the characteristic of $k$. As a consequence, we attain a new
proof for many of the known cases of Carlsson's conjecture and give new results
when $N > 4$ and $r = 2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Effect of Anodizing Parameters on Corrosion Resistance of Coated Purified Magnesium | Magnesium and its alloys are being considered as biodegradable biomaterials.
However, high and uncontrollable corrosion rates have limited the use of
magnesium and its alloys in biological environments. In this research, high-purity
magnesium (HP-Mg) was coated with stearic acid in order to improve the
corrosion resistance of magnesium. Anodization and immersion in stearic acid
were used to form a hydrophobic layer on magnesium substrate. Different DC
voltages, times, electrolytes, and temperatures were tested. Electrochemical
impedance spectroscopy and potentiodynamic polarization were used to measure
the corrosion rates of the coated HP-Mg. The results showed that optimum
corrosion resistance occurred for specimens anodized at +4 volts for 4 minutes
at 70°C in borate benzoate. The corrosion resistance was temporarily
enhanced by 1000x.
| 0 | 1 | 0 | 0 | 0 | 0 |
The probabilistic nature of McShane's identity: planar tree coding of simple loops | In this article, we discuss a probabilistic interpretation of McShane's
identity as describing a finite measure on the space of embedded paths through a
point.
| 0 | 0 | 1 | 0 | 0 | 0 |
RT-DAP: A Real-Time Data Analytics Platform for Large-scale Industrial Process Monitoring and Control | In most process control systems nowadays, process measurements are
periodically collected and archived in historians. Analytics applications
process the data and provide results offline, or on a time scale that is
considerably slower than the pace of the manufacturing
process. Along with the proliferation of the Internet-of-Things (IoT) and the
introduction of "pervasive sensors" technology in process industries,
an increasing number of sensors and actuators are installed in process plants for
pervasive sensing and control, and the volume of produced process data is
growing exponentially. To digest these data and meet the ever-growing
requirements to increase production efficiency and improve product quality,
there needs to be a way to both improve the performance of the analytics system
and scale the system to closely monitor a much larger set of plant resources.
In this paper, we present a real-time data analytics platform, called RT-DAP,
to support large-scale continuous data analytics in process industries. RT-DAP
is designed to be able to stream, store, process and visualize a large volume
of real-time data flows collected from heterogeneous plant resources, and
feed back to the control system and operators in real time. A prototype
of the platform is implemented on Microsoft Azure. Our extensive experiments
validate the design methodologies of RT-DAP and demonstrate its efficiency in
both component and system levels.
| 1 | 0 | 0 | 0 | 0 | 0 |
BFGS convergence to nonsmooth minimizers of convex functions | The popular BFGS quasi-Newton minimization algorithm under reasonable
conditions converges globally on smooth convex functions. This result was
proved by Powell in 1976: we consider its implications for functions that are
not smooth. In particular, an analogous convergence result holds for functions,
like the Euclidean norm, that are nonsmooth at the minimizer.
| 0 | 0 | 1 | 0 | 0 | 0 |
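The phenomenon is easy to observe numerically: a standard BFGS implementation applied to the Euclidean norm, which is nonsmooth at its minimizer, still drives the iterates toward the origin. A small SciPy illustration, not the paper's experiment; note BFGS terminates here on line-search failure rather than the gradient test:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.linalg.norm(x)         # convex, nonsmooth at the minimizer 0
grad = lambda x: x / np.linalg.norm(x)  # gradient, valid away from 0

x0 = np.array([1.0, -2.0, 0.5])
res = minimize(f, x0, jac=grad, method="BFGS",
               options={"gtol": 1e-12, "maxiter": 500})
print(res.x, f(res.x))  # iterates approach the nonsmooth minimizer at 0
```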
A collaborative citizen science platform for real-time volunteer computing and games | Volunteer computing (VC) or distributed computing projects are common in the
citizen cyberscience (CCS) community and present extensive opportunities for
scientists to make use of computing power donated by volunteers to undertake
large-scale scientific computing tasks. Volunteer computing is generally a
non-interactive process for those contributing computing resources to a project
whereas volunteer thinking (VT) or distributed thinking, which allows
volunteers to participate interactively in citizen cyberscience projects to
solve human computation tasks. In this paper we describe the integration of
three tools, the Virtual Atom Smasher (VAS) game developed by CERN, LiveQ, a
job distribution middleware, and CitizenGrid, an online platform for hosting
and providing computation to CCS projects. This integration demonstrates the
combining of volunteer computing and volunteer thinking to help address the
scientific and educational goals of games like VAS. The paper introduces the
three tools and provides details of the integration process along with further
potential usage scenarios for the resulting platform.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasiparticle interference in multiband superconductors with strong coupling | We develop a theory of the quasiparticle interference (QPI) in multiband
superconductors based on strong-coupling Eliashberg approach within the Born
approximation. In the framework of this theory, we study the dependence of the
QPI response function in multiband superconductors with a nodeless s-wave
superconducting order parameter. We pay special attention to the difference in
quasiparticle scattering between bands having the same and opposite signs of
the order parameter. We show that, at momentum values close to the
momentum transfer between two bands, the energy dependence of the quasiparticle
interference response function has three singularities. Two of these correspond
to the values of the gap functions and the third one depends on both the gaps
and the transfer momentum. We argue that only the singularity near the smallest
band gap may be used as a universal tool to distinguish between $s_{++}$ and
$s_{\pm}$ order parameters. The robustness of the sign of the response-function
peak near the smaller gap value, irrespective of changes in parameters, in
both symmetry cases is a promising feature that can be harnessed
experimentally.
| 0 | 1 | 0 | 0 | 0 | 0 |
Trading Strategies Generated by Path-dependent Functionals of Market Weights | Almost twenty years ago, E.R. Fernholz introduced portfolio generating
functions which can be used to construct a variety of portfolios, solely in
terms of the individual companies' market weights. I. Karatzas and J. Ruf
recently developed another methodology for the functional construction of
portfolios, which leads to very simple conditions for strong relative arbitrage
with respect to the market. In this paper, both of these notions of functional
portfolio generation are generalized in a pathwise, probability-free setting;
portfolio generating functions are substituted by path-dependent functionals,
which involve the current market weights, as well as additional
bounded-variation functions of past and present market weights. This
generalization leads to a wider class of functionally-generated portfolios than
was heretofore possible, and yields improved conditions for outperforming the
market portfolio over suitable time-horizons.
| 0 | 0 | 0 | 0 | 0 | 1 |
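For readers unfamiliar with functional portfolio generation, here is a minimal sketch of the classical (non-pathwise) construction that the paper generalizes: the entropy-weighted portfolio generated by S(w) = -sum w_i log w_i, built solely from the market weights.

import numpy as np

def entropy_weighted_portfolio(w):
    # w: market weights (positive, summing to 1)
    S = -np.sum(w * np.log(w))       # entropy generating function
    return (-w * np.log(w)) / S      # portfolio weights, which sum to 1

w = np.array([0.5, 0.3, 0.2])
pi = entropy_weighted_portfolio(w)
print(pi, pi.sum())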
General Bayesian Inference over the Stiefel Manifold via the Givens Representation | We introduce an approach based on the Givens representation that allows for a
routine, reliable, and flexible way to infer Bayesian models with orthogonal
matrix parameters. This class of models most notably includes models from
multivariate statistics, such as factor models and probabilistic principal
component analysis (PPCA). Our approach overcomes several of the practical
barriers to using the Givens representation in a general Bayesian inference
framework. In particular, we show how to inexpensively compute the
change-of-measure term necessary for transformations of random variables. We
also show how to overcome specific topological pathologies that arise when
representing circular random variables in an unconstrained space. In addition,
we discuss how the alternative parameterization can be used to define new
distributions over orthogonal matrices as well as to constrain parameter space
to eliminate superfluous posterior modes in models such as PPCA. While previous
inference approaches to this problem involved specialized updates to the
orthogonal matrix parameters, our approach lets us represent these constrained
parameters in an unconstrained form. Unlike previous approaches, this allows
for the inference of models with orthogonal matrix parameters using any modern
inference algorithm including those available in modern Bayesian modeling
frameworks such as Stan, Edward, or PyMC3. We illustrate with examples how our
approach can be used in practice in Stan to infer models with orthogonal matrix
parameters, and we compare to existing methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Sine wave gating Silicon single-photon detectors for multiphoton entanglement experiments | Silicon single-photon detectors (SPDs) are the key devices for detecting
single photons in the visible wavelength range. Here we present high detection
efficiency silicon SPDs dedicated to the generation of multiphoton entanglement
based on the technique of high-frequency sine wave gating. The silicon
single-photon avalanche diode (SPAD) components are acquired by disassembling
six commercial single-photon counting modules (SPCMs). Using the new quenching
electronics, the average detection efficiency of the SPDs is increased from
68.6% to 73.1% at a wavelength of 785 nm. These sine wave gating SPDs are then
applied in a four-photon entanglement experiment, and the four-fold coincidence
count rate is increased by 30% compared with the original SPCMs, without
degrading the visibility.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Tutorial on Fisher Information | In many statistical applications that concern mathematical psychologists, the
concept of Fisher information plays an important role. In this tutorial we
clarify the concept of Fisher information as it manifests itself across three
different statistical paradigms. First, in the frequentist paradigm, Fisher
information is used to construct hypothesis tests and confidence intervals
using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher
information is used to define a default prior; lastly, in the minimum
description length paradigm, Fisher information is used to measure model
complexity.
| 0 | 0 | 1 | 1 | 0 | 0 |
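A worked example in the spirit of the tutorial (ours, not taken from it): the Fisher information of a single Bernoulli(theta) observation, computed symbolically.

import sympy as sp

theta, x = sp.symbols("theta x", positive=True)
logL = x * sp.log(theta) + (1 - x) * sp.log(1 - theta)
# I(theta) = -E[d^2 logL / dtheta^2]; the second derivative is linear in x,
# so substituting x -> E[x] = theta takes the expectation.
info = sp.simplify(-sp.diff(logL, theta, 2).subs(x, theta))
print(info)   # 1/(theta*(1 - theta))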
A statistical approach to identify superluminous supernovae and probe their diversity | We investigate the identification of hydrogen-poor superluminous supernovae
(SLSNe I) using a photometric analysis, without including an arbitrary
magnitude threshold. We assemble a homogeneous sample of previously classified
SLSNe I from the literature, and fit their light curves using Gaussian
processes. From the fits, we identify four photometric parameters that have a
high statistical significance when correlated, and combine them in a parameter
space that conveys information on their luminosity and color evolution. This
parameter space presents a new definition for SLSNe I, which can be used to
analyse existing and future transient datasets. We find that 90% of previously
classified SLSNe I meet our new definition. We also examine the evidence for
two subclasses of SLSNe I, combining their photometric evolution with
spectroscopic information, namely the photospheric velocity and its gradient. A
cluster analysis reveals the presence of two distinct groups. `Fast' SLSNe show
fast light curves and color evolution, large velocities, and a large velocity
gradient. `Slow' SLSNe show slow light curve and color evolution, small
expansion velocities, and an almost non-existent velocity gradient. Finally, we
discuss the impact of our analyses on the understanding of the powering engine
of SLSNe, and their use as cosmological probes in current and future
surveys.
| 0 | 1 | 0 | 0 | 0 | 0 |
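The smoothing step underlying this analysis can be sketched as follows (toy data and sklearn's Gaussian process regressor stand in for the paper's actual pipeline): a light curve is fit with a GP, after which photometric parameters such as the epoch of peak brightness can be read off the smoothed curve.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 25)[:, None]                  # days
mag = 20 - 2 * np.exp(-0.5 * ((t.ravel() - 30) / 15) ** 2) \
      + 0.1 * rng.normal(size=25)                     # toy light curve (mag)

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(t, mag)
tt = np.linspace(0, 100, 200)[:, None]
mu, sd = gp.predict(tt, return_std=True)
print(tt[mu.argmin()])   # epoch of peak brightness (minimum magnitude)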
Low fertility rate reversal: a feature of interactions between Biological and Economic systems | An empirical relation indicates that an increase of living standard decreases
the Total Fertility Rate (TFR), but this trend was broken in highly developed
countries in 2005. The reversal of the TFR was associated with the continuous
economic and social development expressed by the Human Development Index (HDI).
We have investigated how universal and persistent the TFR reversal is. The
results show that in highly developed countries, $ \mathrm{HDI}>0.85 $, the TFR
and the HDI are not correlated in 2010-2014. Detailed analyses of correlations
and differences of the TFR and the HDI indicate a decrease of the TFR if the
HDI increases in this period. However, we found that a reversal of the TFR as a
consequence of economic development started at medium levels of the HDI, i.e. $
0.575<\mathrm{HDI}<0.85 $, in many countries. Our results show a transient
nature of the TFR reversal in highly developed countries in 2010-2014 and a
relatively stable trend of the TFR increase in medium developed countries in
longer time periods. We believe that knowledge of the fundamental nature of the
TFR is very important for the survival of medium and highly developed
societies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analysis of $p$-Laplacian Regularization in Semi-Supervised Learning | We investigate a family of regression problems in a semi-supervised setting.
The task is to assign real-valued labels to a set of $n$ sample points,
provided a small training subset of $N$ labeled points. A goal of
semi-supervised learning is to take advantage of the (geometric) structure
provided by the large number of unlabeled data when assigning labels. We
consider random geometric graphs, with connection radius $\epsilon(n)$, to
represent the geometry of the data set. Functionals which model the task reward
the regularity of the estimator function and impose or reward the agreement
with the training data. Here we consider the discrete $p$-Laplacian
regularization.
We investigate asymptotic behavior when the number of unlabeled points
increases, while the number of training points remains fixed. We uncover a
delicate interplay between the regularizing nature of the functionals
considered and the nonlocality inherent to the graph constructions. We
rigorously obtain almost optimal ranges on the scaling of $\epsilon(n)$ for the
asymptotic consistency to hold. We prove that the minimizers of the discrete
functionals in the random setting converge uniformly to the desired continuum
limit. Furthermore, we discover that for the standard model used there is a
restrictive upper bound on how quickly $\epsilon(n)$ must converge to zero as
$n \to \infty$. We introduce a new model which is as simple as the original
model, but overcomes this restriction.
| 1 | 0 | 1 | 1 | 0 | 0 |
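A minimal sketch of the special case p = 2 (standard graph Laplacian regularization) on a random geometric graph; the paper's analysis covers general p and the asymptotic regime, which this toy example does not attempt to reproduce.

import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n, N, eps = 500, 10, 0.15
X = rng.uniform(size=(n, 2))                  # sample points in the unit square
W = (cdist(X, X) < eps).astype(float)         # epsilon-graph adjacency
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                     # graph Laplacian

y = np.sin(2 * np.pi * X[:N, 0])              # labels on the first N points
# Minimize u^T L u subject to u_i = y_i on labeled nodes: this amounts to
# solving the graph Laplace equation on the unlabeled nodes.
U = np.arange(N, n)
u = np.zeros(n); u[:N] = y
u[U] = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, np.arange(N))] @ y)
print(u[:15])   # estimated labels (assumes the graph is well connected)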
Spectral stability of shifted states on star graphs | We consider the nonlinear Schrödinger (NLS) equation with the subcritical
power nonlinearity on a star graph consisting of $N$ edges and a single vertex
under generalized Kirchhoff boundary conditions. The stationary NLS equation
may admit a family of solitary waves parameterized by a translational
parameter, which we call the shifted states. The two main examples include (i)
the star graph with even $N$ under the classical Kirchhoff boundary conditions
and (ii) the star graph with one incoming edge and $N-1$ outgoing edges under a
single constraint on coefficients of the generalized Kirchhoff boundary
conditions. We obtain the general counting results on the Morse index of the
shifted states and apply them to the two examples. In the case of (i), we prove
that the shifted states with even $N \geq 4$ are saddle points of the action
functional which are spectrally unstable under the NLS flow. In the case of
(ii), we prove that the shifted states with the monotone profiles in the $N-1$
outgoing edges are spectrally stable, whereas the shifted states with
non-monotone profiles in the $N-1$ outgoing edges are spectrally unstable; the
two families intersect at the half-soliton states, which are spectrally stable
but nonlinearly unstable. Since the NLS equation on a star graph with shifted
states can be reduced to the homogeneous NLS equation on a line, the spectral
instability of shifted states is due to the perturbations breaking this
reduction. We give a simple argument suggesting that the spectrally stable
shifted states are nonlinearly unstable under the NLS flow due to the
perturbations breaking the reduction to the NLS equation on a line.
| 0 | 1 | 0 | 0 | 0 | 0 |
Estimation of the multifractional function and the stability index of linear multifractional stable processes | In this paper we are interested in multifractional stable processes where the
self-similarity index $H$ is a function of time, in other words $H$ becomes
time changing, and the stability index $\alpha$ is a constant. Using
$\beta$-negative power variations ($-1/2<\beta<0$), we propose estimators for
the value of the multifractional function $H$ at a fixed time $t_0$ and for
$\alpha$, for two cases: multifractional Brownian motion ($\alpha=2$) and
linear multifractional stable motion ($0<\alpha<2$). We establish the
consistency of our estimators for the underlying processes, together with the
rate of convergence.
| 0 | 0 | 1 | 1 | 0 | 0 |
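To illustrate the flavor of these estimators (an assumed toy setup, not the paper's estimator for the general stable case): for standard Brownian motion (H = 1/2, alpha = 2), H can be recovered from the scaling of beta-negative power variations across two lags.

import numpy as np

rng = np.random.default_rng(1)
n = 2**16
X = np.cumsum(rng.normal(scale=n**-0.5, size=n))   # Brownian path on [0, 1]

beta = -0.25                                       # in (-1/2, 0)
S1 = np.mean(np.abs(np.diff(X)) ** beta)           # lag-1 power variation
S2 = np.mean(np.abs(X[2:] - X[:-2]) ** beta)       # lag-2 power variation
H_hat = np.log2(S2 / S1) / beta                    # E|X_{t+d}-X_t|^b ~ d^(bH)
print(H_hat)                                       # close to 0.5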
Observation of "Topological" Microflares in the Solar Atmosphere | We report on observation of the unusual kind of solar microflares, presumably
associated with the so-called "topological trigger" of magnetic reconnection,
which was theoretically suggested long time ago by Gorbachev et al. (Sov. Ast.
1988, v.32, p.308) but has not been clearly identified so far by observations.
As can be seen in pictures by Hinode SOT in CaII line, there may be a bright
loop connecting two sunspots, which looks at the first sight just as a magnetic
field line connecting the opposite poles. However, a closer inspection of SDO
HMI magnetograms shows that the respective arc is anchored in the regions of
the same polarity near the sunspot boundaries. Yet another peculiar feature is
that the arc flashes almost instantly as a thin strip and then begins to expand
and decay, while the typical chromospheric flares in CaII line are much wider
and propagate progressively in space. A qualitative explanation of the unusual
flare can be given by the above-mentioned model of topological trigger. Namely,
there are such configurations of the magnetic sources on the surface of
photosphere that their tiny displacements result in the formation and fast
motion of a 3D null point along the arc located well above the plane of the
sources. So, such a null point can quickly ignite magnetic reconnection along
its entire trajectory. Pictorially, this can be presented as flipping the
so-called two-dome magnetic-field structure (which is the reason why this
mechanism is called topological). The most important prerequisite for the
development of topological instability in the two-dome structure is a cruciform
arrangement of the magnetic sources in its base, and this condition is really
satisfied in the case under consideration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Commutative positive varieties of languages | We study the commutative positive varieties of languages closed under various
operations: shuffle, renaming and product over one-letter alphabets.
| 1 | 0 | 1 | 0 | 0 | 0 |
A generalized model of social and biological contagion | We present a model of contagion that unifies and generalizes existing models
of the spread of social influences and micro-organismal infections. Our model
incorporates individual memory of exposure to a contagious entity (e.g., a
rumor or disease), variable magnitudes of exposure (dose sizes), and
heterogeneity in the susceptibility of individuals. Through analysis and
simulation, we examine in detail the case where individuals may recover from an
infection and then immediately become susceptible again (analogous to the
so-called SIS model). We identify three basic classes of contagion models which
we call \textit{epidemic threshold}, \textit{vanishing critical mass}, and
\textit{critical mass} classes, where each class of models corresponds to
different strategies for prevention or facilitation. We find that the
conditions for a particular contagion model to belong to one of these three
classes depend only on memory length and the probabilities of being infected by
one and two exposures respectively. These parameters are in principle
measurable for real contagious influences or entities, thus yielding empirical
implications for our model. We also study the case where individuals attain
permanent immunity once recovered, finding that epidemics inevitably die out
but may be surprisingly persistent when individuals possess memory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Luminous Efficiency Estimates of Meteors -I. Uncertainty analysis | The luminous efficiency of meteors is poorly known, but critical for
determining the meteoroid mass. We present an uncertainty analysis of the
luminous efficiency as determined by the classical ablation equations, and
suggest a possible method for determining the luminous efficiency of real
meteor events. We find that a two-term exponential fit to simulated lag data is
able to reproduce simulated luminous efficiencies reasonably well.
| 0 | 1 | 0 | 0 | 0 | 0 |
Application of Van Der Waals Density Functionals to Two Dimensional Systems Based on a Mixed Basis Approach | A van der Waals (vdW) density functional was implemented in the mixed basis
approach previously developed for studying two dimensional systems, in which
the vdW interaction plays an important role. The basis functions here are taken
to be the localized B-splines for the finite non-periodic dimension and plane
waves for the two periodic directions. This approach will significantly reduce
the size of the basis set, especially for large systems, and therefore is
computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian.
We applied the present algorithm to calculate the binding energy for the
two-layer graphene case and the results are consistent with data reported
earlier. We also found that, due to the relatively weak vdW interaction, the
charge density obtained self-consistently for the whole bi-layer graphene
system is not significantly different from the simple addition of those for the
two individual one-layer systems, except when the interlayer separation is small
enough that strong electron repulsion dominates. This finding suggests an
efficient way to calculate the vdW interaction for large complex systems
involving the Moiré pattern configurations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Secondary atmospheres on HD 219134 b and c | We analyze the interiors of HD 219134 b and c, which are among the coolest
super Earths detected thus far. Without using spectroscopic measurements, we
aim at constraining whether the possible atmospheres are hydrogen-rich or
hydrogen-poor. In a first step, we employ a full probabilistic Bayesian
inference analysis in order to rigorously quantify the degeneracy of interior
parameters given the data of mass, radius, refractory element abundances,
semi-major axes, and stellar irradiation. We obtain constraints on structure
and composition for core, mantle, ice layer, and atmosphere. In a second step,
we aim to draw conclusions on the nature of possible atmospheres by considering
atmospheric escape. Specifically, we compare the actual possible atmospheres to
a threshold thickness above which a primordial (H$_2$-dominated) atmosphere can
be retained against evaporation over the planet's lifetime. The best
constrained parameters are the individual layer thicknesses. The maximum radius
fractions of possible atmospheres are 0.18 and 0.13 $R$ (radius), for planets b
and c, respectively. These values are significantly smaller than the threshold
thicknesses of primordial atmospheres: 0.28 and 0.19 $R$, respectively. Thus,
the possible atmospheres of planets b and c are unlikely to be H$_2$-dominated.
However, whether possible volatile layers are made of gas or liquid/solid water
cannot be uniquely determined. Our main conclusions are: (1) the possible
atmospheres for planets b and c are enriched and thus possibly secondary in
nature, and (2) both planets may contain a gas layer, whereas the layer of HD
219134 b must be larger. HD 219134 c can be rocky.
| 0 | 1 | 0 | 0 | 0 | 0 |
Classification of rank two Lie conformal algebras | We give a complete classification (up to isomorphism) of Lie conformal
algebras which are free of rank two as $\mathbb{C}[\partial]$-modules, and determine
their automorphism groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Smooth equivalence of deformations of domains in complex Euclidean spaces | We prove that two smooth families of 2-connected domains in $\mathbb{C}$ are
smoothly equivalent if they are equivalent under a possibly discontinuous
family of biholomorphisms. We construct, for $m \geq 3$, two smooth families of
smoothly bounded $m$-connected domains in $\mathbb{C}$, and for $n \geq 2$, two
families of strictly pseudoconvex domains in $\mathbb{C}^n$, that are equivalent under
discontinuous families of biholomorphisms but not under any continuous family
of biholomorphisms. Finally, we give sufficient conditions for the smooth
equivalence of two smooth families of domains.
| 0 | 0 | 1 | 0 | 0 | 0 |
A lower bound of the hyperbolic dimension for meromorphic functions having a logarithmic Hölder tract | We improve existing lower bounds of the hyperbolic dimension for meromorphic
functions that have a logarithmic tract $\Omega$ which is a Hölder domain.
These bounds are given in terms of the fractal behavior, measured with integral
means, of the boundary of $\Omega$ at infinity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Invariant Causal Prediction for Nonlinear Models | An important problem in many domains is to predict how a system will respond
to interventions. This task is inherently linked to estimating the system's
underlying causal structure. To this end, Invariant Causal Prediction (ICP)
(Peters et al., 2016) has been proposed, which learns a causal model by
exploiting the invariance of causal relations using data from different
environments. When
considering linear models, the implementation of ICP is relatively
straightforward. However, the nonlinear case is more challenging due to the
difficulty of performing nonparametric tests for conditional independence. In
this work, we present and evaluate an array of methods for nonlinear and
nonparametric versions of ICP for learning the causal parents of given target
variables. We find that an approach which first fits a nonlinear model with
data pooled over all environments and then tests for differences between the
residual distributions across environments is quite robust across a large
variety of simulation settings. We call this procedure "invariant residual
distribution test". In general, we observe that the performance of all
approaches is critically dependent on the true (unknown) causal structure and
it becomes challenging to achieve high power if the parental set includes more
than two variables. As a real-world example, we consider fertility rate
modelling which is central to world population projections. We explore
predicting the effect of hypothetical interventions using the accepted models
from nonlinear ICP. The results reaffirm the previously observed central causal
role of child mortality rates.
| 0 | 0 | 0 | 1 | 0 | 0 |
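A minimal sketch of the invariant residual distribution test described above (the model class and the two-sample test are our choices for illustration): fit a nonlinear regression on the pooled data, then compare residual distributions across environments.

import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
env = rng.integers(0, 2, n)                    # two environments
X = rng.normal(size=(n, 2)) + env[:, None]     # candidate parent set
Y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

model = GradientBoostingRegressor().fit(X, Y)  # pooled nonlinear fit
resid = Y - model.predict(X)
stat, pval = ks_2samp(resid[env == 0], resid[env == 1])
print(pval)   # large p-value: this predictor set is not rejected as invariant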
Geometric Ergodicity of the MUCOGARCH(1,1) process | For the multivariate COGARCH(1,1) volatility process we show sufficient
conditions for the existence of a unique stationary distribution, for the
geometric ergodicity and for the finiteness of moments of the stationary
distribution. One of the conditions demands a sufficiently fast exponential
decay of the MUCOGARCH(1,1) volatility process. Furthermore, we show easily
applicable sufficient conditions for the needed irreducibility of the
volatility process living in the cone of positive semidefinite matrices, if the
driving Lévy process is a compound Poisson process.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cognitive networks: brains, internet, and civilizations | In this short essay, we discuss some basic features of cognitive activity at
several different space-time scales: from neural networks in the brain to
civilizations. One motivation for such comparative study is its heuristic
value. Attempts to better understand the functioning of the "wetware" involved
in cognitive activities of the central nervous system by comparing it with a
computing device have a long tradition. We suggest that a comparison with the
Internet might be more adequate. We briefly touch upon such subjects as
encoding, compression,
and Saussurean trichotomy langue/langage/parole in various environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Expect the unexpected: Harnessing Sentence Completion for Sarcasm Detection | The trigram `I love being' is expected to be followed by positive words such
as `happy'. In a sarcastic sentence, however, the word `ignored' may be
observed. The expected and the observed words are, thus, incongruous. We model
sarcasm detection as the task of detecting incongruity between an observed and
an expected word. In order to obtain the expected word, we use Context2Vec, a
sentence completion library based on Bidirectional LSTM. However, since the
exact word where such an incongruity occurs may not be known in advance, we
present two approaches: an All-words approach (which consults sentence
completion for every content word) and an Incongruous words-only approach
(which consults sentence completion for the 50% most incongruous content
words). The approaches outperform reported values for tweets but not for
discussion forum posts. This is likely to be because of redundant consultation
of sentence completion for discussion forum posts. Therefore, we consider an
oracle case where the exact incongruous word is manually labeled in a corpus
reported in past work. In this case, the performance is higher than that of the
all-words approach. This demonstrates the promise of using sentence completion
for sarcasm detection.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sensitivity Analysis of Deep Neural Networks | Deep neural networks (DNNs) have achieved superior performance in various
prediction tasks, but can be very vulnerable to adversarial examples or
perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to
various forms of perturbations in real applications. We introduce a novel
perturbation manifold and its associated influence measure to quantify the
effects of various perturbations on DNN classifiers. Such perturbations include
various external and internal perturbations to input samples and network
parameters. The proposed measure is motivated by information geometry and
provides desirable invariance properties. We demonstrate that our influence
measure is useful for four model building tasks: detecting potential
'outliers', analyzing the sensitivity of model architectures, comparing network
sensitivity between training and test sets, and locating vulnerable areas.
Experiments show reasonably good performance of the proposed measure for the
popular DNN models ResNet50 and DenseNet121 on CIFAR10 and MNIST datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
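As a much-simplified illustration of input sensitivity (a finite-difference probe of a linear softmax classifier, not the paper's information-geometric influence measure):

import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(10, 784)), np.zeros(10)

def probs(x):
    z = W @ x + b
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=784)
k = probs(x).argmax()                 # predicted class
eps = 1e-5
# finite-difference sensitivity of the predicted class probability
grad = np.array([(probs(x + eps * np.eye(784)[i])[k] - probs(x)[k]) / eps
                 for i in range(784)])
print(np.abs(grad).max())             # large values flag vulnerable directions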
An application of the Hylleraas-B-splines basis set: High accuracy calculations of the static dipole polarizabilities of helium | The Hylleraas-B-splines basis set is introduced in this paper, which can be
used to obtain the eigenvalues and eigenstates of the helium-like system's
Hamiltonian. Compared with the traditional B-splines basis, the rate of
convergence of our results is significantly improved. By combining this method
with a pseudo-states summation scheme, we obtain high-precision values of the
static dipole polarizabilities of the $1{}^1S-5{}^1S$ and $2{}^3S-6{}^3S$
states of helium in the length and velocity gauges, respectively, and the two
sets of results are in good agreement. The final extrapolated values of the
polarizabilities in the different quantum states reach at least eight
significant digits, which fully illustrates the advantage and convenience of
this method in problems involving continuum states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Jupiter's South Equatorial Belt cycle in 2009-2011: II, The SEB Revival | A Revival of the South Equatorial Belt (SEB) is an organised disturbance on a
grand scale. It starts with a single vigorous outbreak from which energetic
storms and disturbances spread around the planet in the different zonal
currents. The Revival that began in 2010 was better observed than any before
it. The observations largely validate the historical descriptions of these
events: the major features portrayed therein, albeit at lower resolution, are
indeed the large structural features described here. Our major conclusions
about the 2010 SEB Revival are as follows, and we show that most of them may be
typical of SEB Revivals.
1) The Revival started with a bright white plume.
2) The initial plume erupted in a pre-existing cyclonic oval ('barge').
Subsequent white plumes continued to appear on the track of this barge, which
was the location of the sub-surface source of the whole Revival.
3) These plumes were extremely bright in the methane absorption band, i.e.
thrusting up to very high altitudes, especially when new.
4) Brilliant, methane-bright plumes also appeared along the leading edge of
the central branch. Altogether, 7 plumes appeared at the source and at least 6
along the leading edge.
5) The central branch of the outbreak was composed of large convective cells,
each initiated by a bright plume, which only occupied a part of each cell,
while a very dark streak defined its west edge.
6) The southern branch began with darkening and sudden acceleration of
pre-existing faint spots in a slowly retrograding wave-train.
7) Subsequent darker spots in the southern branch were complex structures,
not coherent vortices.
8) Dark spots in the southern branch had typical SEBs jetstream speeds but
were unusually far south....
| 0 | 1 | 0 | 0 | 0 | 0 |
Improved Regularization Techniques for End-to-End Speech Recognition | Regularization is important for end-to-end speech models, since the models
are highly flexible and easy to overfit. Data augmentation and dropout have
been important for improving end-to-end models in other domains. However, they
are relatively underexplored for end-to-end speech models. Therefore, we
investigate the effectiveness of both methods for end-to-end trainable, deep
speech recognition models. We augment audio data through random perturbations
of tempo, pitch, volume, and temporal alignment, and by adding random noise. We
further investigate the effect of dropout when applied to the inputs of all
layers of the network. We show that the combination of data augmentation and
dropout gives a relative performance improvement of over 20% on both the Wall
Street Journal (WSJ) and LibriSpeech datasets. Our model performance is also
competitive with
other end-to-end speech models on both datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
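A sketch of waveform-level augmentations of the kind listed above, assuming raw audio as a NumPy array (tempo and pitch perturbation would additionally require a resampling/DSP library):

import numpy as np

def augment(audio, rng):
    gain = rng.uniform(0.8, 1.2)                    # random volume
    shift = rng.integers(-100, 100)                 # random temporal alignment
    noise = rng.normal(0, 0.005, size=audio.shape)  # random background noise
    return np.roll(audio * gain, shift) + noise

rng = np.random.default_rng(0)
audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of 440 Hz
print(augment(audio, rng)[:5])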
On the limits of coercivity in permanent magnets | The maximum coercivity that can be achieved for a given hard magnetic alloy
is estimated by computing the energy barrier for the nucleation of a reversed
domain in an idealized microstructure without any structural defects and
without any soft magnetic secondary phases. For
Sm$_{1-z}$Zr$_z$(Fe$_{1-y}$Co$_y$)$_{12-x}$Ti$_x$ based alloys, which are
considered an alternative to Nd$_2$Fe$_{14}$B magnets with lower rare-earth
content, the coercive field of a small magnetic cube is reduced to 60 percent
of the anisotropy field at room temperature and to 50 percent of the anisotropy
field at elevated temperature (473K). This decrease of the coercive field is
caused by misorientation, demagnetizing fields and thermal fluctuations.
| 0 | 1 | 0 | 0 | 0 | 0 |
$N$-soliton formula and blowup result of the Wadati-Konno-Ichikawa equation | We formulate the $N$-soliton solution of the Wadati-Konno-Ichikawa equation,
which is determined by purely algebraic equations. The derivation is based on
the matrix Riemann-Hilbert problem. We give examples of one-soliton solutions,
including smooth, bursting, and loop-type solitons. In addition, we give an
explicit example of a two-soliton solution that blows up in finite time.
| 0 | 1 | 1 | 0 | 0 | 0 |
Introduction of Improved Repairing Locality into Secret Sharing Schemes with Perfect Security | Repairing locality is an appreciated feature for distributed storage, in
which a damaged or lost data share can be repaired by accessing a subset of
other shares much smaller than is required for decoding the complete data.
However, for Secret Sharing (SS) schemes, it has been proven theoretically that
local repairing cannot be achieved with perfect security for the majority of
threshold SS schemes, where all the shares are treated equally in both secret
recovery and share repairing. In this paper we attempt to decouple the two
processes to make secure local repairing possible. Dedicated repairing
redundancies are generated only for the repairing process; they appear as
random numbers with respect to the original secret. In this manner, a threshold
SS scheme with improved repairing locality is achieved on the condition that
the security of the repairing redundancies is ensured; otherwise our scheme
degenerates into a perfect access structure equivalent to the best that
existing schemes can achieve. To maximize the security of the repairing
redundancies, a random placement
mechanism is also proposed.
| 1 | 0 | 0 | 0 | 0 | 0 |
A counterexample to a conjecture of Kiyota, Murai and Wada | Kiyota, Murai and Wada conjectured in 2002 that the largest eigenvalue of the
Cartan matrix C of a block of a finite group is rational if and only if all
eigenvalues of C are rational. We provide a counterexample to this conjecture
and discuss related questions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Minimal Hermite-type eigenbasis of the discrete Fourier transform | There exist many ways to build an orthonormal basis of $\mathbb{R}^N$,
consisting of the eigenvectors of the discrete Fourier transform (DFT). In this
paper we show that there is only one such orthonormal eigenbasis of the DFT
that is optimal in the sense of an appropriate uncertainty principle. Moreover,
we show that these optimal eigenvectors of the DFT are direct analogues of the
Hermite functions, that they also satisfy a three-term recurrence relation and
that they converge to Hermite functions as $N$ increases to infinity.
| 0 | 0 | 1 | 0 | 0 | 0 |
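A quick numerical check of the discrete analogue of the lowest Hermite function (our illustration; the paper's optimal eigenbasis is constructed differently): the periodized Gaussian is an eigenvector of the unitary DFT with eigenvalue 1.

import numpy as np

N = 64
n = np.arange(N)
# periodize exp(-pi t^2 / N) over shifts by N (a few wraps suffice numerically)
g = sum(np.exp(-np.pi * (n + m * N) ** 2 / N) for m in range(-3, 4))
Fg = np.fft.fft(g) / np.sqrt(N)   # unitary DFT
print(np.max(np.abs(Fg - g)))     # ~ machine precision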
Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm | We design new algorithms for the combinatorial pure exploration problem in
the multi-armed bandit framework. In this problem, we are given $K$
distributions and a collection of subsets $\mathcal{V} \subset 2^{[K]}$ of
these distributions, and we would like to find the subset $v \in \mathcal{V}$
that has the largest mean, while collecting, in a sequential fashion, as few
samples
from the distributions as possible. In both the fixed budget and fixed
confidence settings, our algorithms achieve new sample-complexity bounds that
provide polynomial improvements on previous results in some settings. Via an
information-theoretic lower bound, we show that no approach based on uniform
sampling can improve on ours in any regime, yielding the first interactive
algorithms for this problem with this basic property. Computationally, we show
how to efficiently implement our fixed confidence algorithm whenever
$\mathcal{V}$ supports efficient linear optimization. Our results involve
precise concentration-of-measure arguments and a new algorithm for linear
programming with exponentially many constraints.
| 1 | 0 | 0 | 1 | 0 | 0 |
Insulator to Metal Transition in WO$_3$ Induced by Electrolyte Gating | Tungsten oxide and its associated bronzes (compounds of tungsten oxide and an
alkali metal) are well known for their interesting optical and electrical
characteristics. We have modified the transport properties of thin WO$_3$ films
by electrolyte gating using both ionic liquids and polymer electrolytes. We are
able to tune the resistivity of the gated film by more than five orders of
magnitude, and a clear insulator-to-metal transition is observed. To clarify
the doping mechanism, we have performed a series of incisive operando
experiments, ruling out both a purely electronic effect (charge accumulation
near the interface) and oxygen-related mechanisms. We propose instead that
hydrogen intercalation is responsible for doping WO$_3$ into a highly
conductive ground state and provide evidence that it can be described as a
dense polaronic gas.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extensions and Exact Solutions to the Quaternion-Based RMSD Problem | We examine the problem of transforming matching collections of data points
into optimal correspondence. The classic RMSD (root-mean-square deviation)
method calculates a 3D rotation that minimizes the RMSD of a set of test data
points relative to a reference set of corresponding points. Similar literature
in aeronautics, photogrammetry, and proteomics employs numerical methods to
find the maximal eigenvalue of a particular $4\!\times\! 4$ quaternion-based
matrix, thus specifying the quaternion eigenvector corresponding to the optimal
3D rotation. Here we generalize this basic problem, sometimes referred to as
the "Procrustes Problem," and present algebraic solutions that exhibit
properties that are inaccessible to traditional numerical methods. We begin
with the 4D data problem, a problem one dimension higher than the conventional
3D problem, but one that is also solvable by quaternion methods; we then study
the 3D and 2D data problems as special cases. In addition, we consider data
that are themselves quaternions isomorphic to orthonormal triads describing 3
coordinate frames (amino acids in proteins possess such frames). Adopting a
reasonable approximation to the exact quaternion-data minimization problem, we
find a novel closed form "quaternion RMSD" (QRMSD) solution for the optimal
rotation from a quaternion data set to a reference set. We observe that
composites of the RMSD and QRMSD measures, combined with problem-dependent
parameters including scaling factors to make their incommensurate dimensions
compatible, could be suitable for certain matching tasks.
| 0 | 0 | 0 | 0 | 1 | 0 |
Particles, Cutoffs and Inequivalent Representations. Fraser and Wallace on Quantum Field Theory | We critically review the recent debate between Doreen Fraser and David
Wallace on the interpretation of quantum field theory, with the aim of
identifying where the core of the disagreement lies. We show that, despite
appearances, their conflict does not concern the existence of particles or the
occurrence of unitarily inequivalent representations. Instead, the dispute
ultimately turns on the very definition of what a quantum field theory is. We
further illustrate the fundamental differences between the two approaches by
comparing them both to the Bohmian program in quantum field theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamic Analysis of Executables to Detect and Characterize Malware | It is essential to ensure the integrity of systems that process sensitive
information and control many aspects of everyday life. We examine the use of
machine learning algorithms to detect malware using the system calls generated
by executables, alleviating attempts at obfuscation since the behavior is
monitored rather than the bytes of an executable. We examine several machine
learning
techniques for detecting malware including random forests, deep learning
techniques, and liquid state machines. The experiments examine the effects of
concept drift on each algorithm to understand how well the algorithms
generalize to novel malware samples by testing them on data that was collected
after the training data. The results suggest that each of the examined machine
learning algorithms is a viable solution to detect malware, achieving between
90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the
performance evaluation on an operational network may not match the performance
achieved in training. Namely, the CAA may be about the same, but the values for
precision and recall over the malware can change significantly. We structure
experiments to highlight these caveats and offer insights into expected
performance in operational environments. In addition, we use the induced models
to gain a better understanding about what differentiates the malware samples
from the goodware, which can further be used as a forensics tool to understand
what the malware (or goodware) was doing to provide directions for
investigation and remediation.
| 1 | 0 | 0 | 1 | 0 | 0 |
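A toy sketch of one of the model families examined (a random forest on system-call traces; the trace strings and labels here are hypothetical placeholders):

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

traces = ["open read write close", "open connect send recv close",
          "open read mmap exec", "connect send send recv"]
labels = [0, 1, 0, 1]   # 0 = goodware, 1 = malware (toy labels)

vec = CountVectorizer(ngram_range=(1, 2))   # system-call n-grams as features
X = vec.fit_transform(traces)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(vec.transform(["open connect send close"])))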
Network support of talented people | Network support is a key success factor for talented people. As an example,
the Hungarian Talent Support Network involves close to 1500 Talent Points and
more than 200,000 people. This network started the Hungarian Templeton Program
identifying and helping 315 exceptional cognitive talents. This network is a
part of the European Talent Support Network initiated by the European Council
for High Ability involving more than 300 organizations in over 30 countries in
Europe and extending to other continents. These networks provide good
examples of how talented people often occupy a central, but highly dynamic,
position in social networks. The involvement of such 'creative nodes' in
network-related decision-making processes is vital, especially in the face of
novel environmental challenges. Such adaptive/learning responses characterize a
large
variety of complex systems from proteins, through brains to society. It is
crucial for talent support programs to use these networking and learning
processes to increase their efficiency further.
| 1 | 1 | 0 | 0 | 0 | 0 |
How to Escape Saddle Points Efficiently | This paper shows that a perturbed form of gradient descent converges to a
second-order stationary point in a number of iterations which depends only
poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The
convergence rate of this procedure matches the well-known convergence rate of
gradient descent to first-order stationary points, up to log factors. When all
saddle points are non-degenerate, all second-order stationary points are local
minima, and our result thus shows that perturbed gradient descent can escape
saddle points almost for free. Our results can be directly applied to many
machine learning applications, including deep learning. As a particular
concrete example of such an application, we show that our results can be used
directly to establish sharp global convergence rates for matrix factorization.
Our results rely on a novel characterization of the geometry around saddle
points, which may be of independent interest to the non-convex optimization
community.
| 1 | 0 | 1 | 1 | 0 | 0 |
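A minimal sketch of perturbed gradient descent (illustrative constants, not the paper's tuned parameters): take gradient steps, and when the gradient is small, add a small random perturbation to escape the saddle.

import numpy as np

def perturbed_gd(grad, x, eta=0.1, g_thresh=1e-3, r=1e-2, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:                 # near-stationary
            x = x + rng.uniform(-r, r, size=x.shape)     # perturb in a ball
        else:
            x = x - eta * g
    return x

# f(x, y) = x^2 + y^4/4 - y^2/2 has a saddle at the origin; plain GD started
# on the x-axis stays stuck there, while the perturbation pushes iterates
# toward a neighborhood of the minima at (0, +-1).
grad = lambda v: np.array([2 * v[0], v[1] ** 3 - v[1]])
print(perturbed_gd(grad, np.array([1.0, 0.0])))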
Geert Hofstede et al.'s set of national cultural dimensions - popularity and criticisms | This article outlines different stages in the development of the national
culture model created by Geert Hofstede and his affiliates. The paper
synthesizes a contemporary review of the application areas of this framework.
Numerous applications of the dimension set are used as a source for identifying
significant critiques concerning different aspects of the model's operation.
These critiques are classified, and their underlying reasons are also outlined
by means of a fishbone diagram.
| 0 | 0 | 0 | 0 | 0 | 1 |
Deep and Confident Prediction for Time Series at Uber | Reliable uncertainty estimation for time series prediction is critical in
many fields, including physics, biology, and manufacturing. At Uber,
probabilistic time series forecasting is used for robust prediction of the
number of trips during special events, driver incentive allocation, as well as
real-time anomaly detection across millions of metrics. Classical time series
models are often used in conjunction with a probabilistic formulation for
uncertainty estimation. However, such models are hard to tune, scale, and add
exogenous variables to. Motivated by the recent resurgence of Long Short Term
Memory networks, we propose a novel end-to-end Bayesian deep model that
provides time series prediction along with uncertainty estimation. We provide
detailed experiments of the proposed solution on completed trips data, and
successfully apply it to large-scale time series anomaly detection at Uber.
| 0 | 0 | 0 | 1 | 0 | 0 |
Coverage Analysis of a Vehicular Network Modeled as Cox Process Driven by Poisson Line Process | In this paper, we consider a vehicular network in which the wireless nodes
are located on a system of roads. We model the roadways, which are
predominantly straight and randomly oriented, by a Poisson line process (PLP)
and the locations of nodes on each road as a homogeneous 1D Poisson point
process (PPP). Assuming that each node transmits independently, the locations
of transmitting and receiving nodes are given by two Cox processes driven by
the same PLP. For this setup, we derive the coverage probability of a typical
receiver, which is an arbitrarily chosen receiving node, assuming independent
Nakagami-$m$ fading over all wireless channels. Assuming that the typical
receiver connects to its closest transmitting node in the network, we first
derive the distribution of the distance between the typical receiver and the
serving node to characterize the desired signal power. We then characterize
coverage probability for this setup, which involves two key technical
challenges. First, we need to handle several cases as the serving node can
possibly be located on any line in the network and the corresponding
interference experienced at the typical receiver is different in each case.
Second, conditioning on the serving node imposes constraints on the spatial
configuration of lines, which require careful analysis of the conditional
distribution of the lines. We address these challenges in order to accurately
characterize the interference experienced at the typical receiver. We then
derive an exact expression for coverage probability in terms of the derivative
of Laplace transform of interference power distribution. We analyze the trends
in coverage probability as a function of the network parameters: line density
and node density. We also study the asymptotic behavior of this model and
compare the coverage performance with that of a homogeneous 2D PPP model with
the same node density.
| 1 | 0 | 0 | 0 | 0 | 0 |
Database Engines: Evolution of Greenness | Context: Information Technology consumes up to 10% of the world's
electricity generation, contributing to CO2 emissions and high energy costs.
Data centers, particularly databases, use up to 23% of this energy. Therefore,
building an energy-efficient (green) database engine could reduce energy
consumption and CO2 emissions.
Goal: To understand the factors driving databases' energy consumption and
execution time throughout their evolution.
Method: We conducted an empirical case study of energy consumption by two
MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the
relationships of four software metrics to energy consumption and execution time
to determine which metrics reflect the greenness and performance of a database.
Results: Our analysis shows that database engines' energy consumption and
execution time increase as databases evolve. Moreover, the Lines of Code metric
is correlated moderately to strongly with energy consumption and execution time
in 88% of cases.
Conclusions: Our findings provide insights to both practitioners and
researchers. Database administrators may use them to select a fast, green
release of the MySQL database engine. MySQL database-engine developers may use
the software metric to assess products' greenness and performance. Researchers
may use our findings to further develop new hypotheses or build models to
predict greenness and performance of databases.
| 1 | 0 | 0 | 0 | 0 | 0 |
Parseval Networks: Improving Robustness to Adversarial Examples | We introduce Parseval networks, a form of deep neural networks in which the
Lipschitz constant of linear, convolutional and aggregation layers is
constrained to be smaller than 1. Parseval networks are empirically and
theoretically motivated by an analysis of the robustness of the predictions
made by deep neural networks when their input is subject to an adversarial
perturbation. The most important feature of Parseval networks is to maintain
weight matrices of linear and convolutional layers to be (approximately)
Parseval tight frames, which are extensions of orthogonal matrices to
non-square matrices. We describe how these constraints can be maintained
efficiently during SGD. We show that Parseval networks match the
state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House
Numbers (SVHN) while being more robust than their vanilla counterpart against
adversarial examples. Incidentally, Parseval networks also tend to train faster
and make better use of the full capacity of the networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
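A sketch of how the Parseval (tight frame) constraint can be maintained, assuming the retraction W <- (1 + beta) W - beta W W^T W applied after gradient steps (here iterated on a random matrix purely for illustration):

import numpy as np

def parseval_retraction(W, beta=0.5, iters=15):
    # drives the singular values of W toward 1, so that W W^T ~ I
    for _ in range(iters):
        W = (1 + beta) * W - beta * W @ W.T @ W
    return W

rng = np.random.default_rng(0)
W = 0.2 * rng.normal(size=(4, 8))   # a wide (non-square) layer matrix
W = parseval_retraction(W)
print(np.round(W @ W.T, 3))         # approximately the 4x4 identity

In training, a single such update with a small beta would typically be interleaved with each SGD step.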
Production of 82Se enriched Zinc Selenide (ZnSe) crystals for the study of neutrinoless double beta decay | High purity Zinc Selenide (ZnSe) crystals are produced starting from
elemental Zn and Se to be used for the search of the neutrinoless double beta
decay ($0\nu$DBD) of 82Se. In order to increase the number of emitting
nuclides, enriched 82Se is used. Dedicated production lines for the synthesis
and conditioning of the Zn82Se powder, in order to make it suitable for crystal
growth, were assembled in compliance with the radio-purity constraints specific
to rare event physics experiments. Besides routine checks of impurity
concentrations, high sensitivity measurements are made of radio-isotope
concentrations in raw materials, reactants, consumables, ancillaries and
intermediary products used for ZnSe crystal production. Indications are given
on the crystals' perfection and how it is achieved. Since very expensive
isotopically enriched material (82Se) is used, special attention is given to
achieving the maximum yield in the mass balance of all production stages.
Production and certification protocols are presented, and the resulting
ready-to-use Zn82Se crystals are
described.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computing Human-Understandable Strategies | Algorithms for equilibrium computation generally make no attempt to ensure
that the computed strategies are understandable by humans. For instance, the
strategies for the strongest poker agents are represented as massive binary
files. In many situations, we would like to compute strategies that can
actually be implemented by humans, who may have computational limitations and
may only be able to remember a small number of features or components of the
strategies that have been computed. We study poker games where private
information distributions can be arbitrary. We create a large training set of
game instances and solutions, by randomly selecting the information
probabilities, and present algorithms that learn from the training instances in
order to perform well in games with unseen information distributions. We are
able to conclude several new fundamental rules about poker strategy that can be
easily implemented by humans.
| 0 | 0 | 0 | 1 | 0 | 0 |
A statistical model for aggregating judgments by incorporating peer predictions | We propose a probabilistic model to aggregate the answers of respondents
answering multiple-choice questions. The model does not assume that everyone
has access to the same information, and so does not assume that the consensus
answer is correct. Instead, it infers the most probable world state, even if
only a minority vote for it. Each respondent is modeled as receiving a signal
contingent on the actual world state, and as using this signal to both
determine their own answer and predict the answers given by others. By
incorporating respondents' predictions of others' answers, the model infers
latent parameters corresponding to the prior over world states and the
probability of different signals being received in all possible world states,
including counterfactual ones. Unlike other probabilistic models for
aggregation, our model applies to both single and multiple questions, in which
case it estimates each respondent's expertise. The model shows good
performance, compared to a number of other probabilistic models, on data from
seven studies covering different types of expertise.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Inner 25 AU Debris Distribution in the epsilon Eri System | Debris disk morphology is wavelength dependent due to the wide range of
particle sizes and size-dependent dynamics influenced by various forces.
Resolved images of nearby debris disks reveal complex disk structures that are
difficult to distinguish from their spectral energy distributions. Therefore,
multi-wavelength resolved images of nearby debris systems provide an essential
foundation to understand the intricate interplay between collisional,
gravitational, and radiative forces that govern debris disk structures. We
present the SOFIA 35 um resolved disk image of epsilon Eri, the closest debris
disk around a star similar to the early Sun. Combining with the Spitzer
resolved image at 24 um and 15-38 um excess spectrum, we examine two proposed
origins of the inner debris in epsilon Eri: (1) in-situ planetesimal belt(s)
and (2) dragged-in grains from the cold outer belt. We find that the presence
of in-situ dust-producing planetesmial belt(s) is the most likely source of the
excess emission in the inner 25 au region. Although a small amount of
dragged-in grains from the cold belt could contribute to the excess emission in
the inner region, the resolution of the SOFIA data is high enough to rule out
the possibility that the entire inner warm excess results from dragged-in
grains, but not enough to distinguish one broad inner disk from two narrow
belts.
| 0 | 1 | 0 | 0 | 0 | 0 |
Model-Based Clustering of Nonparametric Weighted Networks | Water pollution is a major global environmental problem, and it poses a great
environmental risk to public health and biological diversity. This work is
motivated by assessing the potential environmental threat of coal mining
through increased sulfate concentrations in river networks, which do not belong
to any simple parametric distribution. However, existing network models mainly
focus on binary or discrete networks and weighted networks with known
parametric weight distributions. We propose a principled nonparametric weighted
network model based on exponential-family random graph models and local
likelihood estimation and study its model-based clustering with application to
large-scale water pollution network analysis. We do not require any parametric
distribution assumption on network weights. The proposed method greatly extends
the methodology and applicability of statistical network models. Furthermore,
it is scalable to large and complex networks in large-scale environmental
studies and geoscientific research. The power of our proposed methods is
demonstrated in simulation studies.
| 0 | 0 | 0 | 1 | 0 | 0 |
FPGA-Based Tracklet Approach to Level-1 Track Finding at CMS for the HL-LHC | During the High Luminosity LHC, the CMS detector will need charged particle
tracking at the hardware trigger level to maintain a manageable trigger rate
and achieve its physics goals. The tracklet approach is a track-finding
algorithm based on a road-search algorithm that has been implemented on
commercially available FPGA technology. The tracklet algorithm has achieved
high performance in track-finding and completes tracking within 3.4 $\mu$s on a
Xilinx Virtex-7 FPGA. An overview of the algorithm and its implementation on an
FPGA is given, results are shown from a demonstrator test stand and system
performance studies are presented.
| 0 | 1 | 0 | 0 | 0 | 0 |