title (string, length 7-239) | abstract (string, length 7-2.76k) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
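Below is a minimal, hypothetical usage sketch for the schema above. It assumes the rows that follow are exported to a one-record-per-line, pipe-delimited file named `arxiv_abstracts.psv`; that filename and the pandas-based loading are illustrative assumptions, not part of the dataset itself.

```python
# Hypothetical sketch (assumption: rows exported to "arxiv_abstracts.psv",
# one record per line, pipe-delimited, columns as in the header above).
import pandas as pd

LABEL_COLS = ["cs", "phy", "math", "stat",
              "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_abstracts.psv", sep="|",
                 names=["title", "abstract"] + LABEL_COLS)

# Each label column is a 0/1 flag; a paper may carry several subject labels.
print(df[LABEL_COLS].sum())                    # papers per subject
print(df.loc[df["cs"] == 1, "title"].head())   # a few titles tagged as cs
```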
Teaching the Doppler Effect in Astrophysics | The Doppler effect is a shift in the frequency of waves emitted from an
object moving relative to the observer. By observing and analysing the Doppler
shift in electromagnetic waves from astronomical objects, astronomers gain
greater insight into the structure and operation of our universe. In this
paper, a simple technique is described for teaching the basics of the Doppler
effect to undergraduate astrophysics students using acoustic waves. An
advantage of the technique is that it produces a visual representation of the
acoustic Doppler shift. The equipment comprises a 40 kHz acoustic transmitter
and a microphone. The sound is bounced off a computer fan and the signal
collected by a DrDAQ ADC and processed by a spectrum analyser. Widening of the
spectrum is observed as the fan power supply potential is increased from 4 to
12 V.
| 0 | 1 | 0 | 0 | 0 | 0 |
Embedding Tarskian Semantics in Vector Spaces | We propose a new linear algebraic approach to the computation of Tarskian
semantics in logic. We embed a finite model M in first-order logic with N
entities in N-dimensional Euclidean space R^N by mapping entities of M to N
dimensional one-hot vectors and k-ary relations to order-k adjacency tensors
(multi-way arrays). Second, given a logical formula F in prenex normal form, we
compile F into a set Sigma_F of algebraic formulas in multi-linear algebra with
a nonlinear operation. In this compilation, existential quantifiers are
compiled into a specific type of tensors, e.g., identity matrices in the case
of quantifying two occurrences of a variable. It is shown that a systematic
evaluation of Sigma_F in R^N gives the truth value, 1(true) or 0(false), of F
in M. Based on this framework, we also propose an unprecedented way of
computing the least models defined by Datalog programs in linear spaces via
matrix equations and empirically show its effectiveness compared to
state-of-the-art approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Constraints from Dust Mass and Mass Accretion Rate Measurements on Angular Momentum Transport in Protoplanetary Disks | We investigate the relation between disk mass and mass accretion rate to
constrain the mechanism of angular momentum transport in protoplanetary disks.
Dust mass and mass accretion rate in Chamaeleon I are correlated with a slope
close to linear, similar to the one recently identified in Lupus. We
investigate the effect of stellar mass and find that the intrinsic scatter
around the best-fit Mdust-Mstar and Macc-Mstar relations is uncorrelated. Disks
with a constant alpha viscosity can fit the observed relations between dust
mass, mass accretion rate, and stellar mass, but over-predict the strength of
the correlation between disk mass and mass accretion rate when using standard
initial conditions. We find two possible solutions. 1) The observed scatter in
Mdust and Macc is not primordial, but arises from additional physical processes
or uncertainties in estimating the disk gas mass. Most likely grain growth and
radial drift affect the observable dust mass, while variability on large time
scales affects the mass accretion rates. 2) The observed scatter is primordial,
but disks have not evolved substantially at the age of Lupus and Chamaeleon I
due to a low viscosity or a large initial disk radius. More accurate estimates
of the disk mass and gas disk sizes in a large sample of protoplanetary disks,
either through direct observations of the gas or spatially resolved
multi-wavelength observations of the dust with ALMA, are needed to discriminate
between both scenarios or to constrain alternative angular momentum transport
mechanisms such as MHD disk winds.
| 0 | 1 | 0 | 0 | 0 | 0 |
Character-Word LSTM Language Models | We present a Character-Word Long Short-Term Memory Language Model which both
reduces the perplexity with respect to a baseline word-level language model and
reduces the number of parameters of the model. Character information can reveal
structural (dis)similarities between words and can even be used when a word is
out-of-vocabulary, thus improving the modeling of infrequent and unknown words.
By concatenating word and character embeddings, we achieve up to 2.77% relative
improvement on English compared to a baseline model with a similar amount of
parameters and 4.57% on Dutch. Moreover, we also outperform baseline word-level
models with a larger number of parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Convolutional Neural Networks for Raman Spectrum Recognition: A Unified Solution | Machine learning methods have found many applications in Raman spectroscopy,
especially for the identification of chemical species. However, almost all of
these methods require non-trivial preprocessing such as baseline correction
and/or PCA as an essential step. Here we describe our unified solution for the
identification of chemical species in which a convolutional neural network is
trained to automatically identify substances according to their Raman spectrum
without the need of ad-hoc preprocessing steps. We evaluated our approach using
the RRUFF spectral database, comprising mineral sample data. Superior
classification performance is demonstrated compared with other frequently used
machine learning algorithms including the popular support vector machine.
| 1 | 0 | 0 | 1 | 0 | 0 |
Suppression of material transfer at contacting surfaces: The effect of adsorbates on Al/TiN and Cu/diamond interfaces from first-principles calculations | The effect of monolayers of oxygen (O) and hydrogen (H) on the possibility of
material transfer at aluminium/titanium nitride (Al/TiN) and copper/diamond
(Cu/C$_{\text{dia}}$) interfaces, respectively, was investigated within the
framework of density functional theory (DFT). To this end the approach,
contact, and subsequent separation of two atomically flat surfaces consisting
of the aforementioned pairs of materials were simulated. These calculations
were performed for the clean as well as oxygenated and hydrogenated Al and
C$_{\text{dia}}$ surfaces, respectively. Various contact configurations were
considered by studying several lateral arrangements of the involved surfaces at
the interface. Material transfer is typically possible at interfaces between
the investigated clean surfaces; however, the addition of O to the Al and H to
the C$_{\text{dia}}$ surfaces was found to hinder material transfer. This
passivation occurs because of a significant reduction of the adhesion energy at
the examined interfaces, which can be explained by the distinct bonding
situations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fine-grained ECG Classification Based on Deep CNN and Online Decision Fusion | Early recognition of abnormal rhythm in ECG signals is crucial for monitoring
or diagnosing patients' cardiac conditions and increasing the success rate of
the treatment. Classifying abnormal rhythms into fine-grained categories is
very challenging due to the broad taxonomy of rhythms, noise, and the lack of
real-world data and annotations from a large number of patients. This paper
presents a new ECG classification method based on Deep Convolutional Neural
Networks (DCNN) and online decision fusion. Different from previous methods
which utilize hand-crafted features or learn features from the original signal
domain, the proposed DCNN based method learns features and classifiers from the
time-frequency domain in an end-to-end manner. First, the ECG wave signal is
transformed to time-frequency domain by using Short-Time Fourier Transform.
Next, specific DCNN models are trained on ECG samples of specific length.
Finally, an online decision fusion method is proposed to fuse past and current
decisions from different models into a more accurate one. Experimental results
on both synthetic and real-world ECG datasets demonstrate the effectiveness and
efficiency of the proposed method.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Wavenet for Speech Denoising | Currently, most speech processing techniques use magnitude spectrograms as
front-end and therefore discard part of the signal by default: the
phase. In order to overcome this limitation, we propose an end-to-end learning
method for speech denoising based on Wavenet. The proposed model adaptation
retains Wavenet's powerful acoustic modeling capabilities, while significantly
reducing its time-complexity by eliminating its autoregressive nature.
Specifically, the model makes use of non-causal, dilated convolutions and
predicts target fields instead of a single target sample. The discriminative
adaptation of the model we propose learns in a supervised fashion via
minimizing a regression loss. These modifications make the model highly
parallelizable during both training and inference. Both computational and
perceptual evaluations indicate that the proposed method is preferred to Wiener
filtering, a common method based on processing the magnitude spectrogram.
| 1 | 0 | 0 | 0 | 0 | 0 |
Warped Product Space-times | Many classical results in relativity theory concerning spherically symmetric
space-times have easy generalizations to warped product space-times, with a
two-dimensional Lorentzian base and arbitrary dimensional Riemannian fibers. We
first give a systematic presentation of the main geometric constructions, with
emphasis on the Kodama vector field and the Hawking energy; the construction is
signature independent. This leads to proofs of general Birkhoff-type theorems
for warped product manifolds; our theorems in particular apply to situations
where the warped product manifold is not necessarily Einstein, and thus can be
applied to solutions with matter content in general relativity. Next we
specialize to the Lorentzian case and study the propagation of null expansions
under the assumption of the dominant energy condition. We prove several
non-existence results relating to the Yamabe class of the fibers, in the spirit
of the black-hole topology theorem of Hawking-Galloway-Schoen. Finally we
discuss the effect of the warped product ansatz on matter models. In particular
we construct several cosmological solutions to the Einstein-Euler equations
whose spatial geometry is generally not isotropic.
| 0 | 0 | 1 | 0 | 0 | 0 |
ALMA Observations of the Young Substellar Binary System 2M1207 | We present ALMA observations of the 2M1207 system, a young binary made of a
brown dwarf with a planetary-mass companion at a projected separation of about
40 au. We detect emission from dust continuum at 0.89 mm and from the $J = 3 -
2$ rotational transition of CO from a very compact disk around the young brown
dwarf. The small radius found for this brown dwarf disk may be due to
truncation from the tidal interaction with the planetary-mass companion. Under
the assumption of optically thin dust emission, we estimated a dust mass of 0.1
$M_{\oplus}$ for the 2M1207A disk, and a 3$\sigma$ upper limit of $\sim
1~M_{\rm{Moon}}$ for dust surrounding 2M1207b, which is the tightest upper
limit obtained so far for the mass of dust particles surrounding a young
planetary-mass companion. We discuss the impact of this and other
non-detections of young planetary-mass companions for models of planet
formation, which predict the presence of circum-planetary material surrounding
these objects.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multichannel Linear Prediction for Blind Reverberant Audio Source Separation | A class of methods based on multichannel linear prediction (MCLP) can achieve
effective blind dereverberation of a source, when the source is observed with a
microphone array. We propose an inventive use of MCLP as a pre-processing step
for blind source separation with a microphone array. We show theoretically
that, under certain assumptions, such pre-processing reduces the original blind
reverberant source separation problem to a non-reverberant one, which in turn
can be effectively tackled using existing methods. We demonstrate our claims
using real recordings obtained with an eight-microphone circular array in
reverberant environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
SepNE: Bringing Separability to Network Embedding | Many successful methods have been proposed for learning low dimensional
representations on large-scale networks, while almost all existing methods are
designed in inseparable processes, learning embeddings for entire networks even
when only a small proportion of nodes are of interest. This leads to great
inconvenience, especially on super-large or dynamic networks, where these
methods become almost impossible to implement. In this paper, we formalize the
problem of separated matrix factorization, based on which we elaborate a novel
objective function that preserves both local and global information. We further
propose SepNE, a simple and flexible network embedding algorithm which
independently learns representations for different subsets of nodes in
separated processes. By implementing separability, our algorithm reduces the
redundant efforts to embed irrelevant nodes, yielding scalability to
super-large networks, automatic implementation in distributed learning and
further adaptations. We demonstrate the effectiveness of this approach on
several real-world networks with different scales and subjects. With comparable
accuracy, our approach significantly outperforms state-of-the-art baselines in
running times on large networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Opinion formation in a locally interacting community with recommender | We present a model of user interaction based on the physics of kinetic
exchange, and extend it to individuals placed on a grid with local interaction.
We show with numerical analysis and partial analytical results that the
critical symmetry breaking transitions and percolation effects typical of the
full interaction model do not take place if the range of interaction is
limited, allowing for the co-existence of majority and minority opinions in the
same community.
We then introduce a peer recommender system in the model, showing that, even
with very local interaction and a small probability of appeal to the
recommender, its presence is sufficient to make both symmetry breaking and
percolation reappear. This seems to indicate that one effect of a
recommendation system is to make the opinions of a community more uniform,
reducing minority opinions or making them disappear. Although the recommender
system does homogenize the community opinion, it does not constrain it, in the
sense that every opinion has the same probability of becoming the dominant one.
A partial study suggests, however, that a "mischievous" recommender might be
able to bias a community so that one opinion emerges over the opposite with
overwhelming probability.
| 1 | 1 | 0 | 0 | 0 | 0 |
Integral representation of shallow neural network that attains the global minimum | We consider the supervised learning problem with shallow neural networks.
In unpublished experiments conducted several years prior to this study, we
noticed an interesting similarity between the distribution of hidden
parameters after backpropagation (BP) training and the ridgelet
spectrum of the same dataset. Therefore, we conjectured that the distribution
is expressed as a version of ridgelet transform, but it was not proven until
this study. One difficulty is that both the local minimizers and the ridgelet
transforms have an infinite number of varieties, and no relations are known
between them. By using the integral representation, we reformulate the BP
training as a strongly convex optimization problem and find a global minimizer.
Finally, by developing ridgelet analysis on a reproducing kernel Hilbert space
(RKHS), we write the minimizer explicitly and succeed in proving the conjecture.
The modified ridgelet transform has an explicit expression that can be computed
by numerical integration, which suggests that we can obtain the global
minimizer of BP, without BP.
| 0 | 0 | 0 | 1 | 0 | 0 |
DoKnowMe: Towards a Domain Knowledge-driven Methodology for Performance Evaluation | Software engineering considers performance evaluation to be one of the key
components of software quality assurance. Unfortunately, there seems to be a lack
of standard methodologies for performance evaluation even in the scope of
experimental computer science. Inspired by the concept of "instantiation" in
object-oriented programming, we distinguish the generic performance evaluation
logic from the distributed and ad-hoc relevant studies, and develop an abstract
evaluation methodology (by analogy of "class") we name Domain Knowledge-driven
Methodology (DoKnowMe). By replacing five predefined domain-specific knowledge
artefacts, DoKnowMe could be instantiated into specific methodologies (by
analogy of "object") to guide evaluators in performance evaluation of different
software and even computing systems. We also propose a generic validation
framework with four indicators (i.e., usefulness, feasibility, effectiveness and
repeatability), and use it to validate DoKnowMe in the Cloud services
evaluation domain. Given the positive and promising validation result, we plan
to integrate more common evaluation strategies to improve DoKnowMe and further
focus on the performance evaluation of Cloud autoscaler systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Effect of Different Wavelengths on Porous Silicon Formation Process | Porous silicon (PS) layers have been prepared in this work via a
photoelectrochemical etching (PEC) process of an n-type silicon wafer of 0.8
ohm.cm resistivity in hydrofluoric (HF) acid of 24.5 percent concentration at
different etching times (5 to 25 min). The irradiation has been achieved using
a tungsten lamp at different wavelengths (450 nm, 535 nm and 700 nm). The
morphological properties of these layers, such as surface morphology, porosity,
layer thickness, and also the etching rate have been investigated using optical
microscopy and the gravimetric method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological dimension tunes activity patterns in hierarchical modular network models | Connectivity patterns of relevance in neuroscience and systems biology can be
encoded in hierarchical modular networks (HMNs). Moreover, recent studies
highlight the role of hierarchical modular organization in shaping brain
activity patterns, providing an excellent substrate to promote both the
segregation and integration of neural information. Here we propose an extensive
numerical analysis of the critical spreading rate (or "epidemic" threshold)
--separating a phase with endemic persistent activity from one in which
activity ceases-- on diverse HMNs. By employing analytical and computational
techniques we determine the nature of such a threshold and scrutinize how it
depends on general structural features of the underlying HMN. We critically
discuss the extent to which current graph-spectral methods can be applied to
predict the onset of spreading in HMNs, and we propose the network topological
dimension as a relevant and unifying structural parameter, controlling the
epidemic threshold.
| 0 | 1 | 0 | 0 | 0 | 0 |
DFTerNet: Towards 2-bit Dynamic Fusion Networks for Accurate Human Activity Recognition | Deep Convolutional Neural Networks (DCNNs) are currently popular in human
activity recognition applications. However, in the face of modern artificial
intelligence sensor-based games, many research achievements cannot be
practically applied on portable devices. DCNNs are typically resource-intensive
and too large to be deployed on portable devices, which limits the
practical application of complex activity detection. In addition, since
portable devices do not possess high-performance Graphic Processing Units
(GPUs), there is hardly any improvement in Action Game (ACT) experience.
Besides, in order to deal with multi-sensor collaboration, all previous human
activity recognition models typically treated the representations from
different sensor signal sources equally. However, distinct types of activities
should adopt different fusion strategies. In this paper, a novel scheme is
proposed. This scheme is used to train 2-bit Convolutional Neural Networks with
weights and activations constrained to {-0.5,0,0.5}. It takes into account the
correlation between different sensor signal sources and the activity types.
This model, which we refer to as DFTerNet, aims at producing a more reliable
inference and better trade-offs for practical applications. Our basic idea is
to exploit quantization of weights and activations directly in pre-trained
filter banks and adopt dynamic fusion strategies for different activity types.
Experiments demonstrate that the dynamic fusion strategy can exceed the
baseline model performance by up to ~5% on activity recognition datasets such
as OPPORTUNITY and PAMAP2. Using the proposed quantization method, we were able
to achieve performance close to that of the full-precision counterpart.
These results were also verified using the UniMiB-SHAR dataset. In addition,
the proposed method can achieve ~9x acceleration on CPUs and ~11x memory
saving.
| 0 | 0 | 0 | 1 | 0 | 0 |
Generalized stealthy hyperuniform processes : maximal rigidity and the bounded holes conjecture | We study translation invariant stochastic processes on $\mathbb{R}^d$ or
$\mathbb{Z}^d$ whose diffraction spectrum or structure function $S(k)$, i.e.
the Fourier transform of the truncated total pair correlation function,
vanishes on an open set $U$ in the wave space. A key family of such processes
are stealthy hyperuniform point processes, for which the origin $k=0$ is in
$U$; these are of much current physical interest. We show that all such
processes exhibit the following remarkable maximal rigidity: namely, the
configuration outside a bounded region determines, with probability 1, the
exact value (or the exact locations of the points) of the process inside the
region. In particular, such processes are completely determined by their tail.
In the 1D discrete setting (i.e. $\mathbb{Z}$-valued processes on
$\mathbb{Z}$), this can also be seen as a consequence of a recent theorem of
Borichev, Sodin and Weiss; in higher dimensions or in the continuum, such a
phenomenon seems novel. For stealthy hyperuniform point processes, we prove the
Zhang-Stillinger-Torquato conjecture that such processes have bounded holes
(empty regions), with a universal bound that depends inversely on the size of
$U$.
| 0 | 1 | 0 | 0 | 0 | 0 |
High-frequency approximation of the interior Dirichlet-to-Neumann map and applications to the transmission eigenvalues | We study the high-frequency behavior of the Dirichlet-to-Neumann map for an
arbitrary compact Riemannian manifold with a non-empty smooth boundary. We show
that far from the real axis it can be approximated by a simpler operator. We
use this fact to get new results concerning the location of the transmission
eigenvalues on the complex plane. In some cases we obtain optimal transmission
eigenvalue-free regions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Conformal scalar curvature equation on S^n: functions with two close critical points (twin pseudo-peaks) | By using the Lyapunov-Schmidt reduction method without perturbation, we
consider existence results for the conformal scalar curvature equation on S^n
(n greater than or equal to 3) when the prescribed function (after being
projected to R^n) has
two close critical points, which have the same value (positive), equal
"flatness" (twin, flatness < n - 2), and exhibit maximal behavior in certain
directions (pseudo-peaks). The proof relies on a balance between the two main
contributions to the reduced functional - one from the critical points and the
other from the interaction of the two bubbles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Equivalence between non-Markovian and Markovian dynamics in epidemic spreading processes | A general formalism is introduced to allow the steady state of non-Markovian
processes on networks to be reduced to equivalent Markovian processes on the
same substrates. The example of an epidemic spreading process is considered in
detail, where all the non-Markovian aspects are shown to be captured within a
single parameter, the effective infection rate. Remarkably, this result is
independent of the topology of the underlying network, as demonstrated by
numerical simulations on two-dimensional lattices and various types of random
networks. Furthermore, an analytic approximation for the effective infection
rate is introduced, which enables the calculation of the critical point and of
the critical exponents for the non-Markovian dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Attacking Binarized Neural Networks | Neural networks with low-precision weights and activations offer compelling
efficiency advantages over their full-precision equivalents. The two most
frequently discussed benefits of quantization are reduced memory consumption,
and a faster forward pass when implemented with efficient bitwise operations.
We propose a third benefit of very low-precision neural networks: improved
robustness against some adversarial attacks, and in the worst case, performance
that is on par with full-precision models. We focus on the very low-precision
case where weights and activations are both quantized to $\pm$1, and note that
stochastically quantizing weights in just one layer can sharply reduce the
impact of iterative attacks. We observe that non-scaled binary neural networks
exhibit a similar effect to the original defensive distillation procedure that
led to gradient masking, and a false notion of security. We address this by
conducting both black-box and white-box experiments with binary models that do
not artificially mask gradients.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Treewidth and Stable Marriage | Stable Marriage is a fundamental problem to both computer science and
economics. Four well-known NP-hard optimization versions of this problem are
the Sex-Equal Stable Marriage (SESM), Balanced Stable Marriage (BSM),
max-Stable Marriage with Ties (max-SMT) and min-Stable Marriage with Ties
(min-SMT) problems. In this paper, we analyze these problems from the viewpoint
of Parameterized Complexity. We conduct the first study of these problems with
respect to the parameter treewidth. First, we study the treewidth $\mathtt{tw}$
of the primal graph. We establish that all four problems are W[1]-hard. In
particular, while it is easy to show that all four problems admit algorithms
that run in time $n^{O(\mathtt{tw})}$, we prove that all of these algorithms
are likely to be essentially optimal. Next, we study the treewidth
$\mathtt{tw}$ of the rotation digraph. In this context, the max-SMT and min-SMT
are not defined. For both SESM and BSM, we design (non-trivial) algorithms that
run in time $2^{\mathtt{tw}}n^{O(1)}$. Then, for both SESM and BSM, we also
prove that unless SETH is false, algorithms that run in time
$(2-\epsilon)^{\mathtt{tw}}n^{O(1)}$ do not exist for any fixed $\epsilon>0$.
We thus present a comprehensive, complete picture of the behavior of central
optimization versions of Stable Marriage with respect to treewidth.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Holistic Approach to Forecasting Wholesale Energy Market Prices | Electricity market price predictions enable energy market participants to
shape their consumption or supply while meeting their economic and
environmental objectives. By utilizing the basic properties of the
supply-demand matching process performed by grid operators, we develop a method
to recover the energy market's structure and predict the resulting nodal prices as
a function of generation mix and system load on the grid. Our methodology uses
the latest advancements in compressed sensing and statistics to cope with the
high-dimensional and sparse power grid topologies, underlying physical laws, as
well as scarce, public market data. Rigorous validations using Southwest Power
Pool (SPP) market data demonstrate significantly higher accuracy of the
proposed approach when compared to the state-of-the-art industry benchmark.
| 1 | 0 | 0 | 1 | 0 | 0 |
Matched bipartite block model with covariates | Community detection or clustering is a fundamental task in the analysis of
network data. Many real networks have a bipartite structure which makes
community detection challenging. In this paper, we consider a model which
allows for matched communities in the bipartite setting, in addition to node
covariates with information about the matching. We derive a simple fast
algorithm for fitting the model based on variational inference ideas and show
its effectiveness on both simulated and real data. A variation of the model to
allow for degree-correction is also considered, in addition to a novel approach
to fitting such degree-corrected models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Topological Interference Management with Decoded Message Passing | The topological interference management (TIM) problem studies
partially-connected interference networks with no channel state information
except for the network topology (i.e., connectivity graph) at the transmitters.
In this paper, we consider a similar problem in the uplink cellular networks,
while message passing is enabled at the receivers (e.g., base stations), so
that the decoded messages can be routed to other receivers via backhaul links
to help further improve network performance. For this TIM problem with decoded
message passing (TIM-MP), we model the interference pattern by conflict
digraphs, connect orthogonal access to the acyclic set coloring on conflict
digraphs, and show that one-to-one interference alignment boils down to
orthogonal access because of message passing. With the aid of polyhedral
combinatorics, we identify the structural properties of certain classes of
network topologies where orthogonal access achieves the optimal
degrees-of-freedom (DoF) region in the information-theoretic sense. The
relation to the conventional index coding with simultaneous decoding is also
investigated by formulating a generalized index coding problem with successive
decoding as a result of decoded message passing. The properties of reducibility
and criticality are also studied, by which we are able to prove the linear
optimality of orthogonal access in terms of symmetric DoF for the networks up
to four users with all possible network topologies (218 instances). Practical
issues of the tradeoff between the overhead of message passing and the
achievable symmetric DoF are also discussed, in the hope of facilitating
efficient backhaul utilization.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Regression via Multivariate Regression Depth | This paper studies robust regression in the settings of Huber's
$\epsilon$-contamination models. We consider estimators that are maximizers of
multivariate regression depth functions. These estimators are shown to achieve
minimax rates in the settings of $\epsilon$-contamination models for various
regression problems including nonparametric regression, sparse linear
regression, reduced rank regression, etc. We also discuss a general notion of
depth function for linear operators that has potential applications in robust
functional linear regression.
| 0 | 0 | 1 | 1 | 0 | 0 |
Quotients of Buildings as $W$-Groupoids | We introduce structures which model the quotients of buildings by
type-preserving group actions. These structures, which we call W-groupoids,
generalize Bruhat decompositions, chamber systems of type M, and Tits
amalgams. We define the fundamental group of a W-groupoid, and characterize
buildings as connected simply connected W-groupoids. We give a brief outline of
the covering theory of W-groupoids, which produces buildings as the universal
covers of W-groupoids. The local-to-global theorem of Tits concerning spherical
3-residues allows for the construction of W-groupoids by gluing together
quotients of generalized polygons. In this way, W-groupoids can be used to
construct exotic, hyperbolic, and wild buildings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Birth of the GUP and its effect on the entropy of the Universe in Lie-$N$-algebra | In this paper, the origin of the generalized uncertainty principle (GUP) in
an $M$-dimensional theory with Lie-$N$-algebra is considered. This theory,
which we name GLNA (Generalized Lie-$N$-Algebra) theory, can be reduced to $M$-theory
with $M=11$ and $N=3$. In this theory, at the beginning, two energies with
positive and negative signs are created from nothing and produce two types of
branes with opposite quantum numbers and different numbers of timing
dimensions. Coincident with the birth of these branes, various derivatives of
bosonic fields emerge in the action of the system which produce the $r$ GUP for
bosons. These branes interact with each other, compact and various derivatives
of spinor fields appear in the action of the system which leads to the creation
of the GUP for fermions. The previously predicted entropy of branes in the GUP
is corrected due to the emergence of higher orders of derivatives and a
different number of timing dimensions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Data-adaptive smoothing for optimal-rate estimation of possibly non-regular parameters | We consider nonparametric inference of finite dimensional, potentially
non-pathwise differentiable target parameters. In a nonparametric model, some
examples of such parameters that are always non-pathwise differentiable
include probability density functions at a point, or regression
functions at a point. In causal inference, under appropriate causal
assumptions, mean counterfactual outcomes can be pathwise differentiable or
not, depending on the degree to which the positivity assumption holds.
In this paper, given a potentially non-pathwise differentiable target
parameter, we introduce a family of approximating parameters, that are pathwise
differentiable. This family is indexed by a scalar. In kernel regression or
density estimation for instance, a natural choice for such a family is obtained
by kernel smoothing and is indexed by the smoothing level. For the
counterfactual mean outcome, a possible approximating family is obtained
through truncation of the propensity score, and the truncation level then plays
the role of the index.
We propose a method to data-adaptively select the index in the family, so as
to optimize mean squared error. We prove an asymptotic normality result, which
allows us to derive confidence intervals. Under some conditions, our estimator
achieves an optimal mean squared error convergence rate. Confidence intervals
are data-adaptive and have almost optimal width.
A simulation study demonstrates the practical performance of our estimators
for the inference of a causal dose-response curve at a given treatment dose.
| 0 | 0 | 1 | 1 | 0 | 0 |
Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach | Knowledge base completion (KBC) aims to predict missing information in a
knowledge base. In this paper, we address the out-of-knowledge-base (OOKB)
entity problem in KBC: how to answer queries concerning test entities not
observed at training time. Existing embedding-based KBC models assume that all
test entities are available at training time, making it unclear how to obtain
embeddings for new entities without costly retraining. To solve the OOKB entity
problem without retraining, we use graph neural networks (Graph-NNs) to compute
the embeddings of OOKB entities, exploiting the limited auxiliary knowledge
provided at test time. The experimental results show the effectiveness of our
proposed model in the OOKB setting. Additionally, in the standard KBC setting in
which OOKB entities are not involved, our model achieves state-of-the-art
performance on the WordNet dataset. The code and dataset are available at
this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Feature Engineering for Predictive Modeling using Reinforcement Learning | Feature engineering is a crucial step in the process of predictive modeling.
It involves the transformation of given feature space, typically using
mathematical functions, with the objective of reducing the modeling error for a
given target. However, there is no well-defined basis for performing effective
feature engineering. It involves domain knowledge, intuition, and most of all,
a lengthy process of trial and error. The human attention involved in
overseeing this process significantly influences the cost of model generation.
We present a new framework to automate feature engineering. It is based on
performance driven exploration of a transformation graph, which systematically
and compactly enumerates the space of given options. A highly efficient
exploration strategy is derived through reinforcement learning on past
examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cellular automata connections | It is shown that any two cellular automata (CA) in rule space can be
connected by a continuous path parameterized by a real number $\kappa \in (0,
\infty)$, each point in the path corresponding to a coupled map lattice (CML).
In the limits $\kappa \to 0$ and $\kappa \to \infty$ the CML becomes each of
the two CA entering in the connection. A mean-field, reduced model is obtained
from the connection and allows one to gain insight into those parameter regimes at
intermediate $\kappa$ where the dynamics is approximately homogeneous within
each neighborhood.
| 0 | 1 | 1 | 0 | 0 | 0 |
Hypothesis Testing via Euclidean Separation | We discuss an "operational" approach to testing convex composite hypotheses
when the underlying distributions are heavy-tailed. It relies upon Euclidean
separation of convex sets and can be seen as an extension of the approach to
testing by convex optimization developed in [8, 12]. In particular, we show how
one can construct quasi-optimal testing procedures for families of
distributions which are majorated, in a certain precise sense, by a
sub-spherical symmetric one and study the relationship between tests based on
Euclidean separation and "potential-based tests." We apply the promoted
methodology in the problem of sequential detection and illustrate its practical
implementation in an application to sequential detection of changes in the
input of a dynamic system.
[8] Goldenshluger, Alexander and Juditsky, Anatoli and Nemirovski, Arkadi,
Hypothesis testing by convex optimization, Electronic Journal of Statistics,
9(2):1645-1712, 2015. [12] Juditsky, Anatoli and Nemirovski, Arkadi, Hypothesis
testing via affine detectors, Electronic Journal of Statistics, 10:2204-2242,
2016.
| 0 | 0 | 1 | 1 | 0 | 0 |
Detecting in-plane tension induced crystal plasticity transition with nanoindentation | We present experimental data and simulations on the effects of in-plane
tension on nanoindentation hardness and pop-in noise. Nanoindentation
experiments using a Berkovich tip are performed on bulk polycrystalline Al
samples, under tension in a custom 4pt-bending fixture. The hardness displays a
transition, for indentation depths smaller than 10nm, as function of the
in-plane stress at a value consistent with the bulk tensile yield stress.
Displacement bursts appear insensitive to in-plane tension and this transition
disappears for larger indentation depths. Two dimensional discrete dislocation
dynamics simulations confirm that a regime exists where hardness is sensitive
to tension-induced pre-existing dislocations.
| 0 | 1 | 0 | 0 | 0 | 0 |
A novel approach for fast mining frequent itemsets using N-list structure based on MapReduce | Frequent pattern mining is one of the most significant topics in data
mining. In recent years, many algorithms have been proposed for mining frequent
itemsets. A new algorithm, called the Prepost algorithm, has been presented for
mining frequent itemsets based on the N-list data structure. The Prepost
algorithm is enhanced by implementing a compact PPC-tree with the general tree.
The Prepost algorithm can only find frequent itemsets with the required
pre-order and post-order of each node. In this chapter, we improve the Prepost
algorithm based on the Hadoop platform (HPrepost), using the MapReduce
programming model. The main goals of the proposed method are to mine frequent
itemsets efficiently, with less running time and memory usage. We have
conducted experiments to compare the proposed scheme with other algorithms.
With dense datasets, which have a large average transaction length, HPrepost is
more effective than existing frequent itemset algorithms in terms of execution
time and memory usage for all min-sup values. Generally, our algorithm
outperforms existing algorithms in terms of runtime and memory usage with small
thresholds and large datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Resonant thermalization of periodically driven strongly correlated electrons | We study the dynamics of the Fermi-Hubbard model driven by a time-periodic
modulation of the interaction within nonequilibrium Dynamical Mean-Field
Theory. For moderate interaction, we find clear evidence of thermalization to a
genuine infinite-temperature state with no residual oscillations. Quite
differently, in the strongly correlated regime, we find a quasi-stationary
extremely long-lived state with oscillations synchronized with the drive
(Floquet prethermalization). Remarkably, the nature of this state dramatically
changes upon tuning the drive frequency. In particular, we show the existence
of a critical frequency at which the system rapidly thermalizes despite the
large interaction. We characterize this resonant thermalization and provide an
analytical understanding in terms of a breakdown of the periodic
Schrieffer-Wolff transformation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fusible HSTs and the randomized k-server conjecture | We exhibit an $O((\log k)^6)$-competitive randomized algorithm for the
$k$-server problem on any metric space. It is shown that a potential-based
algorithm for the fractional $k$-server problem on hierarchically separated
trees (HSTs) with competitive ratio $f(k)$ can be used to obtain a randomized
algorithm for any metric space with competitive ratio $f(k)^2 O((\log k)^2)$.
Employing the $O((\log k)^2)$-competitive algorithm for HSTs from our joint
work with Bubeck, Cohen, Lee, and Mądry (2017) yields the claimed bound.
The best previous result independent of the geometry of the underlying metric
space is the $2k-1$ competitive ratio established for the deterministic work
function algorithm by Koutsoupias and Papadimitriou (1995). Even for the
special case when the underlying metric space is the real line, the best known
competitive ratio was $k$. Since deterministic algorithms can do no better than
$k$ on any metric space with at least $k+1$ points, this establishes that for
every metric space on which the problem is non-trivial, randomized algorithms
give an exponential improvement over deterministic algorithms.
| 1 | 0 | 1 | 0 | 0 | 0 |
BPS spectra and 3-manifold invariants | We provide a physical definition of new homological invariants $\mathcal{H}_a
(M_3)$ of 3-manifolds (possibly, with knots) labeled by abelian flat
connections. The physical system in question involves a 6d fivebrane theory on
$M_3$ times a 2-disk, $D^2$, whose Hilbert space of BPS states plays the role
of a basic building block in categorification of various partition functions of
3d $\mathcal{N}=2$ theory $T[M_3]$: $D^2\times S^1$ half-index, $S^2\times S^1$
superconformal index, and $S^2\times S^1$ topologically twisted index. The
first partition function is labeled by a choice of boundary condition and
provides a refinement of Chern-Simons (WRT) invariant. A linear combination of
them in the unrefined limit gives the analytically continued WRT invariant of
$M_3$. The last two can be factorized into the product of half-indices. We show
how this works explicitly for many examples, including Lens spaces, circle
fibrations over Riemann surfaces, and plumbed 3-manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonzero positive solutions of a multi-parameter elliptic system with functional BCs | We prove, by topological methods, new results on the existence of nonzero
positive weak solutions for a class of multi-parameter second order elliptic
systems subject to functional boundary conditions. The setting is fairly
general and covers the case of multi-point, integral and nonlinear boundary
conditions. We also present a non-existence result. We provide some examples to
illustrate the applicability of our theoretical results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Emotion Recognition from Speech based on Relevant Feature and Majority Voting | This paper proposes an approach to detect emotion from human speech employing
majority voting technique over several machine learning techniques. The
contribution of this work is twofold: firstly, it selects those features of
speech which are most promising for classification and secondly, it uses the
majority voting technique that selects the exact class of emotion. Here,
majority voting technique has been applied over Neural Network (NN), Decision
Tree (DT), Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). Input
vector of NN, DT, SVM and KNN consists of various acoustic and prosodic
features like pitch, Mel-frequency cepstral coefficients, etc. Many features
have been extracted from the speech signal and only promising features have
been selected. To consider a feature as promising, Fast Correlation based feature
selection (FCBF) and Fisher score algorithms have been used and only those
features are selected which are highly ranked by both of them. The proposed
approach has been tested on Berlin dataset of emotional speech [3] and
Electromagnetic Articulography (EMA) dataset [4]. The experimental result shows
that majority voting technique attains better accuracy over individual machine
learning techniques. The proposed approach can effectively recognize human
emotion in applications such as social robots, intelligent chat clients, and
company call-centers.
| 1 | 0 | 0 | 1 | 0 | 0 |
The basic principles and the structure of the algorithmic software for computing with hypercomplex numbers | In this article, the basic principles underlying the algorithmic software for
hypercomplex number calculations are considered, along with the structure of
the software and of its functional subsystems. The most important procedures
included in the subsystems are described, and program listings and examples of
their application are given.
| 1 | 0 | 0 | 0 | 0 | 0 |
Drawing cone spherical metrics via Strebel differentials | Cone spherical metrics are conformal metrics with constant curvature one and
finitely many conical singularities on compact Riemann surfaces. By using
Strebel differentials as a bridge, we construct a new class of cone spherical
metrics on compact Riemann surfaces by drawing on the surfaces some class of
connected metric ribbon graphs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Thermoelectric radiation detector based on superconductor/ferromagnet systems | We suggest a new type of an ultrasensitive detector of electromagnetic fields
exploiting the giant thermoelectric effect recently found in
superconductor/ferromagnet hybrid structures. Compared to other types of
superconducting detectors where the detected signal is based on variations of
the detector impedance, the thermoelectric detector has the advantage of
requiring no external driving fields. This becomes especially relevant in
multi-pixel detectors where the number of bias lines and the heating induced by
them becomes an issue. We propose different material combinations to implement
the detector and provide a detailed analysis of its sensitivity and speed. In
particular, we perform, to our knowledge, the first proper noise analysis that
includes the cross correlation between heat and charge current noise and
thereby describes also thermoelectric detectors with a large thermoelectric
figure of merit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Upper estimates of Christoffel function on convex domains | New upper bounds on the pointwise behaviour of Christoffel function on convex
domains in ${\mathbb{R}}^d$ are obtained. These estimates are established by
explicitly constructing the corresponding "needle"-like algebraic polynomials
having small integral norm on the domain, and are stated in terms of few
easy-to-measure geometric characteristics of the location of the point of
interest in the domain. Sharpness of the results is shown and examples of
applications are given.
| 0 | 0 | 1 | 0 | 0 | 0 |
Special Lagrangian and deformed Hermitian Yang-Mills on tropical manifold | From string theory, the notion of deformed Hermitian Yang-Mills connections
has been introduced by Mariño, Minasian, Moore and Strominger. After that,
Leung, Yau and Zaslow proved that it naturally appears as mirror objects of
special Lagrangian submanifolds via Fourier-Mukai transform between dual torus
fibrations. In their paper, some conditions are imposed for simplicity. In this
paper, data to glue their construction on tropical manifolds are proposed and a
generalization of the correspondence is proved without the assumption that the
Lagrangian submanifold is a section of the torus fibration.
| 0 | 0 | 1 | 0 | 0 | 0 |
An introduction to the qualitative and quantitative theory of homogenization | We present an introduction to periodic and stochastic homogenization of
elliptic partial differential equations. The first part is concerned with the
qualitative theory, which we present for equations with periodic and random
coefficients in a unified approach based on Tartar's method of oscillating test
functions. In particular, we present a self-contained and elementary argument
for the construction of the sublinear corrector of stochastic homogenization.
(The argument also applies to elliptic systems and in particular to linear
elasticity). In the second part we briefly discuss the representation of the
homogenization error by means of a two-scale expansion. In the last part we
discuss some results of quantitative stochastic homogenization in a discrete
setting. In particular, we discuss the quantification of ergodicity via
concentration inequalities, and we illustrate that the latter in combination
with elliptic regularity theory leads to a quantification of the growth of the
sublinear corrector and the homogenization error.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Computational Study of Yttria-Stabilized Zirconia: I. Using Crystal Chemistry to Search for the Ground State on a Glassy Energy Landscape | Yttria-stabilized zirconia (YSZ), a ZrO2-Y2O3 solid solution that contains a
large population of oxygen vacancies, is widely used in energy and industrial
applications. Past computational studies correctly predicted the anion
diffusivity but not the cation diffusivity, which is important for material
processing and stability. One of the challenges lies in identifying a plausible
configuration akin to the ground state in a glassy landscape. This is unlikely
to come from random sampling of even a very large sample space, but the odds
are much improved by incorporating packing preferences revealed by a modest
sized configurational library established from empirical potential
calculations. Ab initio calculations corroborated these preferences, which
prove remarkably robust extending to the fifth cation-oxygen shell about 8
{\AA} away. Yet because of frustration there are still rampant violations of
packing preferences and charge neutrality in the ground state, and the approach
toward it bears a close analogy to glass relaxations. Fast relaxations proceed
by fast oxygen movement around cations, while slow relaxations require slow
cation diffusion. The latter is necessarily cooperative because of strong
coupling imposed by the long-range packing preferences.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reactive Power Compensation Game under Prospect-Theoretic Framing Effects | Reactive power compensation is an important challenge in current and future
smart power systems. However, in the context of reactive power compensation,
most existing studies assume that customers can assess their compensation
value, i.e., Var unit, objectively. In this paper, customers are assumed to
make decisions that pertain to reactive power coordination. In consequence, the
way in which those customers evaluate the compensation value resulting from
their individual decisions will impact the overall grid performance. In
particular, a behavioral framework, based on the framing effect of prospect
theory (PT), is developed to study the effect of both objective value and
subjective evaluation in a reactive power compensation game. For example, such
effect allows customers to optimize a subjective value of their utility which
essentially frames the objective utility with respect to a reference point.
This game enables customers to coordinate the use of their electrical devices
to compensate reactive power. For the proposed game, both the objective case
using expected utility theory (EUT) and the PT consideration are solved via a
learning algorithm that converges to a mixed-strategy Nash equilibrium. In
addition, several key properties of this game are derived analytically.
Simulation results show that, under PT, customers are likely to make decisions
that differ from those predicted by classical models. For instance, using an
illustrative two-customer case, we show that a PT customer will increase the
conservative strategy (achieving a high power factor) by 29% compared to a
conventional customer. Similar insights are also observed for a case with three
customers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some remarks on protolocalizations and protoadditive reflections | We investigate additional properties of protolocalizations, introduced and
studied by F. Borceux, M. M. Clementino, M. Gran, and L. Sousa, and of
protoadditive reflections, introduced and studied by T. Everaert and M. Gran.
Among other things we show that there are no non-trivial (protolocalizations
and) protoadditive reflections of the category of groups, and establish a
connection between protolocalizations and Kurosh--Amitsur radicals of groups
with multiple operators whose semisimple classes form subvarieties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Resonances near Thresholds in slightly Twisted Waveguides | We consider the Dirichlet Laplacian in a straight three dimensional waveguide
with non-rotationally invariant cross section, perturbed by a twisting of small
amplitude. It is well known that such a perturbation does not create
eigenvalues below the essential spectrum. However, around the bottom of the
spectrum, we provide a meromorphic extension of the weighted resolvent of the
perturbed operator, and show the existence of exactly one resonance near this
point. Moreover, we obtain the asymptotic behavior of this resonance as the
size of the twisting goes to 0. We also extend the analysis to the upper
eigenvalues of the transversal problem, showing that the number of resonances
is bounded by the multiplicity of the eigenvalue and obtaining the
corresponding asymptotic behavior.
| 0 | 0 | 1 | 0 | 0 | 0 |
Trapping and displacement of liquid collars and plugs in rough-walled tubes | A liquid film wetting the interior of a long circular cylinder redistributes
under the action of surface tension to form annular collars or occlusive plugs.
These equilibrium structures are invariant under axial translation within a
perfectly smooth uniform tube and therefore can be displaced axially by very
weak external forcing. We consider how this degeneracy is disrupted when the
tube wall is rough, and determine threshold conditions under which collars or
plugs resist displacement under forcing. Wall roughness is modelled as a
non-axisymmetric Gaussian random field of prescribed correlation length and
small variance, mimicking some of the geometric irregularities inherent in
applications such as lung airways. The thin film coating this surface is
modelled using lubrication theory. When the roughness is weak, we show how the
locations of equilibrium collars and plugs can be identified in terms of the
azimuthally averaged tube radius; we derive conditions specifying equilibrium
collar locations under an externally imposed shear flow, and plug locations
under an imposed pressure gradient. We use these results to determine the
probability of external forcing being sufficient to displace a collar or plug
from a rough-walled tube, when the tube roughness is defined only in
statistical terms.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological Brain Network Distances | Existing brain network distances are often based on matrix norms. The
element-wise differences in the existing matrix norms may fail to capture
underlying topological differences. Further, matrix norms are sensitive to
outliers. A major disadvantage of element-wise distance calculations is that
they could be severely affected even by a small number of extreme edge weights. Thus
it is necessary to develop network distances that recognize topology. In this
paper, we provide a survey of bottleneck, Gromov-Hausdorff (GH) and
Kolmogorov-Smirnov (KS) distances that are adapted for brain networks, and
compare them against matrix-norm based network distances. Bottleneck and
GH-distances are often used in persistent homology. However, they have rarely
been utilized to measure similarity between brain networks. The KS-distance was
recently introduced to measure the similarity between networks across different
filtration values. The performance analysis was conducted using random
network simulations with ground truths. Using a twin imaging study, which
provides biological ground truth, we demonstrate that the KS distance has the
ability to determine heritability.
| 0 | 0 | 0 | 0 | 1 | 0 |
The challenge of decentralized marketplaces | Online trust systems are playing an important role in to-days world and face
various challenges in building them. Billions of dollars of products and
services are traded through electronic commerce, files are shared among large
peer-to-peer networks and smart contracts can potentially replace paper
contracts with digital contracts. These systems rely on trust mechanisms in
peer-to-peer networks like reputation systems or a trustless public ledger. In
most cases, reputation systems are built to determine the trustworthiness of
users and to provide incentives for users to make a fair contribution to the
peer-to-peer network. The main challenges are how to set up a good trust
system, how to deal with security issues and how to deal with strategic users
trying to cheat the system. The Sybil attack, the most important attack on
reputation systems, is discussed. Finally, matchmaking in two-sided markets and
the strategy-proofness of these markets are discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Radially resolved simulations of collapsing pebble clouds in protoplanetary discs | We study the collapse of pebble clouds with a statistical model to find the
internal structure of comet-sized planetesimals. Pebble-pebble collisions occur
during the collapse, and the outcome of these collisions affects the resulting
structure of the planetesimal. We expand our previous models by allowing the
individual pebble sub-clouds to contract at different rates and by including
the effect of gas drag on the contraction speed and in energy dissipation. Our
results yield comets that are porous pebble-piles with particle sizes varying
with depth. In the surface layers there is a mixture of primordial pebbles and
pebble fragments. The interior, on the other hand, consists only of primordial
pebbles with a narrower size distribution, yielding higher porosity there. Our
results imply that the gas in the protoplanetary disc plays an important role
in determining the radial distribution of pebble sizes and porosity inside
planetesimals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accelerating equilibrium isotope effect calculations: I. Stochastic thermodynamic integration with respect to mass | Accurate path integral Monte Carlo or molecular dynamics calculations of
isotope effects have until recently been expensive because of the necessity to
reduce three types of errors present in such calculations: statistical errors
due to sampling, path integral discretization errors, and thermodynamic
integration errors. While the statistical errors can be reduced with virial
estimators and path integral discretization errors with high-order
factorization of the Boltzmann operator, here we propose a method for
accelerating isotope effect calculations by eliminating the integration error.
We show that the integration error can be removed entirely by changing particle
masses stochastically during the calculation and by using a piecewise linear
umbrella biasing potential. Moreover, we demonstrate numerically that this
approach does not increase the statistical error. The resulting acceleration of
isotope effect calculations is demonstrated on a model harmonic system and on
deuterated species of methane.
| 0 | 1 | 0 | 0 | 0 | 0 |
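The "thermodynamic integration errors" referred to in the abstract above arise from discretizing the coupling-parameter integral in the standard thermodynamic integration identity, written here in generic form for a parameter $\lambda$ interpolating between the two isotope masses (a textbook reminder, not the paper's specific path-integral estimator):

```latex
\ln\frac{Q_B}{Q_A}
  = \int_0^1 \frac{\mathrm{d}\ln Q(\lambda)}{\mathrm{d}\lambda}\,\mathrm{d}\lambda
  = -\beta \int_0^1 \Big\langle \frac{\partial H(\lambda)}{\partial\lambda} \Big\rangle_{\lambda}\,\mathrm{d}\lambda .
```

Replacing the $\lambda$-integral by a finite quadrature rule is what introduces the integration error that the stochastic mass changes and the piecewise linear umbrella bias are designed to remove.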
CoAP over ICN | The Constrained Application Protocol (CoAP) is a specialized Web transfer
protocol for resource-oriented applications intended to run on constrained
devices, typically part of the Internet of Things. In this paper we leverage
Information-Centric Networking (ICN), deployed within the domain of a network
provider that interconnects, in addition to other terminals, CoAP endpoints in
order to provide enhanced CoAP services. We present various CoAP-specific
communication scenarios and discuss how ICN can provide benefits to both
network providers and CoAP applications, even though the latter are not aware
of the existence of ICN. In particular, the use of ICN results in smaller state
management complexity at CoAP endpoints, simpler implementation at CoAP
endpoints, and less communication overhead in the network.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonequilibrium mode-coupling theory for dense active systems of self-propelled particles | The physics of active systems of self-propelled particles, in the regime of a
dense liquid state, is an open puzzle of great current interest, both for
statistical physics and because such systems appear in many biological
contexts. We develop a nonequilibrium mode-coupling theory (MCT) for such
systems, where activity is included as a colored noise with the particles
having a self-propulsion force $f_0$ and persistence time $\tau_p$. Using the
extended MCT and a generalized fluctuation-dissipation theorem, we calculate
the effective temperature $T_{eff}$ of the active fluid. The nonequilibrium
nature of the systems is manifested through a time-dependent $T_{eff}$ that
approaches a constant in the long-time limit, which depends on the activity
parameters $f_0$ and $\tau_p$. We find, phenomenologically, that this long-time
limit is captured by the potential energy of a single, trapped active particle
(STAP). Through a scaling analysis close to the MCT glass transition point, we
show that $\tau_\alpha$, the $\alpha$-relaxation time, behaves as
$\tau_\alpha\sim f_0^{-2\gamma}$, where $\gamma=1.74$ is the MCT exponent for
the passive system. $\tau_\alpha$ may increase or decrease as a function of
$\tau_p$ depending on the type of active force correlations, but the behavior
is always governed by the same value of the exponent $\gamma$. Comparisons with
the numerical solution of the nonequilibrium MCT, as well as with simulation results, give
excellent agreement with the scaling analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symmetry Protected Dynamical Symmetry in the Generalized Hubbard Models | In this letter we present a theorem on the dynamics of the generalized
Hubbard models. This theorem shows that the symmetry of the single particle
Hamiltonian can protect a kind of dynamical symmetry driven by the
interactions. Here, the dynamical symmetry refers to the fact that the time evolution
of certain observables is symmetric between the repulsive and attractive Hubbard
models. We demonstrate our theorem with three different examples in which the
symmetry involves bipartite lattice symmetry, reflection symmetry and
translation symmetry, respectively. Each of these examples relates to one
recent cold atom experiment on the dynamics in the optical lattices where such
a dynamical symmetry is manifested. These experiments include expansion
dynamics of cold atoms, chirality of atomic motion within a synthetic magnetic
field and melting of charge-density-wave order. Therefore, our theorem provides
a unified view of these seemingly disparate phenomena.
| 0 | 1 | 0 | 0 | 0 | 0 |
Machine-learning a virus assembly fitness landscape | Realistic evolutionary fitness landscapes are notoriously difficult to
construct. A recent cutting-edge model of virus assembly consists of a
dodecahedral capsid with $12$ corresponding packaging signals in three affinity
bands. This whole genome/phenotype space consisting of $3^{12}$ genomes has
been explored via computationally expensive stochastic assembly models, giving
a fitness landscape in terms of the assembly efficiency. Using the latest
machine-learning techniques, we establish a neural network and show that the
intensive computation can be short-circuited in a matter of minutes with
astounding accuracy.
| 0 | 0 | 0 | 0 | 1 | 0 |
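For the virus-assembly abstract above, the following is a minimal, hypothetical sketch of regressing a fitness value over the $3^{12}$ ternary genome space with a small neural network. The synthetic fitness function, network size, and training fraction are illustrative assumptions; the paper's landscape comes from stochastic assembly simulations and is not reproduced here.

```python
# Minimal sketch: learn a fitness landscape over 3**12 ternary "genomes".
# The fitness function below is synthetic, standing in for the assembly
# efficiency produced by stochastic assembly simulations.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

genomes = np.array(list(itertools.product([0, 1, 2], repeat=12)))  # 531441 x 12

def synthetic_fitness(g):
    # stand-in for assembly efficiency: rewards a balanced mix of affinity bands
    return np.exp(-((g.sum(axis=1) - 12) ** 2) / 20.0)

y = synthetic_fitness(genomes)
X_train, X_test, y_train, y_test = train_test_split(
    genomes, y, train_size=0.05, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```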
A multilevel block building algorithm for fast modeling generalized separable systems | Data-driven modeling plays an increasingly important role in different areas
of engineering. For most existing methods, such as genetic programming (GP),
the convergence speed might be too slow for large scale problems with a large
number of variables. It has become the bottleneck of GP for practical
applications. Fortunately, in many applications, the target models are
separable in some sense. In this paper, we analyze different types of
separability of some real-world engineering equations and establish a
mathematical model of generalized separable system (GS system). In order to get
the structure of the GS system, a multilevel block building (MBB) algorithm is
proposed, in which the target model is decomposed into a number of blocks,
further into minimal blocks and factors. Compared to conventional GP, MBB
greatly reduces the search space. This makes MBB capable of
modeling a complex system. The minimal blocks and factors are optimized and
assembled with a global optimization search engine, low dimensional simplex
evolution (LDSE). An extensive study between the proposed MBB and a
state-of-the-art data-driven fitting tool, Eureqa, has been presented with
several man-made problems, as well as some real-world problems. Test results
indicate that the proposed method is more effective and efficient under all the
investigated cases.
| 0 | 0 | 1 | 0 | 0 | 0 |
Competition and Selection Among Conventions | In many domains, a latent competition among different conventions determines
which one will come to dominate. One sees such effects in the success of
community jargon, of competing frames in political rhetoric, or of terminology
in technical contexts. These effects have become widespread in the online
domain, where the data offers the potential to study competition among
conventions at a fine-grained level.
In analyzing the dynamics of conventions over time, however, even with
detailed on-line data, one encounters two significant challenges. First, as
conventions evolve, the underlying substance of their meaning tends to change
as well; and such substantive changes confound investigations of social
effects. Second, the selection of a convention takes place through the complex
interactions of individuals within a community, and contention between the
users of competing conventions plays a key role in the convention's evolution.
Any analysis must take place in the presence of these two issues.
In this work we study a setting in which we can cleanly track the competition
among conventions. Our analysis is based on the spread of low-level authoring
conventions in the eprint arXiv over 24 years: by tracking the spread of macros
and other author-defined conventions, we are able to study conventions that
vary even as the underlying meaning remains constant. We find that the
interaction among co-authors over time plays a crucial role in their selection;
the distinction between more and less experienced members of the
community, and the distinction between conventions with visible versus
invisible effects, are both central to the underlying processes. Through our
analysis we make predictions at the population level about the ultimate success
of different synonymous conventions over time--and at the individual level
about the outcome of "fights" between people over convention choices.
| 1 | 1 | 0 | 0 | 0 | 0 |
Tradeoff Between Delay and High SNR Capacity in Quantized MIMO Systems | Analog-to-digital converters (ADCs) are a major contributor to the power
consumption of multiple-input multiple-output (MIMO) communication systems with
large number of antennas. Use of low resolution ADCs has been proposed as a
means to decrease power consumption in MIMO receivers. However, reducing the
ADC resolution leads to performance loss in terms of achievable transmission
rates. In order to mitigate the rate-loss, the receiver can perform analog
processing of the received signals before quantization. Prior works consider
one-shot analog processing where at each channel-use, analog linear
combinations of the received signals are fed to a set of one-bit threshold
ADCs. In this paper, a receiver architecture is proposed which uses a sequence
of delay elements to allow for blockwise linear combining of the received
analog signals. In the high signal to noise ratio regime, it is shown that the
proposed architecture achieves the maximum achievable transmission rate given a
fixed number of one-bit ADCs. Furthermore, a tradeoff between transmission rate
and the number of delay elements is identified which quantifies the increase in
maximum achievable rate as the number of delay elements is increased.
| 1 | 0 | 0 | 0 | 0 | 0 |
Formal Synthesis of Control Strategies for Positive Monotone Systems | We design controllers from formal specifications for positive discrete-time
monotone systems that are subject to bounded disturbances. Such systems are
widely used to model the dynamics of transportation and biological networks.
The specifications are described using signal temporal logic (STL), which can
express a broad range of temporal properties. We formulate the problem as a
mixed-integer linear program (MILP) and show that under the assumptions made in
this paper, which are not restrictive for traffic applications, the existence
of open-loop control policies is sufficient and almost necessary to ensure the
satisfaction of STL formulas. We establish a relation between satisfaction of
STL formulas in infinite time and set-invariance theories and provide an
efficient method to compute robust control invariant sets in high dimensions.
We also develop a robust model predictive framework to plan controls optimally
while ensuring the satisfaction of the specification. Illustrative examples and
a traffic management case study are included.
| 1 | 0 | 1 | 0 | 0 | 0 |
Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks | Progress in deep learning is slowed by the days or weeks it takes to train
large models. The natural solution of using more hardware is limited by
diminishing returns, and leads to inefficient use of additional resources. In
this paper, we present a large batch, stochastic optimization algorithm that is
both faster than widely used algorithms for fixed amounts of computation, and
also scales up substantially better as more computational resources become
available. Our algorithm implicitly computes the inverse Hessian of each
mini-batch to produce descent directions; we do so without either an explicit
approximation to the Hessian or Hessian-vector products. We demonstrate the
effectiveness of our algorithm by successfully training large ImageNet models
(Inception-V3, Resnet-50, Resnet-101 and Inception-Resnet-V2) with mini-batch
sizes of up to 32000 with no loss in validation error relative to current
baselines, and no increase in the total number of steps. At smaller mini-batch
sizes, our optimizer improves the validation error in these models by 0.8-0.9%.
Alternatively, we can trade off this accuracy to reduce the number of training
steps needed by roughly 10-30%. Our work is practical and easily usable by
others -- only one hyperparameter (learning rate) needs tuning, and
furthermore, the algorithm is as computationally cheap as the commonly used
Adam optimizer.
| 1 | 0 | 0 | 1 | 0 | 0 |
Signal propagation in sensing and reciprocating cellular systems with spatial and structural heterogeneity | Sensing and reciprocating cellular systems (SARs) are important for the
operation of many biological systems. Production in interferon (IFN) SARs is
achieved through activation of the Jak-Stat pathway, and downstream
upregulation of IFN regulatory factor (IRF)-3 and IFN transcription, but the
role that high and low affinity IFNs play in this process remains unclear. We
present a comparison between a minimal spatio-temporal partial differential
equation (PDE) model and a novel spatio-structural-temporal (SST) model for the
consideration of receptor, binding, and metabolic aspects of SAR behaviour.
Using the SST framework, we simulate single- and multi-cluster paradigms of IFN
communication. Simulations reveal a cyclic process between the binding of IFN
to the receptor, and the consequent increase in metabolism, decreasing the
propensity for binding due to the internal feed-back mechanism. One observes
the effect of heterogeneity between cellular clusters, allowing them to
individualise and increase local production, and within clusters, where we
observe `sub-population quiescence'; a process whereby intra-cluster
subpopulations reduce their binding and metabolism such that other such
subpopulations may augment their production. Finally, we observe the ability
for low affinity IFN to communicate a long range signal, where high affinity
cannot, and the breakdown of this relationship through the introduction of cell
motility. Biological systems may utilise cell motility where environments are
unrestrictive, and may use a fixed system, with low affinity communication, where
a localised response is desirable.
| 0 | 0 | 0 | 0 | 1 | 0 |
A New Family of Asymmetric Distributions for Modeling Light-Tailed and Right-Skewed Data | A new three-parameter cumulative distribution function defined on
$(\alpha,\infty)$, for some $\alpha\geq0$, with asymmetric probability density
function and showing exponential decays at both of its tails, is introduced. The
new distribution is near to familiar distributions like the gamma and
log-normal distributions, but this new one has its own distinct features and thus
generalizes neither of these distributions. Hence, the new distribution
constitutes a new alternative to fit values showing light-tailed behaviors.
Further, this new distribution shows great flexibility to fit the bulk of data
by tuning some parameters. We refer to this new distribution as the generalized
exponential log-squared distribution (GEL-S). Statistical properties of the
GEL-S distribution are discussed. The maximum likelihood method is proposed for
estimating the model parameters, but incorporating adaptations in computational
procedures due to difficulties in the manipulation of the parameters. The
performance of the new distribution is studied using simulations. Applications
to real data sets from different domains are shown.
| 0 | 0 | 1 | 1 | 0 | 0 |
Secret-Key-Aided Scheme for Securing Untrusted DF Relaying Networks | This paper proposes a new scheme to secure the transmissions in an untrusted
decode-and-forward (DF) relaying network. A legitimate source node, Alice,
sends her data to a legitimate destination node, Bob, with the aid of an
untrusted DF relay node, Charlie. To secure the transmissions from Charlie
during relaying time slots, each data codeword is secured using a secret-key
codeword that has been previously shared between Alice and Bob during the
perfectly secured time slots (i.e., when the channel secrecy rate is positive).
The secret-key bits exchanged between Alice and Bob are stored in a
finite-length buffer and are used to secure data transmission whenever needed.
We model the secret-key buffer as a queueing system and analyze its Markov
chain. Our numerical results show the gains of our proposed scheme relative to
benchmarks. Moreover, the proposed scheme achieves an upper bound on the secure
throughput.
| 1 | 0 | 0 | 0 | 0 | 0 |
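As a rough illustration of the secret-key buffer idea in the abstract above, here is a toy discrete-time simulation in which key bits are banked during secure slots and consumed to secure data during relaying slots. All parameters (buffer size, slot probability, key/data rates) are hypothetical and not taken from the paper.

```python
# Toy simulation of a finite secret-key buffer (hypothetical parameters).
# Secure slots add key bits; relaying slots consume them to secure data.
import random

random.seed(0)
BUFFER_SIZE = 50        # maximum stored key bits (assumption)
P_SECURE = 0.4          # probability a slot has positive secrecy rate (assumption)
KEY_PER_SLOT = 2        # key bits banked in a secure slot (assumption)
DATA_PER_SLOT = 1       # key bits needed to secure one data codeword (assumption)

buffer, secured = 0, 0
SLOTS = 100_000
for _ in range(SLOTS):
    if random.random() < P_SECURE:
        buffer = min(BUFFER_SIZE, buffer + KEY_PER_SLOT)
    elif buffer >= DATA_PER_SLOT:
        buffer -= DATA_PER_SLOT
        secured += 1

print("fraction of relaying slots secured:", secured / SLOTS)
```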
On deep speaker embeddings for text-independent speaker recognition | We investigate deep neural network performance in the textindependent speaker
recognition task. We demonstrate that using angular softmax activation at the
last classification layer of a classification neural network instead of a
simple softmax activation allows training a more generalized discriminative
speaker embedding extractor. Cosine similarity is an effective metric for
speaker verification in this embedding space. We also address the problem of
choosing an architecture for the extractor. We found that deep networks with
residual frame level connections outperform wide but relatively shallow
architectures. This paper also proposes several improvements for previous
DNN-based extractor systems to increase the speaker recognition accuracy. We
show that the discriminatively trained similarity metric learning approach
outperforms the standard LDA-PLDA method as an embedding backend. The results
obtained on Speakers in the Wild and NIST SRE 2016 evaluation sets demonstrate
robustness of the proposed systems when dealing with close to real-life
conditions.
| 0 | 0 | 0 | 1 | 0 | 0 |
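A small sketch of the scoring side described in the abstract above: cosine similarity between embeddings followed by an equal error rate (EER) computation. The embeddings below are random stand-ins; the deep residual extractor with angular softmax is not reproduced.

```python
# Cosine scoring and EER on top of speaker embeddings.
# Embeddings here are synthetic stand-ins for outputs of a trained extractor.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, d = 2000, 256
# "target" pairs share a latent speaker vector; "impostor" pairs do not
spk = rng.normal(size=(n_pairs, d))
target = [(spk[i] + 0.5 * rng.normal(size=d), spk[i] + 0.5 * rng.normal(size=d))
          for i in range(n_pairs)]
impostor = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(n_pairs)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine(a, b) for a, b in target] +
                  [cosine(a, b) for a, b in impostor])
labels = np.array([1] * n_pairs + [0] * n_pairs)

# EER: threshold where false-accept and false-reject rates cross
thresholds = np.sort(scores)
far = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
frr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
i = int(np.argmin(np.abs(far - frr)))
print("EER ~", (far[i] + frr[i]) / 2)
```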
Accurate Computation of Marginal Data Densities Using Variational Bayes | Bayesian model selection and model averaging rely on estimates of marginal
data densities (MDDs) also known as marginal likelihoods. Estimation of MDDs is
often nontrivial and requires elaborate numerical integration methods. We
propose using the variational Bayes posterior density as a weighting density
within the class of reciprocal importance sampling MDD estimators. This
proposal is computationally convenient, is based on variational Bayes posterior
densities that are available for many models, only requires simulated draws
from the posterior distribution, and provides accurate estimates with a
moderate number of posterior draws. We show that this estimator is
theoretically well-justified, has finite variance, provides a minimum variance
candidate for the class of reciprocal importance sampling MDD estimators, and
that its reciprocal is consistent, asymptotically normally distributed and
unbiased. We also investigate the performance of the variational Bayes
approximate density as a weighting density within the class of bridge sampling
estimators. Using several examples, we show that our proposed estimators are at
least as good as the best existing estimators and outperform many MDD
estimators in terms of bias and numerical standard errors.
| 0 | 0 | 0 | 1 | 0 | 0 |
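A minimal sketch of a reciprocal importance sampling MDD estimator with a posterior-shaped weighting density, in the spirit of the abstract above. The toy normal-mean model and the use of the exact conjugate posterior in place of a variational Bayes approximation are illustrative assumptions.

```python
# Reciprocal importance sampling estimate of the marginal data density (MDD)
# using an (approximate) posterior as the weighting density, on a toy
# normal-normal model where the exact MDD is available in closed form.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(1)
sigma, tau = 1.0, 2.0                    # known data and prior std devs
y = rng.normal(0.7, sigma, size=50)      # observed data
n, ybar = y.size, y.mean()

# exact posterior for the mean (playing the role of the VB density here)
post_var = 1.0 / (n / sigma**2 + 1 / tau**2)
post_mean = post_var * (n * ybar / sigma**2)
draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)   # posterior draws

def log_lik(theta):
    return stats.norm.logpdf(y[:, None], theta, sigma).sum(axis=0)

log_prior = stats.norm.logpdf(draws, 0.0, tau)
log_weight = stats.norm.logpdf(draws, post_mean, np.sqrt(post_var))

# 1/p(y) ~= average over posterior draws of  q(theta) / [ p(y|theta) p(theta) ]
log_inv_mdd = logsumexp(log_weight - log_lik(draws) - log_prior) - np.log(draws.size)
print("RIS log MDD:", -log_inv_mdd)

# exact log marginal likelihood for comparison
exact = stats.multivariate_normal.logpdf(
    y, mean=np.zeros(n), cov=sigma**2 * np.eye(n) + tau**2 * np.ones((n, n)))
print("exact log MDD:", exact)
```

Because the weighting density here coincides with the exact posterior, the ratio inside the average is constant and the estimate matches the closed-form value up to floating point; with a genuine VB approximation the estimator would retain a small, finite variance.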
Random matrices and the New York City subway system | We analyze subway arrival times in the New York City subway system. We find
regimes where the gaps between trains exhibit both (unitarily invariant) random
matrix statistics and Poisson statistics. The departure from random matrix
statistics is captured by the value of the Coulomb potential along the subway
route. This departure becomes more pronounced as trains make more stops.
| 0 | 1 | 0 | 0 | 0 | 0 |
Weakening of the diamagnetic shielding in FeSe$_{1-x}$S$_x$ at high pressures | The superconducting transition of FeSe$_{1-x}$S$_x$ with three distinct
sulphur concentrations $x$ was studied under hydrostatic pressure up to
$\sim$70 kbar via bulk AC susceptibility. The pressure dependence of the
superconducting transition temperature ($T_c$) features a small dome-shaped
variation at low pressures for $x=0.04$ and $x=0.12$, followed by a more
substantial $T_c$ enhancement to a value of around 30 K at moderate pressures.
In $x=0.21$, a similar overall pressure dependence of $T_c$ is observed, except
that the small dome at low pressures is flattened. For all three
concentrations, a significant weakening of the diamagnetic shielding is
observed beyond the pressure around which the maximum $T_c$ of 30 K is reached
near the verge of the pressure-induced magnetic phase. This observation points to a
strong competition between the magnetic and high-$T_c$ superconducting states
at high pressure in this system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Incremental Skip-gram Model with Negative Sampling | This paper explores an incremental training strategy for the skip-gram model
with negative sampling (SGNS) from both empirical and theoretical perspectives.
Existing methods of neural word embeddings, including SGNS, are multi-pass
algorithms and thus cannot perform incremental model update. To address this
problem, we present a simple incremental extension of SGNS and provide a
thorough theoretical analysis to demonstrate its validity. Empirical
experiments demonstrated the correctness of the theoretical analysis as well as
the practical usefulness of the incremental algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
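A minimal single-pass sketch of incrementally updating SGNS word vectors as sentences stream in, growing the vocabulary and the noise distribution on the fly. The hyperparameters, the unsmoothed unigram noise distribution, and the absence of subsampling are simplifications and not the paper's algorithm.

```python
# Incremental (single-pass) SGNS: vectors and the negative-sampling noise
# distribution are grown and updated as sentences arrive one at a time.
import numpy as np

rng = np.random.default_rng(0)
dim, lr, window, n_neg = 50, 0.05, 2, 5
vec_in, vec_out, counts = {}, {}, {}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ensure(word):
    if word not in vec_in:
        vec_in[word] = (rng.random(dim) - 0.5) / dim
        vec_out[word] = np.zeros(dim)
        counts[word] = 0

def update_pair(center, context, label):
    v, u = vec_in[center], vec_out[context]
    g = (sigmoid(v @ u) - label) * lr      # gradient of the SGNS logistic loss
    vec_out[context] -= g * v
    vec_in[center] -= g * u

def train_sentence(sentence):
    for w in sentence:
        ensure(w)
        counts[w] += 1
    vocab = list(counts)
    probs = np.array([counts[w] for w in vocab], dtype=float)
    probs /= probs.sum()                   # simplistic (unsmoothed) noise distribution
    for i, center in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if i == j:
                continue
            update_pair(center, sentence[j], 1.0)
            for neg in rng.choice(vocab, size=n_neg, p=probs):
                update_pair(center, neg, 0.0)

stream = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
for sent in stream:                        # sentences arrive one at a time
    train_sentence(sent)
print(sorted(vec_in))
```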
Continuous-Time Accelerated Methods via a Hybrid Control Lens | Treating optimization methods as dynamical systems can be traced back
centuries ago in order to comprehend the notions and behaviors of optimization
methods. Lately, this mind set has become the driving force to design new
optimization methods. Inspired by the recent dynamical system viewpoint of
Nesterov's fast method, we propose two classes of fast methods, formulated as
hybrid control systems, to obtain pre-specified exponential convergence rate.
Alternative to the existing fast methods which are parametric-in-time second
order differential equations, we dynamically synthesize feedback controls in a
state-dependent manner. Namely, in the first class the damping term is viewed
as the control input, while in the second class the amplitude with which the
gradient of the objective function impacts the dynamics serves as the
controller. The objective function is required to satisfy a certain sharpness
criterion, the so-called Polyak--{\L}ojasiewicz inequality. Moreover, we
establish that both hybrid structures possess Zeno-free solution trajectories.
We finally provide a mechanism to determine the discretization step size to
attain an exponential convergence rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast-Slow Recurrent Neural Networks | Processing sequential data of variable length is a major challenge in a wide
range of applications, such as speech recognition, language modeling,
generative image modeling and machine translation. Here, we address this
challenge by proposing a novel recurrent neural network (RNN) architecture, the
Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both
multiscale RNNs and deep transition RNNs as it processes sequential data on
different timescales and learns complex transition functions from one time step
to the next. We evaluate the FS-RNN on two character level language modeling
data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve
state-of-the-art results to $1.19$ and $1.25$ bits per character (BPC), respectively. In
addition, an ensemble of two FS-RNNs achieves $1.20$ BPC on Hutter Prize
Wikipedia outperforming the best known compression algorithm with respect to
the BPC measure. We also present an empirical investigation of the learning and
network dynamics of the FS-RNN, which explains the improved performance
compared to other RNN architectures. Our approach is general as any kind of RNN
cell is a possible building block for the FS-RNN architecture, and thus can be
flexibly applied to different tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
On OR Many-Access Channels | OR multi-access channel is a simple model where the channel output is the
Boolean OR among the Boolean channel inputs. We revisit this model, showing
that employing Bloom filters (a randomized data structure) as channel inputs
achieves its capacity region with joint decoding and the symmetric sum rate of
$\ln 2$ bits per channel use without joint decoding. We then proceed to the
"many-access" regime where the number of potential users grows without bound,
treating both activity recognition and message transmission problems,
establishing scaling laws which are optimal within a constant factor, based on
Bloom filter channel inputs.
| 1 | 0 | 1 | 0 | 0 | 0 |
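A small sketch of the Bloom-filter signalling idea for the activity-recognition problem mentioned in the abstract above: each active user sets k hashed bit positions, the channel output is the bitwise OR, and the receiver declares a user active if all of its positions are set (errors are one-sided). Filter length, number of hashes, and user counts are illustrative assumptions.

```python
# Bloom-filter inputs over an OR channel for activity detection.
import hashlib
import random

n, k = 2048, 6                        # filter length and hashes per user (assumptions)
NUM_USERS, NUM_ACTIVE = 10_000, 50

def positions(user_id):
    return [int(hashlib.sha256(f"{user_id}-{i}".encode()).hexdigest(), 16) % n
            for i in range(k)]

random.seed(0)
active = set(random.sample(range(NUM_USERS), NUM_ACTIVE))

channel_output = [0] * n
for u in active:                      # OR of all active users' inputs
    for p in positions(u):
        channel_output[p] = 1

declared = {u for u in range(NUM_USERS)
            if all(channel_output[p] for p in positions(u))}
missed = active - declared            # always empty: no false negatives
false_alarms = declared - active      # Bloom filters give one-sided errors
print(len(missed), "missed,", len(false_alarms), "false alarms")
```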
TwiInsight: Discovering Topics and Sentiments from Social Media Datasets | Social media platforms contain a great wealth of information which provides
opportunities for us to explore hidden patterns or unknown correlations, and
understand people's satisfaction with what they are discussing. As one
showcase, in this paper, we present a system, TwiInsight, which explores
insights from Twitter data. Different from other Twitter analysis systems,
TwiInsight automatically extracts the popular topics under different categories
(e.g., healthcare, food, technology, sports and transport) discussed in Twitter
via topic modeling and also identifies the correlated topics across different
categories. Additionally, it also discovers people's opinions on the tweets
and topics via the sentiment analysis. The system also employs an intuitive and
informative visualization to show the uncovered insight. Furthermore, we also
develop and compare the six most popular algorithms - three for sentiment analysis
and three for topic modeling.
| 1 | 0 | 0 | 0 | 0 | 0 |
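For the topic-modelling component described above, here is a minimal sketch using scikit-learn's LDA on a handful of invented "tweets"; the category assignment, sentiment analysis, and cross-category correlation steps of the actual system are omitted.

```python
# Toy topic extraction from short texts with LDA; input tweets are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "new vaccine trial shows promising results for patients",
    "hospital staff shortage affects patient care",
    "the team scored twice in the last five minutes",
    "coach praises players after the championship win",
    "delays on the metro line due to signal failure",
    "new bus routes announced to cut commute times",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {t}:", ", ".join(top))
```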
The Promise and Peril of Human Evaluation for Model Interpretability | Transparency, user trust, and human comprehension are popular ethical
motivations for interpretable machine learning. In support of these goals,
researchers evaluate model explanation performance using humans and real world
applications. This alone presents a challenge in many areas of artificial
intelligence. In this position paper, we propose a distinction between
descriptive and persuasive explanations. We discuss reasoning suggesting that
functional interpretability may be correlated with cognitive function and user
preferences. If this is indeed the case, evaluation and optimization using
functional metrics could perpetuate implicit cognitive bias in explanations
that threaten transparency. Finally, we propose two potential research
directions to disambiguate cognitive function and explanation models, retaining
control over the tradeoff between accuracy and interpretability.
| 1 | 0 | 0 | 1 | 0 | 0 |
Guided Unfoldings for Finding Loops in Standard Term Rewriting | In this paper, we reconsider the unfolding-based technique that we have
introduced previously for detecting loops in standard term rewriting. We
improve it by guiding the unfolding process, using distinguished positions in
the rewrite rules. This results in a depth-first computation of the unfoldings,
whereas the original technique was breadth-first. We have implemented this new
approach in our tool NTI and compared it to the previous one on a bunch of
rewrite systems. The results we get are promising (better times, more
successful proofs).
| 1 | 0 | 0 | 0 | 0 | 0 |
Nutrients and biomass dynamics in photo-sequencing batch reactors treating wastewater with high nutrients loadings | The present study investigates different strategies for the treatment of a
mixture of diluted digestate from an anaerobic digester and secondary effluent
from a high rate algal pond. To this aim, the performance of two
photo-sequencing batch reactors (PSBRs) operated at high nutrients loading
rates and different solids retention times (SRTs) were compared with a
semi-continuous photobioreactor (SC). Performances were evaluated in terms of
wastewater treatment, biomass composition and biopolymers accumulation during
30 days of operation. PSBRs were operated at a hydraulic retention time (HRT)
of 2 days and SRTs of 10 and 5 days (PSBR2-10 and PSBR2-5, respectively),
whereas the semi-continuous reactor was operated at a coupled HRT/SRT of 10
days (SC10-10). Results showed that PSBR2-5 achieved the highest removal rates
in terms of TN (6.7 mg L-1 d-1), TP (0.31 mg L-1 d-1), TOC (29.32 mg L-1 d-1)
and TIC (3.91 mg L-1 d-1). These results were in general 3-6 times higher than
the removal rates obtained in the SC10-10 (TN 29.74 mg L-1 d-1, TP 0.96 mg L-1
d-1, TOC 29.32 mg L-1 d-1 and TIC 3.91 mg L-1 d-1). Furthermore, both PSBRs
were able to produce biomass up to 0.09 g L-1 d-1, more than twofold the
biomass produced by the semi-continuous reactor (0.04 g L-1 d-1), and achieved
a biomass settleability of 86-92%. This study also demonstrated that the
microbial composition could be controlled by the nutrients loads, since the
three reactors were dominated by different species depending on the nutritional
conditions. Concerning biopolymer accumulation, the carbohydrate concentration
achieved similar values in the three reactors (11%), whereas <0.5% of
polyhydroxybutyrates (PHB) was produced. These low values in biopolymer
production could be related to the lack of microorganisms, such as cyanobacteria, that
are able to accumulate carbohydrates/PHB.
| 0 | 0 | 0 | 0 | 1 | 0 |
Random Fourier Features for Kernel Ridge Regression: Approximation Bounds and Statistical Guarantees | Random Fourier features is one of the most popular techniques for scaling up
kernel methods, such as kernel ridge regression. However, despite impressive
empirical results, the statistical properties of random Fourier features are
still not well understood. In this paper we take steps toward filling this gap.
Specifically, we approach random Fourier features from a spectral matrix
approximation point of view, give tight bounds on the number of Fourier
features required to achieve a spectral approximation, and show how spectral
matrix approximation bounds imply statistical guarantees for kernel ridge
regression.
Qualitatively, our results are twofold: on the one hand, we show that random
Fourier feature approximation can provably speed up kernel ridge regression
under reasonable assumptions. At the same time, we show that the method is
suboptimal, and sampling from a modified distribution in Fourier space, given
by the leverage function of the kernel, yields provably better performance. We
study this optimal sampling distribution for the Gaussian kernel, achieving a
nearly complete characterization for the case of low-dimensional bounded
datasets. Based on this characterization, we propose an efficient sampling
scheme with guarantees superior to random Fourier features in this regime.
| 1 | 0 | 0 | 1 | 0 | 0 |
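A minimal sketch of classical random Fourier features for a Gaussian kernel followed by ridge regression in the feature space. This uses the standard frequency sampling, not the leverage-based sampling the abstract proposes; data and hyperparameters are synthetic.

```python
# Random Fourier features approximating a Gaussian kernel, then ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, D, gamma, lam = 500, 5, 300, 0.5, 1e-2   # kernel k(x,y) = exp(-gamma ||x-y||^2)

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# z(x) = sqrt(2/D) * cos(W x + b),  rows of W ~ N(0, 2*gamma*I),  b ~ Unif[0, 2*pi)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# ridge regression in feature space: (Z^T Z + lam I) w = Z^T y
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

# compare approximate and exact kernel entries on a few pairs
K_exact = np.exp(-gamma * np.sum((X[:5, None] - X[None, :5]) ** 2, axis=-1))
K_rff = Z[:5] @ Z[:5].T
print("max kernel approximation error:", np.abs(K_exact - K_rff).max())
print("train MSE:", np.mean((Z @ w - y) ** 2))
```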
Charge reconstruction study of the DAMPE Silicon-Tungsten Tracker with ion beams | The DArk Matter Particle Explorer (DAMPE) is one of the four satellites
within Strategic Pioneer Research Program in Space Science of the Chinese
Academy of Sciences (CAS). DAMPE can detect electrons and photons in a wide energy
range (5 GeV to 10 TeV) and ions up to iron (100 GeV to 100 TeV).
Silicon-Tungsten Tracker (STK) is one of the four subdetectors in DAMPE,
providing photon-electron conversion, track reconstruction and charge
identification for ions. An ion beam test was carried out at CERN with 60 GeV/u
lead primary beams. The charge reconstruction and charge resolution of the STK
detectors were investigated.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scalable Cryogenic Read-out Circuit for a Superconducting Nanowire Single-Photon Detector System | The superconducting nanowire single photon detector (SNSPD) is a leading
technology for quantum information science applications using photons, and it
is finding increasing use in photon-starved classical imaging applications.
Critical detector characteristics, such as timing resolution (jitter), reset
time and maximum count rate, are heavily influenced by the readout electronics
that sense and amplify the photon detection signal. We describe a readout
circuit for SNSPDs using commercial off-the-shelf amplifiers operating at
cryogenic temperatures. Our design demonstrates a 35 ps timing resolution and a
maximum count rate of over 2x10^7 counts per second while maintaining <3 mW
power consumption per channel, making it suitable for a multichannel readout.
| 0 | 1 | 0 | 0 | 0 | 0 |
String principal bundles and Courant algebroids | Just like Atiyah Lie algebroids encode the infinitesimal symmetries of
principal bundles, exact Courant algebroids are believed to encode the
infinitesimal symmetries of $S^1$-gerbes. At the same time, transitive Courant
algebroids may be viewed as the higher analogue of Atiyah Lie algebroids, and
the non-commutative analogue of exact Courant algebroids. In this article, we
explore what the "principal bundles" behind transitive Courant algebroids are,
and they turn out to be principal 2-bundles of string groups. First, we
construct the stack of principal 2-bundles of string groups with connection
data. We prove a lifting theorem for the stack of string principal bundles with
connections and show the multiplicity of the lifts once they exist. This is a
differential geometrical refinement of what is known for string structures by
Redden, Waldorf and Stolz-Teichner. We also extend the result of Bressler and
Chen-Stiénon-Xu on extension obstruction involving transitive Courant
algebroids to the case of transitive Courant algebroids with connections, as a
lifting theorem with the description of multiplicity once liftings exist. At
the end, we build a morphism between these two stacks. The morphism turns out
to be neither injective nor surjective in general, which shows that the process
of associating the "higher Atiyah algebroid" loses some information and at the
same time, only some special transitive Courant algebroids come from string
bundles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Electroforming-Free TaOx Memristors using Focused Ion Beam Irradiations | We demonstrate creation of electroforming-free TaOx memristive devices using
focused ion beam irradiations to locally define conductive filaments in TaOx
films. Electrical characterization shows that these irradiations directly
create fully functional memristors without the need for electroforming. Ion
beam forming of conductive filaments combined with state-of-the-art
nano-patterning presents a CMOS compatible approach to wafer level fabrication
of fully formed and operational memristors.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal hypothesis testing for stochastic block models with growing degrees | The present paper considers testing an Erdos--Renyi random graph model
against a stochastic block model in the asymptotic regime where the average
degree of the graph grows with the graph size n. Our primary interest lies in
those cases in which the signal-to-noise ratio is at a constant level. Focusing
on symmetric two block alternatives, we first derive joint central limit
theorems for linear spectral statistics of power functions for properly
rescaled graph adjacency matrices under both the null and local alternative
hypotheses. The powers in the linear spectral statistics are allowed to grow to
infinity together with the graph size. In addition, we show that linear
spectral statistics of Chebyshev polynomials are closely connected to signed
cycles of growing lengths that determine the asymptotic likelihood ratio test
for the hypothesis testing problem of interest. This enables us to construct a
sequence of test statistics that achieves the exact optimal asymptotic power
within $O(n^3 \log n)$ time complexity in the contiguous regime when $n^2
p_{n,av}^3 \to\infty$ where $p_{n,av}$ is the average connection probability.
We further propose a class of adaptive tests that are computationally tractable
and completely data-driven. They achieve nontrivial powers in the contiguous
regime and consistency in the singular regime whenever $n p_{n,av} \to\infty$.
These tests remain powerful when the alternative becomes a more general
stochastic block model with more than two blocks.
| 1 | 0 | 1 | 1 | 0 | 0 |
Generalized End-to-End Loss for Speaker Verification | In this paper, we propose a new loss function called generalized end-to-end
(GE2E) loss, which makes the training of speaker verification models more
efficient than our previous tuple-based end-to-end (TE2E) loss function. Unlike
TE2E, the GE2E loss function updates the network in a way that emphasizes
examples that are difficult to verify at each step of the training process.
Additionally, the GE2E loss does not require an initial stage of example
selection. With these properties, our model with the new loss function
decreases speaker verification EER by more than 10%, while reducing the
training time by 60% at the same time. We also introduce the MultiReader
technique, which allows us to do domain adaptation - training a more accurate
model that supports multiple keywords (i.e. "OK Google" and "Hey Google") as
well as multiple dialects.
| 1 | 0 | 0 | 1 | 0 | 0 |
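A NumPy sketch of the GE2E softmax loss on a batch of embeddings shaped (speakers, utterances, dimension), with each utterance excluded from its own centroid. The fixed scale/offset values, the omission of the contrast variant, and the absence of a training loop are simplifications.

```python
# GE2E softmax loss on a batch of L2-normalised speaker embeddings.
import numpy as np

rng = np.random.default_rng(0)
N, M, dim = 4, 5, 64                                  # speakers, utterances, embedding dim
emb = rng.normal(size=(N, M, dim))
emb /= np.linalg.norm(emb, axis=-1, keepdims=True)
w, b = 10.0, -5.0                                     # learnable scale/offset in the paper

def ge2e_loss(emb, w, b):
    N, M, _ = emb.shape
    centroids = emb.mean(axis=1)                      # (N, dim)
    loss = 0.0
    for j in range(N):
        for i in range(M):
            e = emb[j, i]
            sims = np.empty(N)
            for k in range(N):
                if k == j:
                    # exclude the utterance itself from its own centroid
                    c = (emb[j].sum(axis=0) - e) / (M - 1)
                else:
                    c = centroids[k]
                c = c / np.linalg.norm(c)
                sims[k] = w * (e @ c) + b
            loss += -sims[j] + np.log(np.exp(sims).sum())
    return loss / (N * M)

print("GE2E softmax loss:", ge2e_loss(emb, w, b))
```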
Penalized pairwise pseudo likelihood for variable selection with nonignorable missing data | The regularization approach for variable selection was well developed for a
completely observed data set in the past two decades. In the presence of
missing values, this approach needs to be tailored to different missing data
mechanisms. In this paper, we focus on a flexible and generally applicable
missing data mechanism, which contains both ignorable and nonignorable missing
data mechanism assumptions. We show how the regularization approach for
variable selection can be adapted to the situation under this missing data
mechanism. The computational and theoretical properties for variable selection
consistency are established. The proposed method is further illustrated by
comprehensive simulation studies and real data analyses, for both low and high
dimensional settings.
| 0 | 0 | 0 | 1 | 0 | 0 |
Fast embedding of multilayer networks: An algorithm and application to group fMRI | Learning interpretable features from complex multilayer networks is a
challenging and important problem. The need for such representations is
particularly evident in multilayer networks of the brain, where nodal
characteristics may help model and differentiate regions of the brain according
to individual, cognitive task, or disease. Motivated by this problem, we
introduce the multi-node2vec algorithm, an efficient and scalable feature
engineering method that automatically learns continuous node feature
representations from multilayer networks. Multi-node2vec relies upon a
second-order random walk sampling procedure that efficiently explores the
inter- and intra-layer ties of the observed multilayer network to
identify multilayer neighborhoods. Maximum likelihood estimators of the nodal
features are identified through the use of the Skip-gram neural network model
on the collection of sampled neighborhoods. We investigate the conditions under
which multi-node2vec is an approximation of a closed-form matrix factorization
problem. We demonstrate the efficacy of multi-node2vec on a multilayer
functional brain network from resting state fMRI scans over a group of 74
healthy individuals. We find that multi-node2vec outperforms contemporary
methods on complex networks, and that multi-node2vec identifies nodal
characteristics that closely associate with the functional organization of the
brain.
| 1 | 0 | 0 | 0 | 0 | 0 |
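A toy sketch of multilayer random-walk sampling in the spirit of the abstract above: the walker either moves within the current layer or switches layers at the same node. The two-layer adjacency data, the flat layer-switch probability, and the omission of second-order (node2vec-style) biases are all assumptions; the sampled walks would then be fed to a Skip-gram (SGNS) trainer to learn continuous node features.

```python
# Multilayer random walks: move within the current layer or switch layers.
import random

random.seed(0)
# two toy layers over the same node set, as adjacency dicts (assumed data)
layers = [
    {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]},
    {0: [3], 1: [2], 2: [1], 3: [0, 2]},
]
r = 0.3          # probability of switching layer at each step (assumption)

def walk(start, length=10):
    node, layer = start, 0
    path = [node]
    for _ in range(length - 1):
        if random.random() < r:
            layer = random.randrange(len(layers))
        nbrs = layers[layer].get(node, [])
        if not nbrs:
            break
        node = random.choice(nbrs)
        path.append(node)
    return path

walks = [walk(v) for v in layers[0] for _ in range(5)]
print(walks[:3])
# These walks can be handed to any Skip-gram implementation to obtain features.
```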
Towards Better Summarizing Bug Reports with Crowdsourcing Elicited Attributes | Recent years have witnessed the growing demands for resolving numerous bug
reports in software maintenance. Aiming to reduce the time testers/developers
take in perusing bug reports, the task of bug report summarization has
attracted a lot of research efforts in the literature. However, no systematic
analysis has been conducted on attribute construction which heavily impacts the
performance of supervised algorithms for bug report summarization. In this
study, we first conduct a survey to reveal the existing methods for attribute
construction in mining software repositories. Then, we propose a new method
named Crowd-Attribute to infer new effective attributes from the crowd-generated
data in crowdsourcing and develop a new tool named Crowdsourcing Software
Engineering Platform to facilitate this method. With Crowd-Attribute, we
successfully construct 11 new attributes and propose a new supervised algorithm
named Logistic Regression with Crowdsourced Attributes (LRCA). To evaluate the
effectiveness of LRCA, we build a series of large scale data sets with 105,177
bug reports. Experiments over both the public data set SDS with 36 manually
annotated bug reports and new large-scale data sets demonstrate that LRCA can
consistently outperform the state-of-the-art algorithms for bug report
summarization.
| 1 | 0 | 0 | 0 | 0 | 0 |
Extraordinary linear dynamic range in laser-defined functionalized graphene photodetectors | Graphene-based photodetectors have demonstrated mechanical flexibility, large
operating bandwidth, and broadband spectral response. However, their linear
dynamic range (LDR) is limited by graphene's intrinsic hot-carrier dynamics,
which causes deviation from a linear photoresponse at low incident powers. At
the same time, multiplication of hot carriers causes the photoactive region to
be smeared over distances of a few micrometers, limiting the use of graphene
in high-resolution applications. We present a novel method for engineering
photoactive junctions in FeCl3-intercalated graphene using laser irradiation.
Photocurrent measured at these planar junctions shows an extraordinary linear
response with an LDR value at least 4500 times larger than that of other
graphene devices (44 dB) while maintaining high stability against environmental
contamination without the need for encapsulation. The observed photoresponse is
purely photovoltaic, demonstrating complete quenching of hot-carrier effects.
These results pave the way toward the design of ultrathin photodetectors with
unprecedented LDR for high-definition imaging and sensing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient Transfer Learning Schemes for Personalized Language Modeling using Recurrent Neural Network | In this paper, we propose an efficient transfer leaning methods for training
a personalized language model using a recurrent neural network with long
short-term memory architecture. With our proposed fast transfer learning
schemes, a general language model is updated to a personalized language model
with a small amount of user data and a limited computing resource. These
methods are especially useful for a mobile device environment where the data is
prevented from transferring out of the device for privacy purposes. Through
experiments on dialogue data in a drama, it is verified that our transfer
learning methods have successfully generated the personalized language model,
whose output is more similar to the personal language style in both qualitative
and quantitative aspects.
| 1 | 0 | 0 | 0 | 0 | 0 |
Convergence of ground state solutions for nonlinear Schrödinger equations on graphs | We consider the nonlinear Schrödinger equation $-\Delta u+(\lambda
a(x)+1)u=|u|^{p-1}u$ on a locally finite graph $G=(V,E)$. We prove via the
Nehari method that if $a(x)$ satisfies certain assumptions, for any
$\lambda>1$, the equation admits a ground state solution $u_\lambda$. Moreover,
as $\lambda\rightarrow \infty$, the solution $u_\lambda$ converges to a
solution of the Dirichlet problem $-\Delta u+u=|u|^{p-1}u$ which is defined on
the potential well $\Omega$. We also provide a numerical experiment which
solves the equation on a finite graph to illustrate our results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Error Analysis of the Stochastic Linear Feedback Particle Filter | This paper is concerned with the convergence and long-term stability analysis
of the feedback particle filter (FPF) algorithm. The FPF is an interacting
system of $N$ particles where the interaction is designed such that the
empirical distribution of the particles approximates the posterior
distribution. It is known that in the mean-field limit ($N=\infty$), the
distribution of the particles is equal to the posterior distribution. However,
little is known about the convergence to the mean-field limit. In this paper,
we consider the FPF algorithm for the linear Gaussian setting. In this setting,
the algorithm is similar to the ensemble Kalman-Bucy filter algorithm. Although
these algorithms have been numerically evaluated and widely used in
applications, their convergence and long-term stability analysis remains an
active area of research. In this paper, we show that, (i) the mean-field limit
is well-defined with a unique strong solution; (ii) the mean-field process is
stable with respect to the initial condition; (iii) we provide conditions such
that the finite-$N$ system is long term stable and we obtain some mean-squared
error estimates that are uniform in time.
| 1 | 0 | 0 | 0 | 0 | 0 |
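A discrete-time Euler sketch of a one-dimensional linear-Gaussian feedback particle filter, with the gain computed from the empirical particle variance. The particular update form below is one common variant and not necessarily the exact stochastic linear FPF analysed in the abstract; all model parameters are illustrative.

```python
# Euler discretisation of a 1-D linear-Gaussian feedback particle filter.
import numpy as np

rng = np.random.default_rng(0)
a, h, sig_b, sig_w = -0.5, 1.0, 0.3, 0.2   # signal drift, observation gain, noise levels
dt, T, Np = 0.01, 10.0, 200
steps = int(T / dt)

x = 1.0                                     # true hidden state
particles = rng.normal(1.0, 0.5, size=Np)
err = []
for _ in range(steps):
    # true state and observation increment
    x += a * x * dt + sig_b * np.sqrt(dt) * rng.normal()
    dZ = h * x * dt + sig_w * np.sqrt(dt) * rng.normal()
    # feedback particle filter update with empirical-variance gain
    mean = particles.mean()
    gain = particles.var() * h / sig_w**2
    innovation = dZ - h * (particles + mean) / 2 * dt
    particles += (a * particles * dt
                  + sig_b * np.sqrt(dt) * rng.normal(size=Np)
                  + gain * innovation)
    err.append((mean - x) ** 2)

print("time-averaged squared error of particle mean:", np.mean(err))
```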
Geometry in the Courtroom | There has been a recent media blitz on a cohort of mathematicians valiantly
working to fix America's democratic system by combatting gerrymandering with
geometry. While statistics commonly features in the courtroom (forensics, DNA
analysis, etc.), the gerrymandering news raises a natural question: in what
other ways has pure math, specifically geometry and topology, been involved in
court cases and legal scholarship? In this survey article, we collect a few
examples with topics ranging from the Pythagorean formula to the Ham Sandwich
Theorem, and we discuss some jurists' perspectives on geometric reasoning in
the legal realm. One of our goals is to provide math educators with engaging
real-world instances of some abstract geometric concepts.
| 0 | 0 | 1 | 0 | 0 | 0 |
Proof of Concept of Wireless TERS Monitoring | Temporary earth retaining structures (TERS) help prevent collapse during
construction excavation. To ensure that these structures are operating within
design specifications, load forces on supports must be monitored. Current
monitoring approaches are expensive, sparse, off-line, and thus difficult to
integrate into predictive models. This work aims to show that wirelessly
connected battery powered sensors are feasible, practical, and have similar
accuracy to existing sensor systems. We present the design and validation of
ReStructure, an end-to-end prototype wireless sensor network for collection,
communication, and aggregation of strain data. ReStructure was validated
through a six-month deployment on a real-life excavation site, with all but one
node producing valid and accurate strain measurements at higher frequency than
existing ones. These results and the lessons learnt provide the basis for
future widespread wireless TERS monitoring that increases measurement density
and integrates closely with predictive models to provide timely alerts of damage
or potential failure.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hyperrigid subsets of Cuntz-Krieger algebras and the property of rigidity at zero | A subset $\mathcal{G}$ generating a $C^*$-algebra $A$ is said to be
hyperrigid if for every faithful nondegenerate $*$-representation $A\subseteq
B(H)$ and a sequence $\phi_n:B(H) \to B(H)$ of unital completely positive maps,
we have that \[ \lim_{n\to\infty}\phi_n(g)= g~~\text{for all } g\in \mathcal{G}
~~ \implies ~~ \lim_{n\to\infty}\phi_n(a)= a~~\text{for all } a\in A \] where
all convergence is in norm. In this paper, we show that for the Cuntz-Krieger
algebra $\mathcal{O}(G)$ associated to a row-finite directed graph $G$ with no
isolated vertices, the set of partial isometries $\mathcal{E}=\{S_e:e\in E\}$
is hyperrigid.
In addition, we define and examine a closely related notion: the property of
rigidity at $0$. A generating subset $\mathcal{G}$ of a $C^*$-algebra $A$ is
said to be rigid at $0$ if for every sequence of contractive positive maps
$\varphi_n:A\to \mathbb C$ satisfying $\lim_{n\to \infty}\varphi_n(g)=0$ for
every $g\in \mathcal{G}$, we have that $\lim_{n\to \infty}\varphi_n(a)=0$ for
every $a\in A$.
We show that, when combined, hyperrigidity and rigidity at $0$ are equivalent
to a somewhat stronger notion of hyperrigidity, and we connect this to the
unique extension property. This, however, is not the case for the generating
set $\mathcal{E}$. More precisely, we show that for any graph $G$, subsets of
the Cuntz-Krieger family generating $\mathcal{O}(G)$ are rigid at $0$ if and
only if they contain every vertex projection.
| 0 | 0 | 1 | 0 | 0 | 0 |
Chiral Mott insulators in frustrated Bose-Hubbard models on ladders and two-dimensional lattices: a combined perturbative and density matrix renormalization group study | We study the fully gapped chiral Mott insulator (CMI) of frustrated
Bose-Hubbard models on ladders and two-dimensional lattices by perturbative
strong-coupling analysis and density matrix renormalization group (DMRG). First
we show the existence of a low-lying exciton state on all geometries carrying
the correct quantum numbers responsible for the condensation of excitons and
formation of the CMI in the intermediate interaction regime. Then we perform
systematic DMRG simulations on several two-leg ladder systems with $\pi$-flux
and carefully characterize the two quantum phase transitions. We discuss the
possibility to extend the generally very small CMI window by including
repulsive nearest-neighbour interactions or changing density and coupling
ratios.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thermodynamic Limit of Interacting Particle Systems over Time-varying Sparse Random Networks | We establish a functional weak law of large numbers for observable
macroscopic state variables of interacting particle systems (e.g., voter and
contact processes) over fast time-varying sparse random networks of
interactions. We show that, as the number of agents $N$ grows large, the
proportion of agents $\left(\overline{Y}_{k}^{N}(t)\right)$ at a certain state
$k$ converges in distribution -- or, more precisely, weakly with respect to the
uniform topology on the space of \emph{càdlàg} sample paths -- to the
solution of an ordinary differential equation over any compact interval
$\left[0,T\right]$. Although the limiting process is Markov, the prelimit
processes, i.e., the normalized macrostate vector processes
$\left(\mathbf{\overline{Y}}^{N}(t)\right)=\left(\overline{Y}_{1}^{N}(t),\ldots,\overline{Y}_{K}^{N}(t)\right)$,
are non-Markov as they are tied to the \emph{high-dimensional} microscopic
state of the system, which precludes the direct application of standard
arguments for establishing weak convergence. The techniques developed in the
paper for establishing weak convergence might be of independent interest.
| 0 | 0 | 1 | 0 | 0 | 0 |