text (string, lengths 138–2.38k) | labels (sequence, length 6) | Predictions (sequence, lengths 1–3) |
---|---|---|
Title: Note on character varieties and cluster algebras,
Abstract: We use Bonahon-Wong's trace map to study character varieties of the
once-punctured torus and of the 4-punctured sphere. We clarify the relationship
with the cluster algebra associated with ideal triangulations of surfaces, and we
show that the Goldman Poisson algebra of loops on surfaces is recovered from
the Poisson structure of the cluster algebra. It is also shown that cluster
mutations give automorphisms of the character varieties. Motivated by work
of Chekhov-Mazzocco-Rubtsov, we revisit confluences of punctures on the sphere
from a cluster-algebraic viewpoint, and we obtain the associated affine cubic
surfaces constructed by van der Put-Saito based on the Riemann-Hilbert
correspondence. We further study quantizations of the character varieties by
use of the quantum cluster algebra. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: An ALMA survey of submillimetre galaxies in the COSMOS field: The extent of the radio-emitting region revealed by 3 GHz imaging with the Very Large Array,
Abstract: We determine the radio size distribution of a large sample of 152 SMGs in
COSMOS that were detected with ALMA at 1.3 mm. For this purpose, we used the
observations taken by the VLA-COSMOS 3 GHz Large Project. One hundred and
fifteen of the 152 target SMGs were found to have a 3 GHz counterpart. The
median value of the major axis FWHM at 3 GHz is derived to be $4.6\pm0.4$ kpc.
The radio sizes show no evolutionary trend with redshift, nor any difference between
galaxy morphologies. We also derived the spectral indices between 1.4
and 3 GHz, and 3 GHz brightness temperatures for the sources, and the median
values were found to be $\alpha=-0.67$ and $T_{\rm B}=12.6\pm2$ K. Three of the
target SMGs, which are also detected with the VLBA, show clearly higher
brightness temperatures than the typical values. Although the observed radio
emission appears to be predominantly powered by star formation and supernova
activity, our results provide a strong indication of the presence of an AGN in
the VLBA and X-ray-detected SMG AzTEC/C61. The median radio-emitting size we
have derived is 1.5-3 times larger than the typical FIR dust-emitting sizes of
SMGs, but similar to that of the SMGs' molecular gas component traced through
mid-$J$ line emission of CO. The physical conditions of SMGs probably render
the diffusion of cosmic-ray electrons inefficient, and hence an unlikely
process to lead to the observed extended radio sizes. Instead, our results
point towards a scenario where SMGs are driven by galaxy interactions and
mergers. Besides triggering vigorous starbursts, galaxy collisions can also
pull out the magnetised fluids from the interacting disks, and give rise to a
taffy-like synchrotron-emitting bridge. This provides an explanation for the
spatially extended radio emission of SMGs, and can also cause a deviation from
the well-known IR-radio correlation. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Astrophysics"
] |
Title: Audio-replay attack detection countermeasures,
Abstract: This paper presents the Speech Technology Center (STC) replay attack
detection systems proposed for Automatic Speaker Verification Spoofing and
Countermeasures Challenge 2017. In this study we focused on a comparison of
different spoofing detection approaches: GMM-based methods, high-level
feature extraction with a simple classifier, and deep learning frameworks.
Experiments performed on the development and evaluation parts of the challenge
dataset demonstrated the stable efficiency of deep learning approaches under
changing acoustic conditions. At the same time, an SVM classifier with high-level
features provided a substantial contribution to the efficiency of the resulting
STC systems, according to the fusion system results. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Joint Scheduling and Transmission Power Control in Wireless Ad Hoc Networks,
Abstract: In this paper, we study how to determine concurrent transmissions and the
transmission power level of each link to maximize spectrum efficiency and
minimize energy consumption in a wireless ad hoc network. The optimal joint
transmission packet scheduling and power control strategy is determined when
the node density goes to infinity and the network area is unbounded. Based on
the asymptotic analysis, we determine the fundamental capacity limits of a
wireless network, subject to an energy consumption constraint. We propose a
scheduling and transmission power control mechanism to approach the optimal
solution to maximize spectrum and energy efficiencies in a practical network.
The distributed implementation of the proposed scheduling and transmission
power control scheme is presented based on our MAC framework proposed in [1].
Simulation results demonstrate that the proposed scheme achieves 40% higher
throughput than existing schemes. Also, the energy consumption using the
proposed scheme is about 20% of the energy consumed using existing power saving
MAC protocols. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Correlative cellular ptychography with functionalized nanoparticles at the Fe L-edge,
Abstract: Precise localization of nanoparticles within a cell is crucial to the
understanding of cell-particle interactions and has broad applications in
nanomedicine. Here, we report a proof-of-principle experiment for imaging
individual functionalized nanoparticles within a mammalian cell by correlative
microscopy. Using a chemically fixed HeLa cell labeled with fluorescent
core-shell nanoparticles as a model system, we implemented a graphene-oxide
layer as a substrate to significantly reduce background scattering. We
identified cellular features of interest by fluorescence microscopy, followed
by scanning transmission X-ray tomography to localize the particles in 3D, and
ptychographic coherent diffractive imaging of the fine features in the region
at high resolution. By tuning the X-ray energy to the Fe L-edge, we
demonstrated sensitive detection of nanoparticles composed of a 22 nm magnetic
Fe3O4 core encased by a 25-nm-thick fluorescent silica (SiO2) shell. These
fluorescent core-shell nanoparticles act as landmarks and offer clarity in a
cellular context. Our correlative microscopy results confirmed a subset of
particles to be fully internalized, and high-contrast ptychographic images
showed two oxidation states of individual nanoparticles with a resolution of
~16.5 nm. The ability to precisely localize individual fluorescent
nanoparticles within mammalian cells will expand our understanding of the
structure/function relationships for functionalized nanoparticles. | [
0,
1,
0,
0,
0,
0
] | [
"Quantitative Biology",
"Physics"
] |
Title: A Dynamic-Adversarial Mining Approach to the Security of Machine Learning,
Abstract: Operating in a dynamic real-world environment requires a forward-thinking and
adversarially aware design for classifiers, beyond fitting the model to the
training data. In such scenarios, it is necessary to make classifiers a) harder
to evade, b) better at detecting changes in the data distribution over
time, and c) able to retrain and recover from model degradation. While most
work on the security of machine learning has concentrated on the evasion
resistance problem (a), there is little work in the areas of reacting to
attacks (b and c). Additionally, while streaming data research concentrates on
the ability to react to changes in the data distribution, it often takes an
adversary-agnostic view of the security problem. This makes such systems vulnerable
to adversarial activity aimed at evading the concept drift
detection mechanism itself. In this paper, we analyze the security of machine
learning from a dynamic and adversarially aware perspective. The existing
techniques of restrictive one-class classifier models, complex learning models,
and randomization-based ensembles are shown to be myopic, as they approach
security as a static task. These methodologies are ill suited for a dynamic
environment, as they leak excessive information to an adversary, who can
subsequently launch attacks which are indistinguishable from the benign data.
Based on an empirical vulnerability analysis against a sophisticated adversary, a
novel feature-importance-hiding approach for classifier design is proposed.
The proposed design ensures that future attacks on classifiers can be detected
and recovered from. The proposed work provides motivation, by serving as a
blueprint, for future work in the area of dynamic-adversarial mining, which
combines lessons learned from streaming data mining, adversarial learning, and
cybersecurity. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Software stage-effort estimation based on association rule mining and fuzzy set theory,
Abstract: Relying on early effort estimation to predict the required number of
resources is often not sufficient, and can lead to under- or over-estimation.
It is widely acknowledged that the software development process should be
refined regularly and that software predictions made at an early stage of
development are still little more than guesses. Even good predictions are not
sufficient given the inherent uncertainty and risks. Stage-effort estimation
allows the project manager to re-allocate the correct number of resources,
re-schedule the project, and control project progress to finish on time and
within budget. In this paper we propose an approach that utilizes prior effort
records to predict stage effort. The proposed model combines concepts from
fuzzy set theory and association rule mining. The results were good in terms of
prediction accuracy and show potential to deliver good stage-effort estimation. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Vulnerability and co-susceptibility determine the size of network cascades,
Abstract: In a network, a local disturbance can propagate and eventually cause a
substantial part of the system to fail, in cascade events that are easy to
conceptualize but extraordinarily difficult to predict. Here, we develop a
statistical framework that can predict cascade size distributions by
incorporating two ingredients only: the vulnerability of individual components
and the co-susceptibility of groups of components (i.e., their tendency to fail
together). Using cascades in power grids as a representative example, we show
that correlations between component failures define structured and often
surprisingly large groups of co-susceptible components. Aside from their
implications for blackout studies, these results provide insights and a new
modeling framework for understanding cascades in financial systems, food webs,
and complex networks in general. | [
1,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics",
"Quantitative Finance"
] |
Title: Scale-invariant magnetoresistance in a cuprate superconductor,
Abstract: The anomalous metallic state in high-temperature superconducting cuprates is
masked by the onset of superconductivity near a quantum critical point. Use of
high magnetic fields to suppress superconductivity has enabled a detailed study
of the ground state in these systems. Yet, the direct effect of strong magnetic
fields on the metallic behavior at low temperatures is poorly understood,
especially near critical doping, $x=0.19$. Here we report a high-field
magnetoresistance study of thin films of LSCO cuprates in close vicinity to
critical doping, $0.161\leq x\leq0.190$. We find that the metallic state
exposed by suppressing superconductivity is characterized by a
magnetoresistance that is linear in magnetic field up to the highest measured
fields of $80$T. The slope of the linear-in-field resistivity is
temperature-independent at very high fields. It mirrors the magnitude and
doping evolution of the linear-in-temperature resistivity that has been
ascribed to Planckian dissipation near a quantum critical point. This
establishes true scale-invariant conductivity as the signature of the strange
metal state in the high-temperature superconducting cuprates. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Exploring one particle orbitals in large Many-Body Localized systems,
Abstract: Strong disorder in interacting quantum systems can give rise to the
phenomenon of Many-Body Localization (MBL), which defies thermalization due to
the formation of an extensive number of quasi local integrals of motion. The
one particle operator content of these integrals of motion is related to the
one particle orbitals of the one particle density matrix and shows a strong
signature across the MBL transition as recently pointed out by Bera et al.
[Phys. Rev. Lett. 115, 046603 (2015); Ann. Phys. 529, 1600356 (2017)]. We study
the properties of the one particle orbitals of many-body eigenstates of an MBL
system in one dimension. Using shift-and-invert MPS (SIMPS), a matrix product
state method to target highly excited many-body eigenstates introduced in
[Phys. Rev. Lett. 118, 017201 (2017)], we are able to obtain accurate results
for large systems of sizes up to L = 64. We find that the one particle orbitals
drawn from eigenstates at different energy densities have high overlap and
their occupations are correlated with the energy of the eigenstates. Moreover,
the standard deviation of the inverse participation ratio of these orbitals is
maximal at the nose of the mobility edge. Also, the one particle orbitals decay
exponentially in real space, with a correlation length that increases at low
disorder. In addition, we find a 1/f distribution of the coupling constants of
a certain range of the number operators of the one particle orbitals, which is
related to their exponential decay. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Detection of an Optical Counterpart to the ALFALFA Ultra-compact High Velocity Cloud AGC 249525,
Abstract: We report on the detection at $>$98% confidence of an optical counterpart to
AGC 249525, an Ultra-Compact High Velocity Cloud (UCHVC) discovered by the
ALFALFA blind neutral hydrogen survey. UCHVCs are compact, isolated HI clouds
with properties consistent with their being nearby low-mass galaxies, but
without identified counterparts in extant optical surveys. Analysis of the
resolved stellar sources in deep $g$- and $i$-band imaging from the WIYN pODI
camera reveals a clustering of possible Red Giant Branch stars associated with
AGC 249525 at a distance of 1.64$\pm$0.45 Mpc. Matching our optical detection
with the HI synthesis map of AGC 249525 from Adams et al. (2016) shows that the
stellar overdensity is exactly coincident with the highest-density HI contour
from that study. Combining our optical photometry and the HI properties of this
object yields an absolute magnitude of $-7.1 \leq M_V \leq -4.5$, a stellar
mass between $2.2\pm0.6\times10^4 M_{\odot}$ and $3.6\pm1.0\times10^5
M_{\odot}$, and an HI to stellar mass ratio between 9 and 144. This object has
stellar properties within the observed range of gas-poor Ultra-Faint Dwarfs in
the Local Group, but is gas-dominated. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Activation cross-section data for alpha-particle induced nuclear reactions on natural ytterbium for some longer lived radioisotopes,
Abstract: Additional experimental cross sections were deduced for the long half-life
activation products (172Hf and 173Lu) from alpha-particle-induced reactions
on ytterbium up to 38 MeV, based on late, long measurements, and for 175Yb and 167Tm
from a re-evaluation of earlier measured spectra. The cross sections are
compared with the earlier experimental datasets and with the data based on the
TALYS theoretical nuclear reaction model (available in the TENDL-2014 and 2015
libraries) and the ALICE-IPPE code. | [
0,
0,
0,
1,
0,
0
] | [
"Physics"
] |
Title: Congestion-Aware Distributed Network Selection for Integrated Cellular and Wi-Fi Networks,
Abstract: Intelligent network selection plays an important role in achieving an
effective data offloading in the integrated cellular and Wi-Fi networks.
However, previously proposed network selection schemes mainly focused on
offloading as much data traffic to Wi-Fi as possible, without systematically
considering the Wi-Fi network congestion and the ping-pong effect, both of
which may lead to a poor overall user quality of experience. Thus, in this
paper, we study a more practical network selection problem by considering both
the impacts of the network congestion and switching penalties. More
specifically, we formulate the users' interactions as a Bayesian network
selection game (NSG) under the incomplete information of the users' mobilities.
We prove that it is a Bayesian potential game and show the existence of a pure
Bayesian Nash equilibrium that can be easily reached. We then propose a
distributed network selection (DNS) algorithm based on the network congestion
statistics obtained from the operator. Furthermore, we show that computing the
optimal centralized network allocation is an NP-hard problem, which further
justifies our distributed approach. Simulation results show that the DNS
algorithm achieves the highest user utility and a good fairness among users, as
compared with the on-the-spot offloading and cellular-only benchmark schemes. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A Theory of Exoplanet Transits with Light Scattering,
Abstract: Exoplanet transit spectroscopy enables the characterization of distant
worlds, and will yield key results for NASA's James Webb Space Telescope.
However, transit spectra models are often simplified, omitting potentially
important processes like refraction and multiple scattering. While the former
process has seen recent development, the effects of multiple scattering of light
on exoplanet transit spectra have received little attention. Here, we develop a
detailed theory of exoplanet transit spectroscopy that extends to the full
refracting and multiple scattering case. We explore the importance of
scattering for planet-wide cloud layers, where the relevant parameters are the
slant scattering optical depth, the scattering asymmetry parameter, and the
angular size of the host star. The latter determines the size of the "target"
for a photon that is back-mapped from an observer. We provide results that
straightforwardly indicate the potential importance of multiple scattering for
transit spectra. When the orbital distance is smaller than 10-20 times the
stellar radius, multiple scattering effects for aerosols with asymmetry
parameters larger than 0.8-0.9 can become significant. We provide examples of
the impacts of cloud/haze multiple scattering on transit spectra of a hot
Jupiter-like exoplanet. For cases with a forward and conservatively scattering
cloud/haze, differences due to multiple scattering effects can exceed 200 ppm,
but shrink to zero at wavelength ranges corresponding to strong gas absorption
or when the slant optical depth of the cloud exceeds several tens. We conclude
with a discussion of types of aerosols for which multiple scattering in transit
spectra may be important. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: A repulsive skyrmion chain as guiding track for a race track memory,
Abstract: A skyrmion racetrack design is proposed that allows for thermally stable
skyrmions to code information and dynamical pinning sites that move with the
applied current. This concept solves the problem of intrinsic distributions of
pinning times and pinning currents of skyrmions at static geometrical or
magnetic pinning sites. The dynamical pinning sites are realized by a skyrmion
carrying wire, where the skyrmion repulsion is used in order to keep the
skyrmions at equal distances. The information is coded in an additional layer,
where the presence or absence of a skyrmion encodes a bit. The lowest energy
barrier for data loss is calculated to be $\Delta E = 55\,k_{\rm B}T_{300}$,
which is sufficient for long-time thermal stability. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Compressive Sensing-Based Detection with Multimodal Dependent Data,
Abstract: Detection with high dimensional multimodal data is a challenging problem when
there are complex inter- and intra-modal dependencies. While several
approaches have been proposed for dependent data fusion (e.g., based on copula
theory), their advantages come at a high price in terms of computational
complexity. In this paper, we treat the detection problem with compressive
sensing (CS) where compression at each sensor is achieved via low dimensional
random projections. CS has recently been exploited to solve detection problems
under various assumptions on the signals of interest; however, its potential
for dependent data fusion has not been explored adequately. We exploit the
capability of CS to capture statistical properties of uncompressed data in
order to compute decision statistics for detection in the compressed domain.
First, a Gaussian approximation is employed to perform likelihood ratio (LR)
based detection with compressed data. In this approach, inter-modal dependence
is captured via a compressed version of the covariance matrix of the
concatenated (temporally and spatially) uncompressed data vector. We show that,
under certain conditions, this approach with a small number of compressed
measurements per node leads to enhanced performance compared to detection with
uncompressed data using widely considered suboptimal approaches. Second, we
develop a nonparametric approach where a decision statistic based on the second
order statistics of uncompressed data is computed in the compressed domain. The
second approach is promising compared with other related nonparametric approaches,
and with the first approach, when the multimodal data are highly correlated, at
the expense of slightly increased computational complexity. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Factors in Recommending Contrarian Content on Social Media,
Abstract: Polarization is a troubling phenomenon that can lead to societal divisions
and hurt the democratic process. It is therefore important to develop methods
to reduce it.
We propose an algorithmic solution to the problem of reducing polarization.
The core idea is to expose users to content that challenges their point of
view, with the hope of broadening their perspective and thus reducing their
polarity. Our method takes into account several aspects of the problem, such as
the estimated polarity of the user, the probability of accepting the
recommendation, the polarity of the content, and the popularity of the content
being recommended.
We evaluate our recommendations via a large-scale user study on Twitter users
who were actively involved in the discussion of the US election results.
Results show that, in most cases, the factors taken into account in the
recommendation affect the users as expected, and thus capture the essential
features of the problem. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Adaptive p-value weighting with power optimality,
Abstract: Weighting the p-values is a well-established strategy that improves the power
of multiple testing procedures while dealing with heterogeneous data. However,
how to achieve this task in an optimal way is rarely considered in the
literature. This paper contributes to filling this gap in the case of
group-structured null hypotheses, by introducing a new class of procedures
named ADDOW (for Adaptive Data Driven Optimal Weighting) that adapts both to
the alternative distribution and to the proportion of true null hypotheses. We
prove the asymptotic FDR control and power optimality of ADDOW among all
weighted procedures, which shows that it dominates all existing procedures in
that framework. Some numerical experiments show that the proposed method
preserves its optimal properties in the finite sample setting when the number
of tests is moderately large. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Perceptual Context in Cognitive Hierarchies,
Abstract: Cognition does not only depend on bottom-up sensor feature abstraction, but
also relies on contextual information being passed top-down. Context is higher
level information that helps to predict belief states at lower levels. The main
contribution of this paper is to provide a formalisation of perceptual context
and its integration into a new process model for cognitive hierarchies. Several
simple instantiations of a cognitive hierarchy are used to illustrate the role
of context. Notably, we demonstrate the use of context in a novel approach to
visually track the pose of rigid objects with just a 2D camera. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Coherence for braided and symmetric pseudomonoids,
Abstract: Presentations for unbraided, braided and symmetric pseudomonoids are defined.
Biequivalences characterising the semistrict bicategories generated by these
presentations are proven. It is shown that these biequivalences categorify
results in the theory of monoids and commutative monoids, and generalise
standard coherence theorems for braided and symmetric monoidal categories. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Toward construction of a consistent field theory with Poincare covariance in terms of step-function-type basis functions showing confinement/deconfinement, mass-gap and Regge trajectory for non-pure/pure non-Abelian gauge fields,
Abstract: This article is a review by the authors concerning the construction of a
Poincaré covariant (owing to the spacetime continuum)
field-theoretic formalism in terms of step-function-type basis functions
without ultraviolet divergences. This formalism analytically derives
confinement/deconfinement, mass-gap and Regge trajectory for non-Abelian gauge
fields, and gives solutions for self-interacting scalar fields. Fields
propagate in the spacetime continuum, and fields with finite degrees of freedom
toward the continuum limit have no ultraviolet divergences. Basis functions defined
in a parameter spacetime are mapped to real spacetime. The authors derive a new
solution comprised of classical fields as a vacuum and quantum fluctuations,
leading to the linear potential between the particle and antiparticle from the
Wilson loop. The Polyakov line gives finite binding energies and reveals the
deconfining property at high temperatures. The quantum action yields positive
mass from the classical fields, and quantum fluctuations produce the Coulomb
potential. Pure Yang-Mills fields show the same mass-gap owing to the
particle-antiparticle pair creation. The Dirac equation under linear potential
is analytically solved in this formalism, reproducing the principal properties
of Regge trajectories at a quantum level. A further outlook mentions the
possibility that the difference between conventional continuum wave functions
and the present wave functions is responsible for the cosmological constant. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Fast Generation for Convolutional Autoregressive Models,
Abstract: Convolutional autoregressive models have recently demonstrated
state-of-the-art performance on a number of generation tasks. While fast,
parallel training methods have been crucial for their success, generation is
typically implemented in a naïve fashion where redundant computations are
unnecessarily repeated. This results in slow generation, making such models
infeasible for production environments. In this work, we describe a method to
speed up generation in convolutional autoregressive models. The key idea is to
cache hidden states to avoid redundant computation. We apply our fast
generation method to the Wavenet and PixelCNN++ models and achieve up to
$21\times$ and $183\times$ speedups respectively. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Variable domain N-linked glycosylation and negative surface charge are key features of monoclonal ACPA: implications for B-cell selection,
Abstract: Autoreactive B cells have a central role in the pathogenesis of rheumatoid
arthritis (RA), and recent findings have proposed that anti-citrullinated
protein autoantibodies (ACPA) may be directly pathogenic. Herein, we
demonstrate the frequency of variable-region glycosylation in single-cell
cloned mAbs. A total of 14 ACPA mAbs were evaluated for predicted N-linked
glycosylation motifs in silico and compared to 452 highly-mutated mAbs from RA
patients and controls. Variable region N-linked motifs (N-X-S/T) were
strikingly prevalent within ACPA (100%) compared to somatically hypermutated
(SHM) RA bone marrow plasma cells (21%), and synovial plasma cells from
seropositive (39%) and seronegative RA (7%). When normalized for SHM, ACPA
still had significantly higher frequency of N-linked motifs compared to all
studied mAbs including highly-mutated HIV broadly-neutralizing and
malaria-associated mAbs. The Fab glycans of ACPA-mAbs were highly sialylated,
contributed to altered charge, but did not influence antigen binding. The
analysis revealed evidence of unusual B-cell selection pressure and an
SHM-mediated decrease in surface charge and isoelectric point in ACPA. It is
still unknown how these distinct features of anti-citrulline immunity may have
an impact on pathogenesis. However, it is evident that they offer selective
advantages for ACPA+ B cells, possibly also through non-antigen driven
mechanisms. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology"
] |
Title: Counterexample Guided Inductive Optimization,
Abstract: This paper describes three variants of a counterexample guided inductive
optimization (CEGIO) approach based on Satisfiability Modulo Theories (SMT)
solvers. In particular, CEGIO relies on iterative executions to constrain a
verification procedure, in order to perform inductive generalization, based on
counterexamples extracted from SMT solvers. CEGIO is able to successfully
optimize a wide range of functions, including non-linear and non-convex
optimization problems based on SMT solvers, in which data provided by
counterexamples are employed to guide the verification engine, thus reducing
the optimization domain. The present algorithms are evaluated using a large set
of benchmarks typically employed for evaluating optimization techniques.
Experimental results show the efficiency and effectiveness of the proposed
algorithms, which find the optimal solution in all evaluated benchmarks, while
traditional techniques are usually trapped by local minima. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Bounds for the difference between two Čebyšev functionals,
Abstract: In this work, a generalization of the pre-Grüss inequality is established.
Several bounds for the difference between two Čebyšev functionals are
proved. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Model Checking of Cache for WCET Analysis Refinement,
Abstract: On real-time systems running under timing constraints, scheduling can be
performed when one is aware of the worst case execution time (WCET) of tasks.
Usually, the WCET of a task is unknown and schedulers make use of safe
over-approximations given by static WCET analysis. To reduce the
over-approximation, WCET analysis has to gain information about the underlying
hardware behavior, such as pipelines and caches. In this paper, we focus on the
cache analysis, which classifies memory accesses as hits/misses according to
the set of possible cache states. We propose to refine the results of classical
cache analysis using a model checker, introducing a new cache model for the
least recently used (LRU) policy. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Rational Solutions of the Painlevé-II Equation Revisited,
Abstract: The rational solutions of the Painlevé-II equation appear in several
applications and are known to have many remarkable algebraic and analytic
properties. They also have several different representations, useful in
different ways for establishing these properties. In particular,
Riemann-Hilbert representations have proven to be useful for extracting the
asymptotic behavior of the rational solutions in the limit of large degree
(equivalently the large-parameter limit). We review the elementary properties
of the rational Painlevé-II functions, and then we describe three different
Riemann-Hilbert representations of them that have appeared in the literature: a
representation by means of the isomonodromy theory of the Flaschka-Newell Lax
pair, a second representation by means of the isomonodromy theory of the
Jimbo-Miwa Lax pair, and a third representation found by Bertola and Bothner
related to pseudo-orthogonal polynomials. We prove that the Flaschka-Newell and
Bertola-Bothner Riemann-Hilbert representations of the rational Painlevé-II
functions are explicitly connected to each other. Finally, we review recent
results describing the asymptotic behavior of the rational Painlevé-II
functions obtained from these Riemann-Hilbert representations by means of the
steepest descent method. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: PHAST: Protein-like heteropolymer analysis by statistical thermodynamics,
Abstract: PHAST is a software package written in standard Fortran, with MPI and CUDA
extensions, able to efficiently perform parallel multicanonical Monte Carlo
simulations of single or multiple heteropolymeric chains, as coarse-grained
models for proteins. The outcome data can be straightforwardly analyzed within
its microcanonical Statistical Thermodynamics module, which allows for
computing the entropy, caloric curve, specific heat and free energies. As a
case study, we investigate the aggregation of heteropolymers bioinspired on
$A\beta_{25-33}$ fragments and their cross-seeding with $IAPP_{20-29}$
isoforms. Excellent parallel scaling is observed, even under numerically
difficult first-order-like phase transitions, which are properly described by
the built-in, fully reconfigurable force fields. Moreover, the package is free and
open source, which should motivate users to readily adapt it to specific
purposes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology",
"Statistics"
] |
Title: Is the annual growth rate in balance of trade time series for Ireland nonlinear,
Abstract: We describe the Time Series Multivariate Adaptive Regressions Splines
(TSMARS) method. This method is useful for identifying nonlinear structure in a
time series. We use TSMARS to model the annual change in the balance of trade
for Ireland from 1970 to 2007. We compare the TSMARS estimate with long memory
ARFIMA estimates and long-term parsimonious linear models. We show that the
change in the balance of trade is nonlinear and possesses weakly long range
effects. Moreover, we compare the period prior to the introduction of the
Intrastat system in 1993 with the period from 1993 onward. Here we show that in
the earlier period the series had a substantial linear signal embedded in it
suggesting that estimation efforts in the earlier period may have resulted in
an over-smoothed series. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
Title: Sparse-View X-Ray CT Reconstruction Using $\ell_1$ Prior with Learned Transform,
Abstract: A major challenge in X-ray computed tomography (CT) is reducing radiation
dose while maintaining high quality of reconstructed images. To reduce the
radiation dose, one can reduce the number of projection views (sparse-view CT);
however, it becomes difficult to achieve high quality image reconstruction as
the number of projection views decreases. Researchers have applied the concept
of learning sparse representations from (high-quality) CT image datasets to
sparse-view CT reconstruction. We propose a new statistical CT reconstruction
model that combines penalized weighted-least squares (PWLS) and $\ell_1$
regularization with learned sparsifying transform (PWLS-ST-$\ell_1$), and an
algorithm for PWLS-ST-$\ell_1$. Numerical experiments for sparse-view 2D
fan-beam CT and 3D axial cone-beam CT show that the $\ell_1$ regularizer
significantly improves the sharpness of edges of reconstructed images compared
to the CT reconstruction methods using edge-preserving regularizer and $\ell_2$
regularization with learned ST. | [
1,
1,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
Title: Dirac Composite Fermion - A Particle-Hole Spinor,
Abstract: The particle-hole (PH) symmetry at the half-filled Landau level requires the
relationship between the flux number $N_\phi$ and the particle number $N$ on a
sphere to be exactly $N_\phi - 2(N-1) = 1$. The wave functions of composite
fermions with 1/2 "orbital spin", which contributes to the shift "1" in the
$N_\phi$ and $N$ relationship, are proposed, shown to be PH symmetric, and validated
with exact finite-system results. It is shown that the many-body composite electron
and composite hole wave functions at half-filling can be formed from the two
components of the same spinor wave function of a massless Dirac fermion at
zero-magnetic field. It is further shown that away from half-filling, the
many-body composite electron wave function at filling factor $\nu$ and its PH
conjugated composite hole wave function at $1-\nu$ can be formed from the two
components of the very same spinor wave functions of a massless Dirac fermion
at non-zero magnetic field. This relationship leads to the proposal of a very
simple Dirac composite fermion effective field theory, where the two-component
Dirac fermion field is a particle-hole spinor field coupled to the same
emergent gauge field, with one field component describing the composite
electrons and the other describing the PH conjugated composite holes. As such,
the density of the Dirac spinor field is the density sum of the composite
electron and hole field components, and therefore is equal to the degeneracy of
the Lowest Landau level. On the other hand, the charge density coupled to the
external magnetic field is the density difference between the composite
electron and hole field components, and is therefore neutral at exactly
half-filling. It is shown that the proposed particle-hole spinor effective
field theory gives essentially the same electromagnetic responses as Son's
Dirac composite fermion theory does. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Safe Adaptive Importance Sampling,
Abstract: Importance sampling has become an indispensable strategy to speed up
optimization algorithms for large-scale applications. Improved adaptive
variants - using importance values defined by the complete gradient information
which changes during optimization - enjoy favorable theoretical properties, but
are typically computationally infeasible. In this paper we propose an efficient
approximation of gradient-based sampling, which is based on safe bounds on the
gradient. The proposed sampling distribution is (i) provably the best sampling
with respect to the given bounds, (ii) always better than uniform sampling and
fixed importance sampling and (iii) can efficiently be computed - in many
applications at negligible extra cost. The proposed sampling scheme is generic
and can easily be integrated into existing algorithms. In particular, we show
that coordinate-descent (CD) and stochastic gradient descent (SGD) can enjoy
a significant speed-up under the novel scheme. The proven efficiency of the
proposed sampling is verified by extensive numerical testing. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
Title: Categorically closed topological groups,
Abstract: Let $\mathcal C$ be a subcategory of the category of topologized semigroups
and their partial continuous homomorphisms. An object $X$ of the category
${\mathcal C}$ is called ${\mathcal C}$-closed if for each morphism $f:X\to Y$
of the category ${\mathcal C}$ the image $f(X)$ is closed in $Y$. In the paper
we detect topological groups which are $\mathcal C$-closed for the categories
$\mathcal C$ whose objects are Hausdorff topological (semi)groups and whose
morphisms are isomorphic topological embeddings, injective continuous
homomorphisms, continuous homomorphisms, or partial continuous homomorphisms
with closed domain. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Extracting significant signal of news consumption from social networks: the case of Twitter in Italian political elections,
Abstract: According to the Eurobarometer report about EU media use of May 2018, the
number of European citizens who consult on-line social networks for accessing
information is considerably increasing. In this work we analyze approximately
$10^6$ tweets exchanged during the last Italian elections. By using an
entropy-based null model discounting the activity of the users, we first
identify potential political alliances within the group of verified accounts:
if two verified users are retweeted more than expected by the non-verified
ones, they are likely to be related. Then, we derive the users' affiliation to
a coalition measuring the polarization of unverified accounts. Finally, we
study the bipartite directed representation of the tweets and retweets network,
in which tweets and users are collected on the two layers. Users with the
highest out-degree identify the most popular ones, whereas highest out-degree
posts are the most "viral". We identify significant content spreaders by
statistically validating the connections that cannot be explained by users'
tweeting activity and posts' virality by using an entropy-based null model as
benchmark. The analysis of the directed network of validated retweets reveals
signals of the alliances formed after the elections, highlighting commonalities
of interests before the event of the national elections. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Cloud Radiative Effect Study Using Sky Camera,
Abstract: The analysis of clouds in the earth's atmosphere is important for a variety
of applications, viz. weather reporting, climate forecasting, and solar energy
generation. In this paper, we focus our attention on the impact of cloud on the
total solar irradiance reaching the earth's surface. We use a weather station to
record the total solar irradiance. Moreover, we employ a collocated ground-based
sky camera to automatically compute the instantaneous cloud coverage. We
analyze the relationship between measured solar irradiance and computed cloud
coverage value, and conclude that higher cloud coverage greatly impacts the
total solar irradiance. Such studies will immensely help in solar energy
generation and forecasting. | [
1,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On Testing Machine Learning Programs,
Abstract: Nowadays, we are witnessing a wide adoption of Machine learning (ML) models
in many safety-critical systems, thanks to recent breakthroughs in deep
learning and reinforcement learning. Many people are now interacting with
systems based on ML every day, e.g., voice recognition systems used by virtual
personal assistants like Amazon Alexa or Google Home. As the field of ML
continues to grow, we are likely to witness transformative advances in a wide
range of areas, from finance, energy, to health and transportation. Given this
growing importance of ML-based systems in our daily life, it is becoming
utterly important to ensure their reliability. Recently, software researchers
have started adapting concepts from the software testing domain (e.g., code
coverage, mutation testing, or property-based testing) to help ML engineers
detect and correct faults in ML programs. This paper reviews current existing
testing practices for ML programs. First, we identify and explain challenges
that should be addressed when testing ML programs. Next, we report existing
solutions found in the literature for testing ML programs. Finally, we identify
gaps in the literature related to the testing of ML programs and make
recommendations of future research directions for the scientific community. We
hope that this comprehensive review of software testing practices will help ML
engineers identify the right approach to improve the reliability of their
ML-based systems. We also hope that the research community will act on our
proposed research directions to advance the state of the art of testing for ML
programs. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Scalable and Efficient Statistical Inference with Estimating Functions in the MapReduce Paradigm for Big Data,
Abstract: The theory of statistical inference along with the strategy of
divide-and-conquer for large-scale data analysis has recently attracted
considerable interest due to great popularity of the MapReduce programming
paradigm in the Apache Hadoop software framework. The central analytic task in
the development of statistical inference in the MapReduce paradigm pertains to
the method of combining results yielded from separately mapped data batches.
One seminal solution based on the confidence distribution has recently been
established in the setting of maximum likelihood estimation in the literature.
This paper concerns a more general inferential methodology based on estimating
functions, termed as the Rao-type confidence distribution, of which the maximum
likelihood is a special case. This generalization provides a unified framework
of statistical inference that allows regression analyses of massive data sets
of important types in a parallel and scalable fashion via a distributed file
system, including longitudinal data analysis, survival data analysis, and
quantile regression, which cannot be handled using the maximum likelihood
method. This paper investigates four important properties of the proposed
method: computational scalability, statistical optimality, methodological
generality, and operational robustness. In particular, the proposed method is
shown to be closely connected to Hansen's generalized method of moments (GMM)
and Crowder's optimality. An interesting theoretical finding is that the
asymptotic efficiency of the proposed Rao-type confidence distribution
estimator is always greater than or equal to that of the estimator obtained by processing
the full data once. All these properties of the proposed method are illustrated
via numerical examples in both simulation studies and real-world data analyses. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
Title: Analysis of Sequence Polymorphism of LINEs and SINEs in Entamoeba histolytica,
Abstract: The goal of this dissertation is to study the sequence polymorphism in
retrotransposable elements of Entamoeba histolytica. Quasispecies theory, an
equilibrium (stationary) concept, has been used to understand the behaviour
of these elements. Two datasets of retrotransposons of Entamoeba histolytica
have been used. We present results from both datasets of retrotransposons
(SINE1s) of E. histolytica. We have calculated the mutation rate of EhSINE1s
for both datasets and drawn a phylogenetic tree for newly determined EhSINE1s
(dataset II). We have also discussed the variation in lengths of EhSINE1s for
both datasets. Using the quasispecies model, we have shown how sequences of
SINE1s vary within the population. The outputs of the quasispecies model are
discussed in the presence and the absence of back mutation by taking different
values of fitness. From our study of non-long terminal repeat retrotransposons
(LINEs and their non-autonomous partners, SINEs) of Entamoeba histolytica, we
can conclude that an active EhSINE can generate very similar copies of itself
by retrotransposition. As a result, mutations accumulate, producing
sequence polymorphism. We have concluded that the mutation rate
of SINEs is very high. This high mutation rate provides insight into the
existence of SINEs, and may affect the genetic analysis of EhSINE1
ancestries and the calculation of phylogenetic distances. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology"
] |
Title: Classification of digital affine noncommutative geometries,
Abstract: It is known that connected translation invariant $n$-dimensional
noncommutative differentials $d x^i$ on the algebra $k[x^1,\cdots,x^n]$ of
polynomials in $n$-variables over a field $k$ are classified by commutative
algebras $V$ on the vector space spanned by the coordinates. This data also
applies to construct differentials on the Heisenberg algebra `spacetime' with
relations $[x^\mu,x^\nu]=\lambda\Theta^{\mu\nu}$ where $ \Theta$ is an
antisymmetric matrix as well as to Lie algebras with pre-Lie algebra
structures. We specialise the general theory to the field $k={\mathbb{F}}_2$
of two elements, in which case translation invariant metrics (i.e. with
constant coefficients) are equivalent to making $V$ a Frobenius algebra. We
classify all of these and their quantum Levi-Civita bimodule connections for
$n=2,3$, with partial results for $n=4$. For $n=2$ we find 3 inequivalent
differential structures admitting 1,2 and 3 invariant metrics respectively. For
$n=3$ we find 6 differential structures admitting $0,1,2,3,4,7$ invariant
metrics respectively. We give some examples for $n=4$ and general $n$.
Surprisingly, not all our geometries for $n\ge 2$ have zero quantum Riemann
curvature. Quantum gravity is normally seen as a weighted `sum' over all
possible metrics but our results are a step towards a deeper approach in which
we must also `sum' over differential structures. Over ${\mathbb{F}}_2$ we
construct some of our algebras and associated structures by digital gates,
opening up the possibility of `digital geometry'. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Eliminating higher-multiplicity intersections in the metastable dimension range,
Abstract: The $r$-fold analogues of the Whitney trick have been `in the air' since the 1960s.
However, only in this century were they stated, proved, and applied to obtain
interesting results, most notably by Mabillard and Wagner. Here we prove and
apply a version of the $r$-fold Whitney trick when general position $r$-tuple
intersections have positive dimension.
Theorem. Assume that $D=D_1\sqcup\ldots\sqcup D_r$ is a disjoint union of
$k$-dimensional disks, $rd\ge (r+1)k+3$, and $f:D\to B^d$ is a proper PL (smooth)
map such that $f\partial D_1\cap\ldots\cap f\partial D_r=\emptyset$. If the map
$$f^r:\partial(D_1\times\ldots\times D_r)\to
(B^d)^r-\{(x,x,\ldots,x)\in(B^d)^r\ |\ x\in B^d\}$$ extends to
$D_1\times\ldots\times D_r$, then there is a proper PL (smooth) map $\overline
f:D\to B^d$ such that $\overline f=f$ on $\partial D$ and $\overline
fD_1\cap\ldots\cap \overline fD_r=\emptyset$. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Manipulative Elicitation -- A New Attack on Elections with Incomplete Preferences,
Abstract: Lu and Boutilier proposed a novel approach based on "minimax regret" to use
classical score based voting rules in the setting where preferences can be any
partial (instead of complete) orders over the set of alternatives. We show here
that such an approach is vulnerable to a new kind of manipulation which was not
present in the classical (where preferences are complete orders) world of
voting. We call this attack "manipulative elicitation." More specifically, it
may be possible to (partially) elicit the preferences of the agents in a way
that makes some distinguished alternative win the election, even though it may
not be a winner if we elicit every preference completely. More alarmingly, we show that
the related computational task is polynomial time solvable for a large class of
voting rules which includes all scoring rules, maximin, Copeland$^\alpha$ for
every $\alpha\in[0,1]$, simplified Bucklin voting rules, etc. We then show that
introducing a parameter per pair of alternatives which specifies the minimum
number of partial preferences where this pair of alternatives must be
comparable makes the related computational task of manipulative elicitation
NP-complete for all common voting rules, including a class of scoring rules which
includes the plurality, $k$-approval, $k$-veto, veto, and Borda voting rules,
maximin, Copeland$^\alpha$ for every $\alpha\in[0,1]$, and simplified Bucklin
voting rules. Hence, in this work, we discover a fundamental vulnerability in
using minimax regret based approach in partial preferential setting and propose
a novel way to tackle it. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks,
Abstract: Event sequences, asynchronously generated with random timestamps, are ubiquitous
among applications. The precise and arbitrary timestamps can carry important
clues about the underlying dynamics, and make event data fundamentally
different from time series, whereby a series is indexed with fixed and equal
time intervals. One expressive mathematical tool for modeling events is the
point process. The intensity functions of many point processes involve two
components: the background and the effect of the history. Due to its inherent
spontaneousness, the background can be treated as a time series, while the other
component needs to handle the history events. In this paper, we model the background by a
Recurrent Neural Network (RNN) with its units aligned with time series indexes
while the history effect is modeled by another RNN whose units are aligned with
asynchronous events to capture the long-range dynamics. The whole model with
event type and timestamp prediction output layers can be trained end-to-end.
Our approach takes an RNN perspective to point process, and models its
background and history effect. For utility, our method allows a black-box
treatment for modeling the intensity which is often a pre-defined parametric
form in point processes. Meanwhile, end-to-end training opens the avenue for
reusing rich existing techniques in deep networks for point process modeling. We
apply our model to the predictive maintenance problem using a log dataset by
more than 1000 ATMs from a global bank headquartered in North America. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: A note on relative amenability of finite von Neumann algebras,
Abstract: Let $M$ be a finite von Neumann algebra (resp. a type II$_{1}$ factor) and
let $N\subset M$ be a II$_{1}$ factor (resp. $N\subset M$ have an atomic part).
We prove that amenability of the inclusion $N\subset M$ implies that the identity map
on $M$ has an approximate factorization through $M_m(\mathbb{C})\otimes N$ via
trace-preserving normal unital completely positive maps, which is a
generalization of a result of Haagerup. We also prove two permanence properties
for amenable inclusions: one is the weak Haagerup property, the other is weak
exactness. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: On stably trivial spin torsors over low-dimensional schemes,
Abstract: The paper discusses stably trivial torsors for spin and orthogonal groups
over smooth affine schemes over infinite perfect fields of characteristic
unequal to 2. We give a complete description of all the invariants relevant for
the classification of such objects over schemes of dimension at most $3$, along
with many examples. The results are based on the
$\mathbb{A}^1$-representability theorem for torsors and transfer of known
computations of $\mathbb{A}^1$-homotopy sheaves along the sporadic isomorphisms
to spin groups. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: On the relation between dependency distance, crossing dependencies, and parsing. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al,
Abstract: Liu et al. (2017) provide a comprehensive account of research on dependency
distance in human languages. While the article is a very rich and useful report
on this complex subject, here I will expand on a few specific issues where
research in computational linguistics (specifically natural language
processing) can inform DDM research, and vice versa. These aspects have not
been explored much in the article by Liu et al. or elsewhere, probably due to
the little overlap between both research communities, but they may provide
interesting insights for improving our understanding of the evolution of human
languages, the mechanisms by which the brain processes and understands
language, and the construction of effective computer systems to achieve this
goal. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: One-dimensional plasmonic hotspots located between silver nanowire dimers evaluated by surface-enhanced resonance Raman scattering,
Abstract: Hotspots of surface-enhanced resonance Raman scattering (SERRS) are localized
within 1 nm at gaps or crevices of plasmonic nanoparticle (NP) dimers. We
demonstrate SERRS hotspots with volumes that are extended in one dimension tens
of thousands of times compared to standard zero-dimensional hotspots, using gaps or
crevices of plasmonic nanowire (NW) dimers. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: An efficient methodology for the analysis and modeling of computer experiments with large number of inputs,
Abstract: Complex computer codes are often too time-expensive to be used
directly to perform uncertainty, sensitivity, optimization and robustness
analyses. A widely accepted method to circumvent this problem consists in
replacing CPU-time-expensive computer models with inexpensive mathematical
functions, called metamodels. For example, the Gaussian process (Gp) model has
shown strong capabilities to solve practical problems, which often involve
several interlinked issues. However, in the case of high-dimensional
experiments (with typically several tens of inputs), the Gp metamodel building
process remains difficult, even infeasible, and the application of variable
selection techniques cannot be avoided. In this paper, we present a general
methodology for building a Gp metamodel with a large number of inputs in a very
efficient manner. While our work focuses on the Gp metamodel, its principles
are fully generic and can be applied to any type of metamodel. The objective is
to estimate a highly predictive metamodel from a minimal number of computer
experiments. This methodology is successfully applied to an industrial computer
code. | [
0,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
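
A minimal sketch, assuming scikit-learn and numpy, of one simple way to couple Gp metamodelling with variable selection in many dimensions: fit an anisotropic Gaussian process (one length-scale per input) and rank inputs by their fitted length-scales, so inputs with very large length-scales can be screened out before building the final metamodel. The toy test function and the ranking rule are assumptions for illustration, not the methodology of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
d, n = 20, 200                          # 20 inputs, 200 computer experiments
X = rng.uniform(size=(n, d))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=n)  # 2 active inputs

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(d),
                                   length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

length_scales = gp.kernel_.k2.length_scale   # fitted anisotropic length-scales
ranking = np.argsort(length_scales)          # small length-scale = influential input
print("most influential inputs:", ranking[:5])
```
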
Title: A Datamining Approach for Emotions Extraction and Discovering Cricketers performance from Stadium to Sensex,
Abstract: Microblogging sites are the direct platform for the users to express their
views. It has been observed in previous studies that people are inclined to
express their emotions about events (e.g., natural catastrophes, sports,
academics), about persons (actors, sports players, scientists) and about the
places they visit. In this study we focused on a sport event, namely a cricket
tournament, and collected the emotions of fans towards their favorite players
from their tweets. Further, we acquired the stock market performance of the
brands that either endorse the players or sponsor matches in the tournament. We
observed that a player's performance prompts users to express their emotions on
social media, and accordingly found a correlation between player performance
and fans' emotions. Consequently, we found a direct connection between a
player's performance and the endorsing brand's behavior on the stock
market. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Finance"
] |
Title: On Vector ARMA Models Consistent with a Finite Matrix Covariance Sequence,
Abstract: We formulate the so called "VARMA covariance matching problem" and
demonstrate the existence of a solution using the degree theory from
differential topology. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: Review of flexible and transparent thin-film transistors based on zinc oxide and related materials,
Abstract: Flexible and transparent electronics presents a new era of electronic
technologies. Ubiquitous applications involve wearable electronics, biosensors,
flexible transparent displays, radio-frequency identifications (RFIDs),
etc. Zinc oxide (ZnO) and related materials are the most commonly used
inorganic semiconductors in flexible and transparent devices, owing to their
high electrical performance, together with low processing temperature and good
optical transparency. In this paper, we review recent advances in flexible and
transparent thin-film transistors (TFTs) based on ZnO and related materials.
After a brief introduction, the main progress on the preparation of each
component (substrate, electrodes, channel and dielectrics) is summarized and
discussed. Then, the effect of mechanical bending on electrical performance is
highlighted. Finally, we outline the challenges and opportunities for future
investigations. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Evolutionary Data Systems,
Abstract: Anyone in need of a data system today is confronted with numerous complex
options in terms of system architectures, such as traditional relational
databases, NoSQL and NewSQL solutions as well as several sub-categories like
column-stores, row-stores etc. This overwhelming array of choices makes
bootstrapping data-driven applications difficult and time consuming, requiring
expertise often not accessible due to cost issues (e.g., to scientific labs or
small businesses). In this paper, we present the vision of evolutionary data
systems that free systems architects and application designers from the
complex, cumbersome and expensive process of designing and tuning specialized
data system architectures that fit only a single, static application scenario.
Setting up an evolutionary system is as simple as identifying the data. As new
data and queries come in, the system automatically evolves so that its
architecture matches the properties of the incoming workload at all times.
Inspired by the theory of evolution, at any given point in time, an
evolutionary system may employ multiple competing solutions down at the low
level of database architectures -- characterized as combinations of data
layouts, access methods and execution strategies. Over time, "the fittest wins"
and becomes the dominant architecture until the environment (workload) changes.
In our initial prototype, we demonstrate solutions that can seamlessly evolve
(back and forth) between a key-value store and a column-store architecture in
order to adapt to changing workloads. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Optimal Learning for Sequential Decision Making for Expensive Cost Functions with Stochastic Binary Feedbacks,
Abstract: We consider the problem of sequentially making decisions that are rewarded by
"successes" and "failures" which can be predicted through an unknown
relationship that depends on a partially controllable vector of attributes for
each instance. The learner takes an active role in selecting samples from the
instance pool. The goal is to maximize the probability of success in either
offline (training) or online (testing) phases. Our problem is motivated by
real-world applications where observations are time-consuming and/or expensive.
We develop a knowledge gradient policy using an online Bayesian linear
classifier to guide the experiment by maximizing the expected value of
information of labeling each alternative. We provide a finite-time analysis of
the estimated error and show that the maximum likelihood estimator produced by
the KG policy is consistent and asymptotically normal. We also show
that the knowledge gradient policy is asymptotically optimal in an offline
setting. This work further extends the knowledge gradient to the setting of
contextual bandits. We report the results of a series of experiments that
demonstrate its efficiency. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Determination of hysteresis in finite-state random walks using Bayesian cross validation,
Abstract: Consider the problem of modeling hysteresis for finite-state random walks
using higher-order Markov chains. This Letter introduces a Bayesian framework
to determine, from data, the number of prior states of recent history upon
which a trajectory is statistically dependent. The general recommendation is to
use leave-one-out cross validation, using an easily-computable formula that is
provided in closed form. Importantly, Bayes factors using flat model priors are
biased in favor of too-complex a model (more hysteresis) when a large amount of
data is present and the Akaike information criterion (AIC) is biased in favor
of too-sparse a model (less hysteresis) when few data are present. | [
0,
1,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
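
The Letter's closed-form leave-one-out formula is not reproduced here; the sketch below, assuming numpy, only illustrates the generic idea of scoring each candidate amount of hysteresis (Markov order) by a leave-one-out predictive score with add-one (Dirichlet) smoothing and selecting the best-scoring order.

```python
import numpy as np
from collections import Counter

def loo_score(seq, order, n_states):
    # count (context, next-state) pairs for the given amount of hysteresis
    pairs = Counter((tuple(seq[i:i + order]), seq[i + order])
                    for i in range(len(seq) - order))
    ctx = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    score = 0.0
    for (c, _), n_cs in pairs.items():
        # leave-one-out predictive probability of a held-out transition,
        # with add-one smoothing over the n_states possible successors
        p = (n_cs - 1 + 1.0) / (ctx[c] - 1 + n_states)
        score += n_cs * np.log(p)
    return score

rng = np.random.default_rng(1)
seq = list(rng.integers(0, 3, size=2000))        # a memoryless 3-state walk
best = max(range(1, 5), key=lambda k: loo_score(seq, k, 3))
print("selected order:", best)                   # no hysteresis expected here
```
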
Title: Effective mean values of complex multiplicative functions (Moyennes effectives de fonctions multiplicatives complexes),
Abstract: We establish effective mean-value estimates for a wide class of
multiplicative arithmetic functions, thereby providing (essentially optimal)
quantitative versions of Wirsing's classical estimates and extending those of
Halász. Several applications are derived, including: estimates for the
difference of mean-values of so-called pretentious functions, local laws for
the distribution of prime factors in an arbitrary set, and weighted
distribution of additive functions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Magnetic Excitations and Continuum of a Field-Induced Quantum Spin Liquid in $α$-RuCl$_3$,
Abstract: We report on terahertz spectroscopy of quantum spin dynamics in
$\alpha$-RuCl$_3$, a system proximate to the Kitaev honeycomb model, as a
function of temperature and magnetic field. An extended magnetic continuum
develops below the structural phase transition at $T_{s2}=62$K. With the onset
of a long-range magnetic order at $T_N=6.5$K, spectral weight is transferred to
a well-defined magnetic excitation at $\hbar \omega_1 = 2.48$meV, which is
accompanied by a higher-energy band at $\hbar \omega_2 = 6.48$meV. Both
excitations soften in magnetic field, signaling a quantum phase transition at
$B_c=7$T where we find a broad continuum dominating the dynamical response.
Above $B_c$, the long-range order is suppressed, and on top of the continuum,
various emergent magnetic excitations evolve. These excitations follow clear
selection rules and exhibit distinct field dependencies, characterizing the
dynamical properties of the field-induced quantum spin liquid. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Improving Development Practices through Experimentation: an Industrial TDD Case,
Abstract: Test-Driven Development (TDD), an agile development approach that enforces
the construction of software systems by means of successive micro-iterative
testing coding cycles, has been widely claimed to increase external software
quality. In view of this, some managers at Paf (a Nordic gaming entertainment
company) were interested in knowing how TDD would perform at their premises.
Eventually, if TDD outperformed their traditional way of coding (i.e., YW,
short for Your Way), it would be possible to switch to TDD considering the
empirical evidence achieved at the company level. We conduct an experiment at
Paf to evaluate the performance of TDD, YW and the reverse approach of TDD
(i.e., ITL, short for Iterative-Test Last) on external quality. TDD outperforms
YW and ITL at Paf. Despite the encouraging results, we cannot recommend Paf to
immediately adopt TDD as the difference in performance between YW and TDD is
small. However, as TDD looks promising at Paf, we suggest to move some
developers to TDD and to run a future experiment to compare the performance of
TDD and YW. TDD slightly outperforms ITL in controlled experiments for TDD
novices. However, more industrial experiments are still needed to evaluate the
performance of TDD in real-life contexts. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Divisor-sum fibers,
Abstract: Let $s(\cdot)$ denote the sum-of-proper-divisors function, that is, $s(n) =
\sum_{d\mid n,~d<n}d$. Erdős-Granville-Pomerance-Spiro conjectured that for
any set $\mathcal{A}$ of asymptotic density zero, the preimage set
$s^{-1}(\mathcal{A})$ also has density zero. We prove a weak form of this
conjecture: If $\epsilon(x)$ is any function tending to $0$ as $x\to\infty$,
and $\mathcal{A}$ is a set of integers of cardinality at most
$x^{\frac12+\epsilon(x)}$, then the number of integers $n\le x$ with $s(n) \in
\mathcal{A}$ is $o(x)$, as $x\to\infty$. In particular, the EGPS conjecture
holds for infinite sets with counting function $O(x^{\frac12 + \epsilon(x)})$.
We also disprove a hypothesis from the same paper of EGPS by showing that for
any positive numbers $\alpha$ and $\epsilon$, there are integers $n$ with
arbitrarily many $s$-preimages lying between $\alpha(1-\epsilon)n$ and
$\alpha(1+\epsilon)n$. Finally, we make some remarks on solutions $n$ to
congruences of the form $\sigma(n) \equiv a\pmod{n}$, proposing a modification
of a conjecture appearing in recent work of the first two authors. We also
improve a previous upper bound for the number of solutions $n \leq x$, making
it uniform in $a$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
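
A small sketch in plain Python of the objects in this abstract: the sum-of-proper-divisors function s(n) and the sizes of its fibers s^{-1}({m}), with preimages counted only among n up to a fixed limit (an assumption of the illustration, not of the theorem).

```python
from collections import defaultdict

def s(n):
    # sum of proper divisors, e.g. s(12) = 1 + 2 + 3 + 4 + 6 = 16
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

limit = 10000
fibers = defaultdict(list)
for n in range(2, limit + 1):
    fibers[s(n)].append(n)

# values m <= limit with the largest number of s-preimages found below the limit
popular = sorted(((len(v), m) for m, v in fibers.items() if m <= limit), reverse=True)[:5]
print(popular)
```
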
Title: New zirconium hydrides predicted by structure search method based on first principles calculations,
Abstract: The formation of precipitated zirconium (Zr) hydrides is closely related to
the hydrogen embrittlement problem for the cladding materials of pressured
water reactors (PWR). In this work, we systematically investigated the crystal
structures of zirconium hydride (ZrHx) with different hydrogen concentrations
(x = 0~2, atomic ratio) by combining the basin hopping algorithm with first
principles calculations. We conclude that the P3m1 {\zeta}-ZrH0.5 is
dynamically unstable, while a novel dynamically stable P3m1 ZrH0.5 structure
was discovered in the structure search. The stability of bistable P42/nnm
ZrH1.5 structures and I4/mmm ZrH2 structures is also revisited. We find that
the P42/nnm (c/a > 1) ZrH1.5 is dynamically unstable, while the I4/mmm (c/a =
1.57) ZrH2 is dynamically stable. The P42/nnm (c/a < 1) ZrH1.5 might be a key
intermediate phase for the transition of {\gamma}->{\delta}->{\epsilon} phases.
Additionally, by using thermodynamic simulations, we find that
{\delta}-ZrH1.5 is the most stable structure at high temperature, while ZrH2 is
the most stable hydride at low temperature. A slow cooling process will promote
the formation of {\delta}-ZrH1.5, while a fast cooling process will promote the
formation of {\gamma}-ZrH. These results may help to understand the phase
transitions of zirconium hydrides. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Studying Magnetic Fields using Low-frequency Pulsar Observations,
Abstract: Low-frequency polarisation observations of pulsars, facilitated by
next-generation radio telescopes, provide powerful probes of astrophysical
plasmas that span many orders of magnitude in magnetic field strength and
scale: from pulsar magnetospheres to intervening magneto-ionic plasmas
including the ISM and the ionosphere. Pulsar magnetospheres with teragauss
field strengths can be explored through their numerous emission phenomena
across multiple frequencies, the mechanism behind which remains elusive.
Precise dispersion and Faraday rotation measurements towards a large number of
pulsars probe the three-dimensional large-scale (and eventually small-scale)
structure of the Galactic magnetic field, which plays a role in many
astrophysical processes, but is not yet well understood, especially towards the
Galactic halo. We describe some results and ongoing work from the Low Frequency
Array (LOFAR) and the Murchison Widefield Array (MWA) radio telescopes in these
areas. These and other pathfinder and precursor telescopes have reinvigorated
low-frequency science and build towards the Square Kilometre Array (SKA), which
will make significant advancements in studies of astrophysical magnetic fields
in the next 50 years. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Optimal hedging under fast-varying stochastic volatility,
Abstract: In a market with a rough or Markovian mean-reverting stochastic volatility
there is no perfect hedge. Here it is shown how various delta-type hedging
strategies perform and can be evaluated in such markets. A precise
characterization of the hedging cost, the replication cost caused by the
volatility fluctuations, is presented in an asymptotic regime of rapid mean
reversion for the volatility fluctuations. The optimal dynamic asset based
hedging strategy in the considered regime is identified as the so-called
`practitioners' delta hedging scheme. It is moreover shown that the
performances of the delta-type hedging schemes are essentially independent of
the regularity of the volatility paths in the considered regime and that the
hedging costs are related to a vega risk martingale whose magnitude is
proportional to a new market risk parameter. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance",
"Statistics"
] |
Title: Personal Food Computer: A new device for controlled-environment agriculture,
Abstract: Due to their interdisciplinary nature, devices for controlled-environment
agriculture have the possibility to turn into ideal tools not only to conduct
research on plant phenology but also to create curricula in a wide range of
disciplines. Controlled-environment devices are increasing their
functionalities as well as improving their accessibility. Traditionally,
building one of these devices from scratch implies knowledge in fields such as
mechanical engineering, digital electronics, programming, and energy
management. However, the requirements of an effective controlled environment
device for personal use brings new constraints and challenges. This paper
presents the OpenAg Personal Food Computer (PFC); a low cost desktop size
platform, which not only targets plant phenology researchers but also
hobbyists, makers, and teachers from elementary to high-school levels (K-12).
The PFC is completely open-source and it is intended to become a tool that can
be used for collective data sharing and plant growth analysis. Thanks to its
modular design, the PFC can be used in a large spectrum of activities. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Quenched Noise and Nonlinear Oscillations in Bistable Multiscale Systems,
Abstract: Nonlinear oscillators are a key modelling tool in many applications. The
influence of annealed noise on nonlinear oscillators has been studied
intensively. It can induce effects in nonlinear oscillators not present in the
deterministic setting. Yet, there is no theory regarding the quenched noise
scenario of random parameters sampled on fixed time intervals, although this
situation is often a lot more natural. Here we study a paradigmatic nonlinear
oscillator of van-der-Pol/FitzHugh-Nagumo type under quenched noise as a
piecewise-deterministic Markov process. There are several interesting effects
such as period shifts and new different trapped types of small-amplitude
oscillations, which can be captured analytically. Furthermore, we numerically
discover quenched resonance and show that it differs significantly from
previous finite-noise optimality resonance effects. This demonstrates that
quenched oscillators can be viewed as a new building block of nonlinear
dynamics. | [
0,
1,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
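
A minimal numerical sketch, assuming numpy, of the quenched-noise setting described above: a FitzHugh-Nagumo-type oscillator whose drive is not a continuous noise process but is resampled once per fixed time window and held constant in between, i.e. a piecewise-deterministic system. Parameter values and the window length are illustrative, not taken from the paper.

```python
import numpy as np

def simulate(T=200.0, dt=1e-3, window=5.0, eps=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, per_win = int(T / dt), int(window / dt)
    v, w = -1.0, -0.5
    I = rng.normal(0.5, 0.3)                # quenched parameter, frozen per window
    vs = np.empty(n)
    for k in range(n):
        if k % per_win == 0:
            I = rng.normal(0.5, 0.3)        # resampled only on the fixed time grid
        dv = (v - v ** 3 / 3.0 - w + I) / eps
        dw = v + 0.7 - 0.8 * w
        v, w = v + dt * dv, w + dt * dw     # explicit Euler step
        vs[k] = v
    return vs

trace = simulate()
print("min/max of v:", trace.min(), trace.max())
```
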
Title: Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning,
Abstract: The recently proposed Multi-Layer Convolutional Sparse Coding (ML-CSC) model,
consisting of a cascade of convolutional sparse layers, provides a new
interpretation of Convolutional Neural Networks (CNNs). Under this framework,
the computation of the forward pass in a CNN is equivalent to a pursuit
algorithm aiming to estimate the nested sparse representation vectors -- or
feature maps -- from a given input signal. Despite having served as a pivotal
connection between CNNs and sparse modeling, a deeper understanding of the
ML-CSC is still lacking: there are no pursuit algorithms that can serve this
model exactly, nor are there conditions to guarantee a non-empty model. While
one can easily obtain signals that approximately satisfy the ML-CSC
constraints, it remains unclear how to simply sample from the model and, more
importantly, how one can train the convolutional filters from real data.
In this work, we propose a sound pursuit algorithm for the ML-CSC model by
adopting a projection approach. We provide new and improved bounds on the
stability of the solution of such pursuit and we analyze different practical
alternatives to implement this in practice. We show that the training of the
filters is essential to allow for non-trivial signals in the model, and we
derive an online algorithm to learn the dictionaries from real data,
effectively resulting in cascaded sparse convolutional layers. Last, but not
least, we demonstrate the applicability of the ML-CSC model for several
applications in an unsupervised setting, providing competitive results. Our
work represents a bridge between matrix factorization, sparse dictionary
learning and sparse auto-encoders, and we analyze these connections in detail. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Autonomous Vehicle Speed Control for Safe Navigation of Occluded Pedestrian Crosswalk,
Abstract: Both humans and the sensors on an autonomous vehicle have limited sensing
capabilities. When these limitations coincide with scenarios involving
vulnerable road users, it becomes important to account for these limitations in
the motion planner. For the scenario of an occluded pedestrian crosswalk, the
speed of the approaching vehicle should be a function of the amount of
uncertainty on the roadway. In this work, the longitudinal controller is
formulated as a partially observable Markov decision process and dynamic
programming is used to compute the control policy. The control policy scales
the speed profile to be used by a model predictive steering controller. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
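
A sketch, assuming numpy, of the dynamic-programming idea behind the controller: value iteration over a coarse grid of (distance to crosswalk, speed) states with three speed-change actions. This is a fully observed (MDP) simplification with made-up costs, not the paper's POMDP formulation with occlusion-dependent observations.

```python
import numpy as np

dists = np.arange(0, 51, 5)           # metres to the crosswalk
speeds = np.arange(0, 16, 1)          # m/s (index equals speed here)
actions = [-1, 0, 1]                  # change in speed per 1 s step
gamma = 0.95

def cost(d, v):
    # penalize speed close to the (possibly occluded) crosswalk,
    # and penalize crawling when still far away
    return v ** 2 / (d + 1.0) + 0.1 * max(0.0, 10.0 - v) * (d > 30)

V = np.zeros((len(dists), len(speeds)))
for _ in range(200):                  # value iteration sweeps
    V_new = np.empty_like(V)
    for i, d in enumerate(dists):
        for j, v in enumerate(speeds):
            q = []
            for a in actions:
                v2 = int(np.clip(v + a, 0, speeds[-1]))
                d2 = max(0.0, d - v2)
                i2 = int(np.argmin(np.abs(dists - d2)))
                q.append(cost(d, v) + gamma * V[i2, v2])
            V_new[i, j] = min(q)
    V = V_new

# speed with the lowest cost-to-go at each distance (an indicative profile)
print(dict(zip(dists.tolist(), speeds[np.argmin(V, axis=1)].tolist())))
```
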
Title: Statistical inference methods for cumulative incidence function curves at a fixed point in time,
Abstract: Competing risks data arise frequently in clinical trials. When the
proportional subdistribution hazard assumption is violated or two cumulative
incidence function (CIF) curves cross, rather than comparing the overall
treatment effects, researchers may be interested in focusing on a comparison of
clinical utility at some fixed time points. This paper extends a series of tests
that are constructed based on a pseudo-value regression technique or different
transformation functions for CIFs and their variances based on Gaynor's or
Aalen's work, and the differences among CIFs at a given time point are
compared. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics"
] |
Title: XSAT of Linear CNF Formulas,
Abstract: Open questions with respect to the computational complexity of linear CNF
formulas in connection with regularity and uniformity are addressed. In
particular it is proven that any l-regular monotone CNF formula is
XSAT-unsatisfiable if its number of clauses m is not a multiple of l. For exact
linear formulas one finds surprisingly that l-regularity implies k-uniformity,
with m = 1 + k(l-1), and the allowed k-values obey k(k-1) = 0 (mod l). Then the
computational complexity of the class of monotone exact linear and l-regular
CNF formulas with respect to XSAT can be determined: XSAT-satisfiability is
either trivial, if m is not a multiple of l, or it can be decided in
sub-exponential time, namely O(exp(n^(1/2))). Sub-exponential time behaviour for
the wider class of regular and uniform linear CNF formulas can be shown for
certain subclasses. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
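
A small brute-force sketch in plain Python that makes the XSAT notion concrete for monotone formulas: an assignment is XSAT-satisfying iff every clause contains exactly one true variable. This is only a definition checker on a tiny l-regular, k-uniform linear example; it is not the sub-exponential procedure discussed in the abstract.

```python
from itertools import product

def xsat_monotone(clauses, n_vars):
    for bits in product([0, 1], repeat=n_vars):
        if all(sum(bits[v] for v in clause) == 1 for clause in clauses):
            return bits
    return None

# A 2-regular, 2-uniform linear formula on 4 variables (any two clauses share
# at most one variable); m = 4 clauses is a multiple of l = 2, so XSAT
# satisfiability is not ruled out by the counting criterion.
clauses = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(xsat_monotone(clauses, 4))   # prints a satisfying assignment, e.g. (0, 1, 0, 1)
```
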
Title: Stability of Conditional Sequential Monte Carlo,
Abstract: The particle Gibbs (PG) sampler is a Markov Chain Monte Carlo (MCMC)
algorithm, which uses an interacting particle system to perform the Gibbs
steps. Each Gibbs step consists of simulating a particle system conditioned on
one particle path. It relies on a conditional Sequential Monte Carlo (cSMC)
method to create the particle system. We propose a novel interpretation of the
cSMC algorithm as a perturbed Sequential Monte Carlo (SMC) method and apply
telescopic decompositions developed for the analysis of SMC algorithms
\cite{delmoral2004} to derive a bound for the distance between the expected
sampled path from cSMC and the target distribution of the MCMC algorithm. This
can be used to get a uniform ergodicity result. In particular, we can show that
the mixing rate of cSMC can be kept constant by increasing the number of
particles linearly with the number of observations. Based on our decomposition,
we also prove a central limit theorem for the cSMC Algorithm, which cannot be
done using the approaches in \cite{Andrieu2013} and \cite{Lindsten2014}. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
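
A minimal sketch, assuming numpy, of one conditional SMC sweep for a scalar linear-Gaussian state-space model: one particle slot is pinned to the reference trajectory while the remaining particles are propagated and resampled, and a new trajectory is drawn at the end. Multinomial resampling and no ancestor sampling are assumed; the model and constants are illustrative only.

```python
import numpy as np

def csmc(y, x_ref, N=50, phi=0.9, q=1.0, r=1.0, seed=0):
    rng = np.random.default_rng(seed)
    T = len(y)
    X = np.zeros((T, N))
    A = np.zeros((T, N), dtype=int)               # ancestor indices
    X[0, :N - 1] = rng.normal(0.0, np.sqrt(q), N - 1)
    X[0, N - 1] = x_ref[0]                        # slot N-1 carries the reference path
    for t in range(T):
        logw = -0.5 * (y[t] - X[t]) ** 2 / r      # Gaussian observation log-weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        if t + 1 < T:
            A[t + 1, :N - 1] = rng.choice(N, size=N - 1, p=w)
            A[t + 1, N - 1] = N - 1
            X[t + 1, :N - 1] = phi * X[t, A[t + 1, :N - 1]] \
                               + rng.normal(0.0, np.sqrt(q), N - 1)
            X[t + 1, N - 1] = x_ref[t + 1]
    k = rng.choice(N, p=w)                        # draw one index by the final weights
    path = np.zeros(T)
    for t in reversed(range(T)):                  # trace the ancestral lineage back
        path[t] = X[t, k]
        k = A[t, k]
    return path

T = 100
x_true = np.cumsum(np.random.default_rng(1).normal(size=T)) * 0.1
y = x_true + np.random.default_rng(2).normal(size=T)
print(csmc(y, x_ref=np.zeros(T))[:5])             # next reference path in a PG sweep
```
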
Title: Diffusion along chains of normally hyperbolic cylinders,
Abstract: The present paper is part of a series of articles dedicated to the existence
of Arnold diffusion for cusp-residual perturbations of Tonelli Hamiltonians on
$\mathbb{A}^3$. Our goal here is to construct an abstract geometric framework
that can be used to prove the existence of diffusing orbits in the so-called a
priori stable setting, once the preliminary geometric reductions are performed.
Our framework also applies, rather directly, to the a priori unstable setting.
The main geometric objects of interest are $3$-dimensional normally
hyperbolic invariant cylinders with boundary, which in particular admit
well-defined stable and unstable manifolds. These enable us to define, in our
setting, chains of cylinders, i.e., finite, ordered families of cylinders in
which each cylinder admits homoclinic connections, and any two consecutive
elements in the family admit heteroclinic connections.
Our main result is the existence of diffusing orbits drifting along such
chains, under precise conditions on the dynamics on the cylinders, and on their
homoclinic and heteroclinic structure. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Interior transmission eigenvalue problems on compact manifolds with boundary conductivity parameters,
Abstract: In this paper, we consider an interior transmission eigenvalue (ITE) problem
on some compact $C^{\infty }$-Riemannian manifolds with a common smooth
boundary. In particular, these manifolds may have different topologies, but we
impose some conditions of Riemannian metrics, indices of refraction and
boundary conductivity parameters on the boundary. Then we prove the
discreteness of the set of ITEs, the existence of infinitely many ITEs, and its
Weyl type lower bound. For our settings, we can adopt the argument by
Lakshtanov and Vainberg, considering the Dirichlet-to-Neumann map. As an
application, we derive the existence of non-scattering energies for
time-harmonic acoustic equations. For the sake of simplicity, we consider the
scattering theory on the Euclidean space. However, the argument is applicable
for certain kinds of non-compact manifolds with ends on which we can define the
scattering matrix. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: A superpolynomial lower bound for the size of non-deterministic complement of an unambiguous automaton,
Abstract: Unambiguous non-deterministic finite automata have intermediate expressive
power and succinctness between deterministic and non-deterministic automata. It
has been conjectured that every unambiguous non-deterministic one-way finite
automaton (1UFA) recognizing some language L can be converted into a 1UFA
recognizing the complement of the original language L with polynomial increase
in the number of states. We disprove this conjecture by presenting a family of
1UFAs on a single-letter alphabet such that recognizing the complements of the
corresponding languages requires superpolynomial increase in the number of
states even for generic non-deterministic one-way finite automata. We also note
that both the languages and their complements can be recognized by sweeping
deterministic automata with a linear increase in the number of states. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Identifying Clickbait Posts on Social Media with an Ensemble of Linear Models,
Abstract: The purpose of a clickbait is to make a link so appealing that people click
on it. However, the content of such articles is often not related to the title,
shows poor quality, and at the end leaves the reader unsatisfied.
To help the readers, the organizers of the clickbait challenge
(this http URL) asked the participants to build a machine
learning model for scoring articles with respect to their "clickbaitness".
In this paper we propose to solve the clickbait problem with an ensemble of
Linear SVM models, and our approach was tested successfully in the challenge:
it achieved a mean squared error of 0.036 and ranked 3rd among all the solutions
to the contest. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
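
A hedged sketch, assuming scikit-learn, of the general recipe: a few linear regressors trained on different text feature views and averaged into one "clickbaitness" score in [0, 1]. The feature views, models and toy data are illustrative assumptions, not the authors' exact challenge configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVR
from sklearn.linear_model import Ridge

titles = ["You won't believe what happened next",
          "Parliament passes the annual budget",
          "10 tricks doctors don't want you to know",
          "Central bank keeps interest rates unchanged"]
scores = np.array([0.9, 0.1, 0.8, 0.05])      # target "clickbaitness" in [0, 1]

views = [TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
         TfidfVectorizer(analyzer="char", ngram_range=(3, 5))]
models = [LinearSVR(), Ridge(alpha=1.0)]

preds = []
for vec, model in zip(views, models):
    X = vec.fit_transform(titles)
    model.fit(X, scores)
    preds.append(np.clip(model.predict(X), 0.0, 1.0))

ensemble = np.mean(preds, axis=0)             # simple average of the linear models
print(dict(zip(titles, ensemble.round(2))))
```
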
Title: A Realistic Dataset for the Smart Home Device Scheduling Problem for DCOPs,
Abstract: The field of Distributed Constraint Optimization has gained momentum in
recent years thanks to its ability to address various applications related to
multi-agent cooperation. While techniques to solve Distributed Constraint
Optimization Problems (DCOPs) are abundant and have matured substantially since
the field's inception, the number of realistic DCOP applications and benchmarks
used to assess the performance of DCOP algorithms is lagging behind. Against
this background we (i) introduce the Smart Home Device Scheduling (SHDS)
problem, which describes the problem of coordinating smart device schedules
across multiple homes as a multi-agent system, (ii) detail the physical models
adopted to simulate smart sensors, smart actuators, and home environments, and
(iii) introduce a DCOP realistic benchmark for SHDS problems. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: The extra scalar degrees of freedom from the two Higgs doublet model for dark energy,
Abstract: In principle a minimal extension of the standard model of Particle Physics,
the two Higgs doublet model, can be invoked to explain the scalar field
responsible of dark energy. The two doublets are in general mixed. After
diagonalization, the lightest CP-even Higgs and CP-odd Higgs are jointly taken
to be the dark energy candidate. The dark energy obtained from Higgs fields in
this case is indistinguishable from the cosmological constant. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On the tail behavior of a class of multivariate conditionally heteroskedastic processes,
Abstract: Conditions for geometric ergodicity of multivariate autoregressive
conditional heteroskedasticity (ARCH) processes, with the so-called BEKK (Baba,
Engle, Kraft, and Kroner) parametrization, are considered. We show for a class
of BEKK-ARCH processes that the invariant distribution is regularly varying. In
order to account for the possibility of different tail indices of the
marginals, we consider the notion of vector scaling regular variation, in the
spirit of Perfekt (1997, Advances in Applied Probability, 29, pp. 138-164). The
characterization of the tail behavior of the processes is used for deriving the
asymptotic properties of the sample covariance matrices. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
Title: On the Complexity of Detecting Convexity over a Box,
Abstract: It has recently been shown that the problem of testing global convexity of
polynomials of degree four is strongly NP-hard, answering an open question of
N.Z. Shor. This result is minimal in the degree of the polynomial when global
convexity is of concern. In a number of applications however, one is interested
in testing convexity only over a compact region, most commonly a box (i.e.,
hyper-rectangle). In this paper, we show that this problem is also strongly
NP-hard, in fact for polynomials of degree as low as three. This result is
minimal in the degree of the polynomial and in some sense justifies why
convexity detection in nonlinear optimization solvers is limited to quadratic
functions or functions with special structure. As a byproduct, our proof shows
that the problem of testing whether all matrices in an interval family are
positive semidefinite is strongly NP-hard. This problem, which was previously
shown to be (weakly) NP-hard by Nemirovski, is of independent interest in the
theory of robust control. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Variance-Reduced Stochastic Learning under Random Reshuffling,
Abstract: Several useful variance-reduced stochastic gradient algorithms, such as SVRG,
SAGA, Finito, and SAG, have been proposed to minimize empirical risks with
linear convergence properties to the exact minimizer. The existing convergence
results assume uniform data sampling with replacement. However, it has been
observed in related works that random reshuffling can deliver superior
performance over uniform sampling and, yet, no formal proofs or guarantees of
exact convergence exist for variance-reduced algorithms under random
reshuffling. This paper makes two contributions. First, it resolves this open
issue and provides the first theoretical guarantee of linear convergence under
random reshuffling for SAGA; the argument is also adaptable to other
variance-reduced algorithms. Second, under random reshuffling, the paper
proposes a new amortized variance-reduced gradient (AVRG) algorithm with
constant storage requirements compared to SAGA and with balanced gradient
computations compared to SVRG. AVRG is also shown analytically to converge
linearly. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
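
A minimal sketch, assuming numpy, of the sampling scheme under study: SAGA run under random reshuffling, i.e. the data are permuted once per epoch and visited without replacement instead of being sampled uniformly with replacement. The least-squares risk and step size are only a convenient test problem, not the paper's analysis setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

w = np.zeros(d)
grad_table = np.zeros((n, d))          # stored per-sample gradients
grad_avg = np.zeros(d)
mu = 0.01                              # step size

def grad_i(w, i):
    return (A[i] @ w - b[i]) * A[i]    # gradient of 0.5 * (a_i^T w - b_i)^2

for epoch in range(50):
    for i in rng.permutation(n):       # random reshuffling: no replacement
        g = grad_i(w, i)
        w -= mu * (g - grad_table[i] + grad_avg)   # SAGA correction
        grad_avg += (g - grad_table[i]) / n
        grad_table[i] = g

print("final risk:", 0.5 * np.mean((A @ w - b) ** 2))
```
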
Title: CELIO: An application development framework for interactive spaces,
Abstract: Developing applications for interactive space is different from developing
cross-platform applications for personal computing. Input, output, and
architectural variations in each interactive space introduce big overhead in
terms of cost and time for developing, deploying and maintaining applications
for interactive spaces. Often, these applications become one-off experiences
tied to the spaces in which they are deployed. To alleviate this problem and
enable rapid responsive space design, similar to responsive web design, we
present the CELIO application development framework for interactive spaces.
The framework is microservices-based and neatly decouples application and
design specifications
from hardware and architecture specifications of an interactive space. In this
paper, we describe this framework and its implementation details. Also, we
briefly discuss the use cases developed using this framework. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Active particles in periodic lattices,
Abstract: Both natural and artificial small-scale swimmers may often self-propel in
environments subject to complex geometrical constraints. While most past
theoretical work on low-Reynolds number locomotion addressed idealised
geometrical situations, not much is known on the motion of swimmers in
heterogeneous environments. As a first theoretical model, we investigate
numerically the behaviour of a single spherical micro-swimmer located in an
infinite, periodic body-centred cubic lattice consisting of rigid inert spheres
of the same size as the swimmer. Running a large number of simulations we
uncover the phase diagram of possible trajectories as a function of the
strength of the swimming actuation and the packing density of the lattice. We
then use hydrodynamic theory to rationalise our computational results and show
in particular how the far-field nature of the swimmer (pusher vs. puller)
governs even the behaviour at high volume fractions. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Fractional Abelian topological phases of matter for fermions in two-dimensional space,
Abstract: These notes constitute chapter 7 from "l'Ecole de Physique des Houches"
Session CIII, August 2014 dedicated to Topological Aspects of Condensed matter
physics. The tenfold way in quasi-one-dimensional space is presented. The
method of chiral Abelian bosonization is reviewed. It is then applied to the
stability analysis for the edge theory in symmetry class AII, and for the
construction of two-dimensional topological phases from coupled wires. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A Comprehensive Framework for Dynamic Bike Rebalancing in a Large Bike Sharing Network,
Abstract: Bike sharing is a vital component of a modern multi-modal transportation
system. However, its implementation can lead to bike supply-demand imbalance
due to fluctuating spatial and temporal demands. This study proposes a
comprehensive framework to develop optimal dynamic bike rebalancing strategies
in a large bike sharing network. It consists of three components, including a
station-level pick-up/drop-off prediction model, station clustering model, and
capacitated location-routing optimization model. For the first component, we
propose a powerful deep learning model called graph convolution neural network
model (GCNN) with data-driven graph filter (DDGF), which can automatically
learn the hidden spatial-temporal correlations among stations to provide more
accurate predictions; for the second component, we apply a graph clustering
algorithm, labeled the Community Detection algorithm, to cluster stations that
are geographically close to each other and have a small net demand gap;
last, a capacitated location-routing problem (CLRP) is solved to deal with the
combination of two types of decision variables: the locations of bike
distribution centers and the design of distribution routes for each cluster. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
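
A toy sketch, assuming numpy, of the first component's key idea: derive a data-driven graph filter from station demand correlations and apply one graph-convolution layer to per-station features. The real GCNN-DDGF learns its graph filter end-to-end with the prediction loss; this forward pass only illustrates the shape of the computation with made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_hours, n_feat = 30, 500, 8
demand = rng.poisson(3.0, size=(n_stations, n_hours)).astype(float)

# data-driven "graph filter": thresholded correlation between station demands
C = np.corrcoef(demand)
Adj = (np.abs(C) > 0.1).astype(float)
np.fill_diagonal(Adj, 1.0)
D_inv_sqrt = np.diag(1.0 / np.sqrt(Adj.sum(axis=1)))
A_hat = D_inv_sqrt @ Adj @ D_inv_sqrt           # symmetric normalization

# one graph-convolution layer: H' = ReLU(A_hat @ H @ W)
H = rng.normal(size=(n_stations, n_feat))       # e.g. recent pick-up/drop-off counts
W = 0.1 * rng.normal(size=(n_feat, 16))
H_out = np.maximum(A_hat @ H @ W, 0.0)
print(H_out.shape)                              # (30, 16) hidden station features
```
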
Title: On Thin Air Reads: Towards an Event Structures Model of Relaxed Memory,
Abstract: To model relaxed memory, we propose confusion-free event structures over an
alphabet with a justification relation. Executions are modeled by justified
configurations, where every read event has a justifying write event.
Justification alone is too weak a criterion, since it allows cycles of the kind
that result in so-called thin-air reads. Acyclic justification forbids such
cycles, but also invalidates event reorderings that result from compiler
optimizations and dynamic instruction scheduling. We propose the notion of
well-justification, based on a game-like model, which strikes a middle ground.
We show that well-justified configurations satisfy the DRF theorem: in any
data-race free program, all well-justified configurations are sequentially
consistent. We also show that rely-guarantee reasoning is sound for
well-justified configurations, but not for justified configurations. For
example, well-justified configurations are type-safe.
Well-justification allows many, but not all reorderings performed by relaxed
memory. In particular, it fails to validate the commutation of independent
reads. We discuss variations that may address these shortcomings. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A conjecture on $C$-matrices of cluster algebras,
Abstract: For a skew-symmetrizable cluster algebra $\mathcal A_{t_0}$ with principal
coefficients at $t_0$, we prove that each seed $\Sigma_t$ of $\mathcal A_{t_0}$
is uniquely determined by its {\bf C-matrix}, which was proposed by Fomin and
Zelevinsky in \cite{FZ3} as a conjecture. Our proof is based on the fact that
the positivity of cluster variables and sign-coherence of $c$-vectors hold for
$\mathcal A_{t_0}$, which was actually verified in \cite{GHKK}. More discussion
is given in the sign-skew-symmetric case so as to obtain a conclusion as weak
version of the conjecture in this general case. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Attacking Similarity-Based Link Prediction in Social Networks,
Abstract: Link prediction is one of the fundamental problems in computational social
science. A particularly common means to predict existence of unobserved links
is via structural similarity metrics, such as the number of common neighbors;
node pairs with higher similarity are thus deemed more likely to be linked.
However, a number of applications of link prediction, such as predicting links
in gang or terrorist networks, are adversarial, with another party incentivized
to minimize its effectiveness by manipulating observed information about the
network. We offer a comprehensive algorithmic investigation of the problem of
attacking similarity-based link prediction through link deletion, focusing on
two broad classes of such approaches, one which uses only local information
about target links, and another which uses global network information. While we
show several variations of the general problem to be NP-Hard for both local and
global metrics, we exhibit a number of well-motivated special cases which are
tractable. Additionally, we provide principled and empirically effective
algorithms for the intractable cases, in some cases proving worst-case
approximation guarantees. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
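
A small sketch, assuming networkx, of the local attack setting: score the target non-edge by its number of common neighbors, then greedily delete observed edges adjacent to the target pair to push that score down. The greedy rule is purely illustrative; the paper's contribution is the complexity analysis and principled algorithms for several problem variants.

```python
import networkx as nx

G = nx.karate_club_graph()
u, v = 0, 33                                    # target pair the attacker wants hidden

def cn_score(G, u, v):
    return len(list(nx.common_neighbors(G, u, v)))

budget = 3
for _ in range(budget):
    candidates = [(u, w) for w in nx.common_neighbors(G, u, v)] + \
                 [(v, w) for w in nx.common_neighbors(G, u, v)]
    if not candidates:
        break
    # delete the adjacent edge whose removal lowers the similarity the most
    best = min(candidates,
               key=lambda e: cn_score(nx.restricted_view(G, [], [e]), u, v))
    G.remove_edge(*best)
    print("removed", best, "-> common neighbors now:", cn_score(G, u, v))
```
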
Title: Stabilization and control of Majorana bound states with elongated skyrmions,
Abstract: We show that elongated magnetic skyrmions can host Majorana bound states in a
proximity-coupled two-dimensional electron gas sandwiched between a chiral
magnet and an $s$-wave superconductor. Our proposal requires stable skyrmions
with unit topological charge, which can be realized in a wide range of
multilayer magnets, and allows quantum information transfer by using standard
methods in spintronics via skyrmion motion. We also show how braiding
operations can be realized in our proposal. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Measure-geometric Laplacians for discrete distributions,
Abstract: In 2002 Freiberg and Zähle introduced and developed a harmonic calculus for
measure-geometric Laplacians associated to continuous distributions. We show
their theory can be extended to encompass distributions with finite support and
give a matrix representation for the resulting operators. In the case of a
uniform discrete distribution we make use of this matrix representation to
explicitly determine the eigenvalues and the eigenfunctions of the associated
Laplacian. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Semantics Comparison Workbench for a Concurrent, Asynchronous, Distributed Programming Language,
Abstract: A number of high-level languages and libraries have been proposed that offer
novel and simple to use abstractions for concurrent, asynchronous, and
distributed programming. The execution models that realise them, however, often
change over time---whether to improve performance, or to extend them to new
language features---potentially affecting behavioural and safety properties of
existing programs. This is exemplified by SCOOP, a message-passing approach to
concurrent object-oriented programming that has seen multiple changes proposed
and implemented, with demonstrable consequences for an idiomatic usage of its
core abstraction. We propose a semantics comparison workbench for SCOOP with
fully and semi-automatic tools for analysing and comparing the state spaces of
programs with respect to different execution models or semantics. We
demonstrate its use in checking the consistency of properties across semantics
by applying it to a set of representative programs, and highlighting a
deadlock-related discrepancy between the principal execution models of SCOOP.
Furthermore, we demonstrate the extensibility of the workbench by generalising
the formalisation of an execution model to support recently proposed extensions
for distributed programming. Our workbench is based on a modular and
parameterisable graph transformation semantics implemented in the GROOVE tool.
We discuss how graph transformations are leveraged to atomically model
intricate language abstractions, how the visual yet algebraic nature of the
model can be used to ascertain soundness, and highlight how the approach could
be applied to similar languages. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Adaptive Network Coding Schemes for Satellite Communications,
Abstract: In this paper, we propose two novel physical layer aware adaptive network
coding and coded modulation schemes for time variant channels. The proposed
schemes have been applied to different satellite communications scenarios with
different Round Trip Times (RTT). Compared to adaptive network coding, and
classical non-adaptive network coding schemes for time variant channels, as
benchmarks, the proposed schemes demonstrate that adaptation of packet
transmission based on the channel variation and corresponding erasures allows
for significant gains in terms of throughput, delay and energy efficiency. We
shed light on the trade-off between energy efficiency and delay-throughput
gains, demonstrating that conservative adaptive approaches that favor less
transmission under high erasures might cause higher delay and smaller
throughput gains in comparison to non-conservative approaches that favor more
transmission to account for high erasures. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: A Symbolic Computation Framework for Constitutive Modelling Based On Entropy Principles,
Abstract: The entropy principle in the formulation of Müller and Liu is a common
tool used in constitutive modelling for the development of restrictions on the
unknown constitutive functions describing material properties of various
physical continua. In the current work, a symbolic software implementation of
the Liu algorithm, based on \verb|Maple| software and the \verb|GeM| package,
is presented. The computational framework is used to algorithmically perform
technically demanding symbolic computations related to the entropy principle,
to simplify and reduce Liu's identities, and ultimately to derive explicit
formulas describing classes of constitutive functions that do not violate the
entropy principle. Detailed physical examples are presented and discussed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Computer Science"
] |
Title: Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases,
Abstract: Text representations using neural word embeddings have proven effective in
many NLP applications. Recent researches adapt the traditional word embedding
models to learn vectors of multiword expressions (concepts/entities). However,
these methods are limited to textual knowledge bases (e.g., Wikipedia). In this
paper, we propose a novel and simple technique for integrating the knowledge
about concepts from two large scale knowledge bases of different structure
(Wikipedia and Probase) in order to learn concept representations. We adapt the
efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia
text and Probase concept graph. We evaluate our concept embedding models on two
tasks: (1) analogical reasoning, where we achieve a state-of-the-art
performance of 91% on semantic analogies, (2) concept categorization, where we
achieve a state-of-the-art performance on two benchmark datasets achieving
categorization accuracy of 100% on one and 98% on the other. Additionally, we
present a case study to evaluate our model on unsupervised argument type
identification for neural semantic parsing. We demonstrate the competitive
accuracy of our unsupervised method and its ability to better generalize to out
of vocabulary entity mentions compared to the tedious and error prone methods
which depend on gazetteers and regular expressions. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: High order surface radiation conditions for time-harmonic waves in exterior domains,
Abstract: We formulate a new family of high order on-surface radiation conditions to
approximate the outgoing solution to the Helmholtz equation in exterior
domains. Motivated by the pseudo-differential expansion of the
Dirichlet-to-Neumann operator developed by Antoine et al. (J. Math. Anal. Appl.
229:184-211, 1999), we design a systematic procedure to apply
pseudo-differential symbols of arbitrarily high order. Numerical results are
presented to illustrate the performance of the proposed method for solving both
the Dirichlet and the Neumann boundary value problems. Possible improvements
and extensions are also discussed. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Probabilistic PARAFAC2,
Abstract: The PARAFAC2 is a multimodal factor analysis model suitable for analyzing
multi-way data when one of the modes has incomparable observation units, for
example because of differences in signal sampling or batch sizes. A fully
probabilistic treatment of the PARAFAC2 is desirable in order to improve
robustness to noise and provide a well founded principle for determining the
number of factors, but challenging because the factor loadings are constrained
to be orthogonal. We develop two probabilistic formulations of the PARAFAC2
along with variational procedures for inference: in one approach, the mean
values of the factor loadings are orthogonal, leading to closed-form
variational updates, and in the other, the factor loadings themselves are
orthogonal, using a matrix von Mises-Fisher distribution. We contrast our
probabilistic
formulation to the conventional direct fitting algorithm based on maximum
likelihood. On simulated data and real fluorescence spectroscopy and gas
chromatography-mass spectrometry data, we compare our approach to the
conventional PARAFAC2 model estimation and find that the probabilistic
formulation is more robust to noise and model order misspecification. The
probabilistic PARAFAC2 thus forms a promising framework for modeling multi-way
data accounting for uncertainty. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Evolutionary Stability of Reputation Management System in Peer to Peer Networks,
Abstract: Each participant in a peer-to-peer network prefers to free-ride on
the contribution of other participants. Reputation-based resource sharing is a
way to control this free riding. Instead of classical game theory, we use
evolutionary game theory to analyse reputation-based resource sharing in
peer-to-peer systems. The classical game-theoretic approach requires global
information about the population, whereas evolutionary games assume only light
cognitive capabilities of users: each user imitates the behavior of another
user with a better payoff. We find that, without any extra benefit, the
reputation strategy is not stable in the system. We also find the fraction of
users who must calculate reputation to control free riding in equilibrium. In
this work we first build a game-theoretic model for the reputation system and
then calculate the threshold on the fraction of users at which the reputation
strategy is sustainable in the system. We find that under simplistic conditions
reputation calculation is not an evolutionarily stable strategy, but if we
impose an initial payment on all users and then distribute that payment among
the users who calculate reputation, then reputation becomes an evolutionarily
stable strategy. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
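
A hedged sketch, assuming numpy, of the evolutionary analysis: replicator dynamics over two strategies, "calculate reputation" (which carries a per-round cost) versus "free ride", where every user first pays an entry fee that is redistributed to the reputation calculators. The payoff numbers are illustrative, not taken from the paper's model.

```python
import numpy as np

def payoffs(x, benefit=1.0, cost=0.3, fee=0.2):
    # x = fraction of users currently calculating reputation
    shared = benefit * x                      # resource quality rises with reputation use
    refund = fee / max(x, 1e-9)               # entry fees redistributed to calculators
    pi_rep = shared - cost + refund - fee
    pi_free = shared - fee
    return pi_rep, pi_free

x, dt = 0.1, 0.01
for _ in range(20000):                        # Euler steps of the replicator equation
    pi_rep, pi_free = payoffs(x)
    avg = x * pi_rep + (1 - x) * pi_free
    x += dt * x * (pi_rep - avg)
    x = min(max(x, 0.0), 1.0)

print("equilibrium fraction of reputation calculators:", round(x, 3))
```
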
Title: Intermetallic Nanocrystals: Syntheses and Catalytic Applications,
Abstract: At the forefront of nanochemistry, there exists a research endeavor centered
around intermetallic nanocrystals, which are unique in terms of long-range
atomic ordering, well-defined stoichiometry, and controlled crystal structure.
In contrast to alloy nanocrystals with no atomic ordering, it has been
challenging to synthesize intermetallic nanocrystals with a tight control over
their size and shape. This review article highlights recent progress in the
synthesis of intermetallic nanocrystals with controllable sizes and
well-defined shapes. We begin with a simple analysis and some insights key to
the selection of experimental conditions for generating intermetallic
nanocrystals. We then present examples to highlight the viable use of
intermetallic nanocrystals as electrocatalysts or catalysts for various
reactions, with a focus on the enhanced performance relative to their alloy
counterparts that lack atomic ordering. We conclude with perspectives on future
developments in the context of synthetic control, structure-property
relationship, and application. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Nonlinear Mapping Convergence and Application to Social Networks,
Abstract: This paper discusses discrete-time maps of the form $x(k + 1) = F(x(k))$,
focussing on equilibrium points of such maps. Under some circumstances,
Lefschetz fixed-point theory can be used to establish the existence of a single
locally attractive equilibrium (which is sometimes globally attractive) when a
general property of local attractivity is known for any equilibrium. Problems
in social networks often involve such discrete-time systems, and we make an
application to one such problem. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Links with nontrivial Alexander polynomial which are topologically concordant to the Hopf link,
Abstract: We give infinitely many $2$-component links with unknotted components which
are topologically concordant to the Hopf link, but not smoothly concordant to
any $2$-component link with trivial Alexander polynomial. Our examples are
pairwise non-concordant. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Shape optimization in laminar flow with a label-guided variational autoencoder,
Abstract: Computational design optimization in fluid dynamics usually requires to solve
non-linear partial differential equations numerically. In this work, we explore
a Bayesian optimization approach to minimize an object's drag coefficient in
laminar flow based on predicting drag directly from the object shape. Jointly
training an architecture combining a variational autoencoder mapping shapes to
latent representations and Gaussian process regression allows us to generate
improved shapes in the two dimensional case we consider. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Physics"
] |
Title: A combination chaotic system and application in color image encryption,
Abstract: In this paper, by using Logistic, Sine and Tent systems we define a
combination chaotic system. Some properties of the chaotic system are studied
by using figures and numerical results. A color image encryption algorithm is
introduced based on the new chaotic system. This encryption algorithm can also
be used for grayscale or binary images.
The experimental results of the encryption algorithm show that the encryption
algorithm is secure and practical. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
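
A hedged sketch, assuming numpy, of how such a scheme is typically wired up: the Logistic, Sine and Tent maps are combined into one keystream generator whose output is XORed with the image bytes. The combination rule and parameters below are assumptions for illustration, not the paper's exact system, and no security claim is made for this toy version.

```python
import numpy as np

def keystream(n, x0=0.37, r=3.99, mu=1.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        logistic = r * x * (1.0 - x)
        sine = np.sin(np.pi * x)
        tent = mu * x if x < 0.5 else mu * (1.0 - x)
        x = (logistic + sine + tent) / 3.0 % 1.0   # one simple way to combine the maps
        out[i] = int(x * 256) % 256
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # a toy RGB image
ks = keystream(img.size).reshape(img.shape)
cipher = img ^ ks                                              # encryption
restored = cipher ^ ks                                         # decryption
print(bool((restored == img).all()))                           # True
```
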
Title: Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network,
Abstract: Brains need to predict how the body reacts to motor commands. It is an open
question how networks of spiking neurons can learn to reproduce the non-linear
body dynamics caused by motor commands, using local, online and stable learning
rules. Here, we present a supervised learning scheme for the feedforward and
recurrent connections in a network of heterogeneous spiking neurons. The error
in the output is fed back through fixed random connections with a negative
gain, causing the network to follow the desired dynamics, while an online and
local rule changes the weights. The rule for Feedback-based Online Local
Learning Of Weights (FOLLOW) is local in the sense that weight changes depend
on the presynaptic activity and the error signal projected onto the
postsynaptic neuron. We provide examples of learning linear, non-linear and
chaotic dynamics, as well as the dynamics of a two-link arm. Using the Lyapunov
method, and under reasonable assumptions and approximations, we show that
FOLLOW learning is stable uniformly, with the error going to zero
asymptotically. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits,
Abstract: In this paper, we propose and study opportunistic bandits - a new variant of
bandits where the regret of pulling a suboptimal arm varies under different
environmental conditions, such as network load or produce price. When the
load/price is low, so is the cost/regret of pulling a suboptimal arm (e.g.,
trying a suboptimal network configuration). Therefore, intuitively, we could
explore more when the load/price is low and exploit more when the load/price is
high. Inspired by this intuition, we propose an Adaptive Upper-Confidence-Bound
(AdaUCB) algorithm to adaptively balance the exploration-exploitation tradeoff
for opportunistic bandits. We prove that AdaUCB achieves $O(\log T)$ regret
with a smaller coefficient than the traditional UCB algorithm. Furthermore,
AdaUCB achieves $O(1)$ regret with respect to $T$ if the exploration cost is
zero when the load level is below a certain threshold. Last, based on both
synthetic data and real-world traces, experimental results show that AdaUCB
significantly outperforms other bandit algorithms, such as UCB and TS (Thompson
Sampling), under large load/price fluctuations. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
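
A hedged sketch, assuming numpy, of the opportunistic intuition: scale the UCB exploration bonus by the current load, so the learner explores mostly when the load (and hence the regret of a bad pull) is low. The scaling rule and constants capture the idea described above, not the exact AdaUCB algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.5, 0.6, 0.7])              # unknown Bernoulli arm means
T, K = 20000, len(means)
counts, sums = np.zeros(K), np.zeros(K)

regret = 0.0
for t in range(1, T + 1):
    load = rng.uniform(0.0, 1.0)               # exogenous load/price in [0, 1]
    if np.any(counts == 0):
        a = int(np.argmin(counts))             # pull each arm once to start
    else:
        bonus = np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(sums / counts + (1.0 - load) * bonus))  # explore when load is low
    reward = float(rng.random() < means[a])
    counts[a] += 1; sums[a] += reward
    regret += load * (means.max() - means[a])  # regret weighted by the load, as in the setting

print("load-weighted regret:", round(regret, 1), "pulls:", counts.astype(int))
```
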
Title: An integral formula for Riemannian $G$-structures with applications to almost hermitian and almost contact structures,
Abstract: For a Riemannian $G$-structure, we compute the divergence of the vector field
induced by the intrinsic torsion. Applying the Stokes theorem, we obtain the
integral formula on a closed oriented Riemannian manifold, which we interpret
in certain cases. We focus on almost hermitian and almost contact metric
structures. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |