| text (string, lengths 57 to 2.88k) | labels (sequence, length 6) |
|---|---|
Title: Prediction of helium vapor quality in steady state Two-phase operation for SST-1 Toroidal field magnets,
Abstract: Steady State Superconducting Tokamak (SST-1) at the Institute for Plasma
Research (IPR) is an operational device and is the first superconducting
Tokamak in India. The Superconducting Magnets System (SCMS) in SST-1 comprises
sixteen Toroidal Field (TF) magnets and nine Poloidal Field (PF) magnets
manufactured using the NbTi/Cu based cable-in-conduit-conductor (CICC) concept.
The SST-1 superconducting TF magnets are operated in a cryo-stable manner,
being cooled with two-phase (TP) helium flow. The typical operating pressure of
the TP helium is 1.6 bar (a) at the corresponding saturation temperature. The
SCMS has a typical cool-down time of about 14 days from 300 K down to 4.5 K,
using a helium plant with an equivalent cooling capacity of 1350 W at 4.5 K.
Using the experimental data from the HRL, we estimated the vapor quality for
the input heat load on the TF magnets system. In this paper, we report the
characteristics of two-phase flow for given thermo-hydraulic conditions during
long steady state operation of the SST-1 TF magnets. Finally, the
experimentally obtained results have been compared with the well-known
correlations of two-phase flow. | [
0,
1,
0,
0,
0,
0
] |
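For context on the quantity estimated in the abstract above: in steady state, an energy balance over a two-phase cooling channel relates the exit vapor quality to the applied heat load, the helium mass flow rate, and the latent heat of vaporization. The Python sketch below is a minimal illustration of that balance only; the numerical values are placeholders and this is not the authors' estimation procedure.

```python
def exit_vapor_quality(q_load_w, m_dot_kg_s, h_fg_j_per_kg, x_in=0.0):
    """Steady-state energy balance for a two-phase helium channel:
    all absorbed heat vaporizes liquid, so the quality rises by
    Q / (m_dot * h_fg) between inlet and outlet."""
    return x_in + q_load_w / (m_dot_kg_s * h_fg_j_per_kg)

# Placeholder numbers (not taken from the paper): 5 W heat load,
# 2 g/s helium flow, latent heat near 1.6 bar(a) of roughly 20 kJ/kg.
print(exit_vapor_quality(q_load_w=5.0, m_dot_kg_s=2e-3, h_fg_j_per_kg=20e3))
```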
Title: Criticality as It Could Be: organizational invariance as self-organized criticality in embodied agents,
Abstract: This paper outlines a methodological approach for designing adaptive agents
driving themselves near points of criticality. Using a synthetic approach we
construct a conceptual model that, instead of specifying mechanistic
requirements to generate criticality, exploits the maintenance of an
organizational structure capable of reproducing critical behavior. Our approach
exploits the well-known principle of universality, which classifies critical
phenomena inside a few universality classes of systems independently of their
specific mechanisms or topologies. In particular, we implement an artificial
embodied agent controlled by a neural network maintaining a correlation
structure randomly sampled from a lattice Ising model at a critical point. We
evaluate the agent in two classical reinforcement learning scenarios: the
Mountain Car benchmark and the Acrobot double pendulum, finding that in both
cases the neural controller reaches a point of criticality, which coincides
with a transition point between two regimes of the agent's behaviour,
maximizing the mutual information between neurons and sensorimotor patterns.
Finally, we discuss the possible applications of this synthetic approach to the
comprehension of deeper principles connected to the pervasive presence of
criticality in biological and cognitive systems. | [
1,
1,
0,
0,
0,
0
] |
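One building block mentioned in the abstract above is a correlation structure sampled from a lattice Ising model at its critical point. As a minimal, self-contained illustration of just that step (not of the embodied agent or its neural controller), the following sketch runs Metropolis sampling of a small 2D Ising lattice at the exact critical temperature and computes the pairwise spin correlation matrix; lattice size and sweep counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                       # small lattice for illustration
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))      # critical temperature (J = k_B = 1)
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s, T):
    """One Metropolis sweep over the lattice."""
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

samples = []
for t in range(3000):
    sweep(spins, T_c)
    if t >= 500 and t % 10 == 0:            # discard burn-in, thin the chain
        samples.append(spins.flatten().copy())

S = np.array(samples, dtype=float)
corr = np.corrcoef(S, rowvar=False)          # pairwise spin-spin correlation matrix
print(corr.shape)                            # (64, 64)
```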
Title: Probing Hidden Spin Order with Interpretable Machine Learning,
Abstract: The search for unconventional magnetic and non-magnetic states is a major
topic in the study of frustrated magnetism. Canonical examples of such states
include various spin liquids and spin nematics. However, discerning their
existence and the correct characterization is usually challenging. Here we
introduce a machine-learning protocol that can identify general nematic order
and its order parameter from seemingly featureless spin configurations, thus
providing comprehensive insight into the presence or absence of hidden orders. We
demonstrate the capabilities of our method by extracting the analytical form of
nematic order parameter tensors up to rank 6. This may prove useful in the
search for novel spin states and for ruling out spurious spin liquid
candidates. | [
0,
0,
0,
1,
0,
0
] |
Title: Observation and calculation of the quasi-bound rovibrational levels of the electronic ground state of H$_2^+$,
Abstract: Although the existence of quasi-bound rotational levels of the $X^+ \
^2\Sigma_g^+$ ground state of H$_2^+$ was predicted a long time ago, these
states have never been observed. Calculated positions and widths of quasi-bound
rotational levels located close to the top of the centrifugal barriers have not
been reported either. Given the role that such states play in the recombination
of H(1s) and H$^+$ to form H$_2^+$, this lack of data may be regarded as one of
the largest unknown aspects of this otherwise accurately known fundamental
molecular cation. We present measurements of the positions and widths of the
lowest-lying quasi-bound rotational levels of H$_2^+$ and compare the
experimental results with the positions and widths we calculate using a
potential model for the $X^+$ state of H$_2^+$ which includes adiabatic,
nonadiabatic, relativistic and radiative corrections to the Born-Oppenheimer
approximation. | [
0,
1,
0,
0,
0,
0
] |
Title: Out-of-focus: Learning Depth from Image Bokeh for Robotic Perception,
Abstract: In this project, we propose a novel approach for estimating depth from RGB
images. Traditionally, most work uses a single RGB image to estimate depth,
which is inherently difficult and generally results in poor performance, even
with thousands of data examples. In this work, we alternatively use multiple
RGB images that were captured while changing the focus of the camera's lens.
This method leverages the natural depth information correlated to the different
patterns of clarity/blur in the sequence of focal images, which helps
distinguish objects at different depths. Since no such data set exists for
learning this mapping, we collect our own data set using customized hardware.
We then use a convolutional neural network for learning the depth from the
stacked focal images. Comparative studies were conducted on both a standard
RGBD data set and our own data set (learning from both single and multiple
images), and results verified that stacked focal images yield better depth
estimation than using just a single RGB image. | [
1,
0,
0,
0,
0,
0
] |
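To make the input representation described above concrete (the differently-focused images stacked along the channel dimension of a convolutional network that regresses depth), here is a tiny PyTorch sketch. The layer sizes, the number of focal slices, and the single-convolution head are placeholders chosen for brevity, not the network used in the project.

```python
import torch
import torch.nn as nn

class FocalStackDepthNet(nn.Module):
    """Toy CNN that maps a stack of N differently-focused RGB images
    (concatenated along the channel axis) to a dense depth map."""
    def __init__(self, n_focal=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * n_focal, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)    # per-pixel depth

    def forward(self, x):                              # x: (batch, 3 * n_focal, H, W)
        return self.head(self.encoder(x))

stack = torch.randn(2, 3 * 5, 64, 64)                  # dummy focal stack
print(FocalStackDepthNet(n_focal=5)(stack).shape)      # torch.Size([2, 1, 64, 64])
```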
Title: GibbsNet: Iterative Adversarial Inference for Deep Graphical Models,
Abstract: Directed latent variable models that formulate the joint distribution as
$p(x,z) = p(z) p(x \mid z)$ have the advantage of fast and exact sampling.
However, these models have the weakness of needing to specify $p(z)$, often
with a simple fixed prior that limits the expressiveness of the model.
Undirected latent variable models discard the requirement that $p(z)$ be
specified with a prior, yet sampling from them generally requires an iterative
procedure such as blocked Gibbs-sampling that may require many steps to draw
samples from the joint distribution $p(x, z)$. We propose a novel approach to
learning the joint distribution between the data and a latent code which uses
an adversarially learned iterative procedure to gradually refine the joint
distribution, $p(x, z)$, to better match with the data distribution on each
step. GibbsNet is the best of both worlds both in theory and in practice.
Achieving the speed and simplicity of a directed latent variable model, it is
guaranteed (assuming the adversarial game reaches the virtual training criteria
global minimum) to produce samples from $p(x, z)$ with only a few sampling
iterations. Achieving the expressiveness and flexibility of an undirected
latent variable model, GibbsNet does away with the need for an explicit $p(z)$
and has the ability to do attribute prediction, class-conditional generation,
and joint image-attribute modeling in a single model which is not trained for
any of these specific tasks. We show empirically that GibbsNet is able to learn
a more complex $p(z)$ and show that this leads to improved inpainting and
iterative refinement of $p(x, z)$ for dozens of steps and stable generation
without collapse for thousands of steps, despite being trained on only a few
steps. | [
1,
0,
0,
1,
0,
0
] |
Title: Characterization of 1-Tough Graphs using Factors,
Abstract: For a graph $G$, let $odd(G)$ and $\omega(G)$ denote the number of odd
components and the number of components of $G$, respectively. Then it is
well-known that $G$ has a 1-factor if and only if $odd(G-S)\le |S|$ for all
$S\subset V(G)$. Also it is clear that $odd(G-S) \le \omega(G-S)$. In this
paper we characterize a 1-tough graph $G$, which satisfies $\omega(G-S) \le
|S|$ for all $\emptyset \ne S \subset V(G)$, using an $H$-factor of a
set-valued function $H:V(G) \to \{ \{1\}, \{0,2\} \}$. Moreover, we generalize
this characterization to a graph that satisfies $\omega(G-S) \le f(S)$ for all
$\emptyset \ne S \subset V(G)$, where $f:V(G) \to \{1,3,5, \ldots\}$. | [
0,
0,
1,
0,
0,
0
] |
Title: A forward--backward random process for the spectrum of 1D Anderson operators,
Abstract: We give a new expression for the law of the eigenvalues of the discrete
Anderson model on the finite interval $[0,N]$, in terms of two random processes
starting at both ends of the interval. Using this formula, we deduce that the
tail of the eigenvectors behaves approximately like $\exp(\sigma
B_{|n-k|}-\gamma\frac{|n-k|}{4})$ where $B_{s}$ is the Brownian motion and
$k$ is uniformly chosen in $[0,N]$ independently of $B_{s}$. A similar result
has recently been shown by B. Rifkind and B. Virag in the critical case, that
is, when the random potential is multiplied by a factor $\frac{1}{\sqrt{N}}$. | [
0,
0,
1,
0,
0,
0
] |
Title: Importance sampling the union of rare events with an application to power systems analysis,
Abstract: We consider importance sampling to estimate the probability $\mu$ of a union
of $J$ rare events $H_j$ defined by a random variable $\boldsymbol{x}$. The
sampler we study has been used in spatial statistics, genomics and
combinatorics going back at least to Karp and Luby (1983). It works by sampling
one event at random, then sampling $\boldsymbol{x}$ conditionally on that event
happening, and it constructs an unbiased estimate of $\mu$ by multiplying an
inverse moment of the number of occurring events by the union bound. We prove
some variance bounds for this sampler. For a sample size of $n$, it has a
variance no larger than $\mu(\bar\mu-\mu)/n$ where $\bar\mu$ is the union
bound. It also has a coefficient of variation no larger than
$\sqrt{(J+J^{-1}-2)/(4n)}$ regardless of the overlap pattern among the $J$
events. Our motivating problem comes from power system reliability, where the
phase differences between connected nodes have a joint Gaussian distribution
and the $J$ rare events arise from unacceptably large phase differences. In the
grid reliability problems even some events defined by $5772$ constraints in
$326$ dimensions, with probability below $10^{-22}$, are estimated with a
coefficient of variation of about $0.0024$ with only $n=10{,}000$ sample
values. | [
1,
0,
0,
1,
0,
0
] |
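The sampler described in the abstract above (often attributed to Karp and Luby) is simple enough to spell out on a toy discrete example where the union probability can also be computed exactly by brute force. The events and sample sizes below are made up purely for illustration.

```python
import random

random.seed(1)
N = 10_000                                          # x ~ Uniform{0, ..., N-1}
events = [(100, 180), (150, 260), (9_900, 9_950)]   # H_j = {a_j <= x < b_j}
p = [(b - a) / N for a, b in events]                # P(H_j)
mu_bar = sum(p)                                     # union bound

def count_hits(x):
    return sum(a <= x < b for a, b in events)

# Exact answer by brute force, for comparison.
mu_exact = sum(count_hits(x) > 0 for x in range(N)) / N

n = 5_000
total = 0.0
for _ in range(n):
    # 1. pick an event j with probability P(H_j) / mu_bar
    j = random.choices(range(len(events)), weights=p)[0]
    # 2. sample x conditionally on H_j (uniform inside the interval)
    a, b = events[j]
    x = random.randrange(a, b)
    # 3. unbiased contribution: union bound times 1 / (#events containing x)
    total += mu_bar / count_hits(x)

print(mu_exact, total / n)   # the two numbers should be close
```

Unbiasedness follows because each $x$ in the union is generated with probability proportional to the number of events it belongs to, which the $1/S(x)$ factor cancels.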
Title: Matrix product moments in normal variables,
Abstract: Let ${\cal X }=XX^{\prime}$ be a random matrix associated with a centered
$r$-column Gaussian vector $X$ with a covariance matrix $P$. In this
article we compute expectations of matrix-products of the form $\prod_{1\leq
i\leq n}({\cal X } P^{v_i})$ for any $n\geq 1$ and any multi-index parameters
$v_i\in\mathbb{N}$. We derive closed form formulae and a simple sequential
algorithm to compute these matrices w.r.t. the parameter $n$. The second part
of the article is dedicated to a non-commutative binomial formula for the
central matrix-moments $\mathbb{E}\left(\left[{\cal X }-P\right]^n\right)$. The
matrix product moments discussed in this study are expressed in terms of
polynomial formulae w.r.t. the powers of the covariance matrix, with
coefficients depending on the trace of these matrices. We also derive a series
of estimates w.r.t. the Loewner order on quadratic forms. For instance we shall
prove the rather crude estimate $\mathbb{E}\left(\left[{\cal X
}-P\right]^n\right)\leq \mathbb{E}\left({\cal X }^n-P^n\right)$, for any $n\geq
1$. | [
0,
0,
1,
1,
0,
0
] |
Title: Population-specific design of de-immunized protein biotherapeutics,
Abstract: Immunogenicity is a major problem during the development of biotherapeutics
since it can lead to rapid clearance of the drug and adverse reactions. The
challenge for biotherapeutic design is therefore to identify mutants of the
protein sequence that minimize immunogenicity in a target population whilst
retaining pharmaceutical activity and protein function. Current approaches are
moderately successful in designing sequences with reduced immunogenicity, but
do not account for the varying frequencies of different human leucocyte antigen
alleles in a specific population; in addition, since many designs are
non-functional, they require costly experimental post-screening. Here we report a
new method for de-immunization design using multi-objective combinatorial
optimization that simultaneously optimizes the likelihood of a functional
protein sequence while minimizing its immunogenicity, tailored to
a target population. We bypass the need for three-dimensional protein structure
or molecular simulations to identify functional designs by automatically
generating sequences using probabilistic models that have been used previously
for mutation effect prediction and structure prediction. As proof-of-principle
we designed sequences of the C2 domain of Factor VIII and tested them
experimentally, resulting in a good correlation with the predicted
immunogenicity of our model. | [
1,
0,
0,
0,
0,
0
] |
Title: Linearized Binary Regression,
Abstract: Probit regression was first proposed by Bliss in 1934 to study mortality
rates of insects. Since then, an extensive body of work has analyzed and used
probit or related binary regression methods (such as logistic regression) in
numerous applications and fields. This paper provides a fresh angle to such
well-established binary regression methods. Concretely, we demonstrate that
linearizing the probit model in combination with linear estimators performs on
par with state-of-the-art nonlinear regression methods, such as posterior mean
or maximum a posteriori estimation, for a broad range of real-world regression
problems. We derive exact, closed-form, and nonasymptotic expressions for the
mean-squared error of our linearized estimators, which clearly separates them
from nonlinear regression methods that are typically difficult to analyze. We
showcase the efficacy of our methods and results for a number of synthetic and
real-world datasets, which demonstrates that linearized binary regression finds
potential use in a variety of inference, estimation, signal processing, and
machine learning applications that deal with binary-valued observations or
measurements. | [
0,
0,
0,
1,
0,
0
] |
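The paper's exact linearized estimators and their closed-form error expressions are not reproduced here, but the general idea, fitting binary $\pm1$ observations with an ordinary linear least-squares estimator and comparing it to a nonlinear probit fit, can be sketched on synthetic data as follows; treat it only as a generic illustration under these assumptions, not as the authors' method.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.where(X @ w_true + rng.standard_normal(n) > 0, 1.0, -1.0)  # probit data

# Linear estimator: ordinary least squares on the +/-1 labels.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Nonlinear baseline: probit regression fitted by plain gradient ascent
# on the log-likelihood (a crude stand-in for posterior-mean / MAP estimators).
w_pr = np.zeros(d)
for _ in range(500):
    z = y * (X @ w_pr)
    grad = X.T @ (y * norm.pdf(z) / np.clip(norm.cdf(z), 1e-12, None)) / n
    w_pr += 0.5 * grad

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Both recover the direction of w_true (only identifiable up to scale here).
print(cos_sim(w_ls, w_true), cos_sim(w_pr, w_true))
```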
Title: Arithmetic properties of polynomials,
Abstract: In this paper, first, we prove that the Diophantine system
\[f(z)=f(x)+f(y)=f(u)-f(v)=f(p)f(q)\] has infinitely many integer solutions for
$f(X)=X(X+a)$ with nonzero integers $a\equiv 0,1,4\pmod{5}$. Second, we show
that the above Diophantine system has an integer parametric solution for
$f(X)=X(X+a)$ with nonzero integers $a$, if there are integers $m,n,k$ such
that \[\begin{cases} \begin{split} (n^2-m^2) (4mnk(k+a+1) + a(m^2+2mn-n^2))
&\equiv0\pmod{(m^2+n^2)^2},\\ (m^2+2mn-n^2) ((m^2-2mn-n^2)k(k+a+1) - 2amn)
&\equiv0 \pmod{(m^2+n^2)^2}, \end{split} \end{cases}\] where $k\equiv0\pmod{4}$
when $a$ is even, and $k\equiv2\pmod{4}$ when $a$ is odd. Third, we get that
the Diophantine system \[f(z)=f(x)+f(y)=f(u)-f(v)=f(p)f(q)=\frac{f(r)}{f(s)}\]
has a five-parameter rational solution for $f(X)=X(X+a)$ with nonzero rational
number $a$ and infinitely many nontrivial rational parametric solutions for
$f(X)=X(X+a)(X+b)$ with nonzero integers $a,b$ and $a\neq b$. Lastly, we raise
some related questions. | [
0,
0,
1,
0,
0,
0
] |
Title: Large-type Artin groups are systolic,
Abstract: We prove that Artin groups from a class containing all large-type Artin
groups are systolic. This provides a concise yet precise description of their
geometry. Immediate consequences are new results concerning large-type Artin
groups: biautomaticity; existence of $EZ$-boundaries; the Novikov conjecture;
descriptions of finitely presented subgroups, of virtually solvable subgroups,
and of centralizers for infinite order elements; the Burghelea conjecture and
the Bass conjecture; existence of low-dimensional models for classifying spaces
for some families of subgroups. | [
0,
0,
1,
0,
0,
0
] |
Title: An optimization approach for dynamical Tucker tensor approximation,
Abstract: An optimization-based approach for the Tucker tensor approximation of
parameter-dependent data tensors and solutions of tensor differential equations
with low Tucker rank is presented. The problem of updating the tensor
decomposition is reformulated as a fitting problem subject to the tangent space
without relying on an orthogonality gauge condition. A discrete Euler scheme is
established in an alternating least squares framework, where the quadratic
subproblems reduce to trace optimization problems that are shown to be
explicitly solvable and accessible using an SVD of small size. In the presence of
small singular values, instability for larger ranks is reduced, since the
method does not need the (pseudo) inverse of matricizations of the core tensor.
Regularization of Tikhonov type can be used to compensate for the lack of
uniqueness in the tangent space. The method is validated numerically and shown
to be stable also for larger ranks in the case of small singular values of the
core unfoldings. Higher order explicit integrators of Runge-Kutta type can be
composed. | [
0,
1,
0,
0,
0,
0
] |
Title: On right $S$-Noetherian rings and $S$-Noetherian modules,
Abstract: In this paper we study right $S$-Noetherian rings and modules, extending
notions introduced by Anderson and Dumitrescu in commutative algebra to
noncommutative rings. Two characterizations of right $S$-Noetherian rings are
given in terms of completely prime right ideals and point annihilator sets. We
also prove an existence result for completely prime point annihilators of
certain $S$-Noetherian modules with the following consequence in commutative
algebra: If a module $M$ over a commutative ring is $S$-Noetherian with respect
to a multiplicative set $S$ that contains no zero-divisors for $M$, then $M$
has an associated prime. | [
0,
0,
1,
0,
0,
0
] |
Title: Reconfiguration of Brain Network between Resting-state and Oddball Paradigm,
Abstract: The oddball paradigm is widely applied to the investigation of multiple
cognitive functions. Prior studies have explored how cortical oscillations and
power spectra differ between the resting-state condition and the oddball
paradigm, but whether brain networks show significant differences is still
unclear. Our study addressed how the brain reconfigures its architecture from a
resting-state condition (i.e., baseline) to the P300 stimulus task in the visual
oddball paradigm. In this study, electroencephalogram (EEG) datasets were
collected from 24 postgraduate students, who were required to only mentally
count the number of target stimuli; afterwards, the functional EEG networks
constructed in different frequency bands were compared between baseline and
oddball task conditions to evaluate the reconfiguration of functional network
in the brain. Compared to the baseline, our results showed significantly (p
< 0.05) enhanced delta/theta EEG connectivity and a decreased alpha default-mode
network during the brain's reconfiguration to the P300 task. Furthermore,
the reconfigured coupling strengths were demonstrated to relate to P300
amplitudes, which were then regarded as input features to train a classifier to
differentiate the high and low P300 amplitudes groups with an accuracy of
77.78%. The findings of our study help us to understand the changes of
functional brain connectivity from resting-state to oddball stimulus task, and
the reconfigured network pattern has the potential for the selection of good
subjects for P300-based brain-computer interfaces. | [
0,
0,
0,
0,
1,
0
] |
Title: Optimised information gathering in smartphone users,
Abstract: Human activities from hunting to emailing are performed in a fractal-like
scale invariant pattern. These patterns are considered efficient for hunting or
foraging, but are they efficient for gathering information? Here we link the
scale invariant pattern of inter-touch intervals on the smartphone to optimal
strategies for information gathering. We recorded touchscreen touches in 65
individuals for a month and categorized the activity into checking for
information vs. sharing content. For both categories, the inter-touch intervals
were well described by power-law fits spanning 5 orders of magnitude, from 1 s
to several hours. The power-law exponent typically found for checking was 1.5
and for generating it was 1.3. Next, by using computer simulations we addressed
whether the checking pattern was efficient - in terms of minimizing futile
attempts yielding no new information. We find that the best performing power
law exponent depends on the duration of the assessment and the exponent of 1.5
was the most efficient in the short-term i.e. in the few minutes range.
Finally, we addressed whether the way people generated and shared content was
in tune with the checking pattern. We assumed that the unchecked posts must be
minimized for maximal efficiency and according to our analysis the most
efficient temporal pattern to share content was the exponent of 1.3 - which was
also the pattern displayed by the smartphone users. The behavioral organization
for content generation is different from content consumption across time
scales. We propose that this difference is a signature of optimal behavior and
the short-term assessments used in modern human actions. | [
1,
1,
0,
0,
0,
0
] |
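A common way to obtain exponents like the 1.5 and 1.3 quoted above is the maximum-likelihood estimator for a continuous power-law tail, $\hat\alpha = 1 + n / \sum_i \ln(x_i/x_{\min})$ (Clauset et al.). The sketch below applies it to synthetic inter-event intervals; it is not the authors' fitting pipeline, and $x_{\min}$ is simply assumed rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(alpha, x_min, size):
    """Inverse-CDF sampling of a power-law tail p(x) ~ x^(-alpha), x >= x_min."""
    u = rng.random(size)
    return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def power_law_mle(x, x_min):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(log(x / x_min))."""
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

# Synthetic "checking" gaps in seconds, drawn with exponent 1.5.
intervals = sample_power_law(alpha=1.5, x_min=1.0, size=50_000)
print(power_law_mle(intervals, x_min=1.0))   # should be close to 1.5
```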
Title: Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy,
Abstract: We study anisotropic undersampling schemes like those used in
multi-dimensional NMR spectroscopy and MR imaging, which sample exhaustively in
certain time dimensions and randomly in others.
Our analysis shows that anisotropic undersampling schemes are equivalent to
certain block-diagonal measurement systems. We develop novel exact formulas for
the sparsity/undersampling tradeoffs in such measurement systems. Our formulas
predict finite-N phase transition behavior differing substantially from the
well known asymptotic phase transitions for classical Gaussian undersampling.
Extensive empirical work shows that our formulas accurately describe observed
finite-N behavior, while the usual formulas based on universality are
substantially inaccurate.
We also vary the anisotropy, keeping the total number of samples fixed, and
for each variation we determine the precise sparsity/undersampling tradeoff
(phase transition). We show that, other things being equal, the ability to
recover a sparse object decreases with an increasing number of
exhaustively-sampled dimensions. | [
1,
0,
0,
0,
0,
0
] |
Title: Effect of the non-thermal Sunyaev-Zel'dovich Effect on the temperature determination of galaxy clusters,
Abstract: A recent stacking analysis of Planck HFI data of galaxy clusters (Hurier
2016) allowed the cluster temperatures to be derived by using the relativistic
corrections to the Sunyaev-Zel'dovich effect (SZE). However, the temperatures
of high-temperature clusters, as derived from this analysis, turned out to be
generally higher than the temperatures derived from X-ray measurements, at a
moderate statistical significance of $1.5\sigma$. This discrepancy has been
attributed by Hurier (2016) to calibration issues. In this paper we discuss an
alternative explanation for this discrepancy in terms of a non-thermal SZE
astrophysical component. We find that this explanation can work if non-thermal
electrons in galaxy clusters have a low value of their minimum momentum
($p_1\sim0.5-1$), and if their pressure is of the order of $20-30\%$ of the
thermal gas pressure. Both these conditions are hard to obtain if the
non-thermal electrons are mixed with the hot gas in the intracluster medium,
but can possibly be obtained if the non-thermal electrons are mainly confined
in bubbles with high content of non-thermal plasma and low content of thermal
plasma, or in giant radio lobes/relics located in the outskirts of clusters. In
order to derive more precise results on the properties of non-thermal electrons
in clusters, and in view of more solid detections of a discrepancy between
X-rays and SZE derived clusters temperatures that cannot be explained in other
ways, it would be necessary to reproduce the full analysis done by Hurier
(2016) by adding systematically the non-thermal component of the SZE. | [
0,
1,
0,
0,
0,
0
] |
Title: ModelFactory: A Matlab/Octave based toolbox to create human body models,
Abstract: Background: Model-based analysis of movements can help better understand
human motor control. Here, the models represent the human body as an
articulated multi-body system that reflects the characteristics of the human
being studied.
Results: We present an open-source toolbox that allows for the creation of
human models with easy-to-setup, customizable configurations. The toolbox
scripts are written in Matlab/Octave and provide a command-based interface as
well as a graphical interface to construct, visualize and export models.
Built-in software modules provide functionalities such as automatic scaling of
models based on subject height and weight, custom scaling of segment lengths,
mass and inertia, addition of body landmarks, and addition of motion capture
markers. Users can set up custom definitions of joints, segments and other body
properties using the many included examples as templates. In addition to the
human, any number of objects (e.g. exoskeletons, orthoses, prostheses, boxes)
can be added to the modeling environment.
Conclusions: The ModelFactory toolbox is published as open-source software
under the permissive zLib license. The toolbox fulfills an important function
by making it easier to create human models, and should be of interest to human
movement researchers.
This document is the author's version of this article. | [
1,
0,
0,
0,
1,
0
] |
Title: Dimensionality reduction with missing values imputation,
Abstract: In this study, we propose a new statistical approach for high-dimensionality
reduction of heterogeneous data that limits the curse of dimensionality and
deals with missing values. To handle the latter, we propose to use the Random
Forest imputation method. The main purpose here is to extract useful
information and thus reduce the search space to facilitate the data exploration
process. Several illustrative numerical examples, using data from publicly
available machine learning repositories, are also included. The experimental
component of the study shows the efficiency of the proposed analytical
approach. | [
1,
0,
0,
1,
0,
0
] |
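A minimal sketch, assuming scikit-learn, of the kind of pipeline the abstract above describes: Random-Forest-based iterative imputation of the missing entries followed by a dimensionality-reduction step (plain PCA is used here as a stand-in for the authors' actual reduction method, and the data are synthetic).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
X[rng.random(X.shape) < 0.1] = np.nan        # knock out ~10% of the entries

# Random-Forest imputation: each feature with missing values is modelled
# from the others, iterating until the imputations stabilise.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=30, random_state=0),
    max_iter=3, random_state=0,
)
X_filled = imputer.fit_transform(X)

# Dimensionality reduction on the completed data.
X_low = PCA(n_components=5).fit_transform(X_filled)
print(X_low.shape)    # (200, 5)
```

Using `IterativeImputer` with a `RandomForestRegressor` estimator is the usual scikit-learn stand-in for missForest-style Random Forest imputation.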
Title: On the Wiener-Hopf method for surface plasmons: Diffraction from semi-infinite metamaterial sheet,
Abstract: By formally invoking the Wiener-Hopf method, we explicitly solve a
one-dimensional, singular integral equation for the excitation of a slowly
decaying electromagnetic wave, called surface plasmon-polariton (SPP), of small
wavelength on a semi-infinite, flat conducting sheet irradiated by a plane wave
in two spatial dimensions. This setting is germane to wave diffraction by edges
of large sheets of single-layer graphene. Our analytical approach includes: (i)
formulation of a functional equation in the Fourier domain; (ii) evaluation of
a split function, which is expressed by a contour integral and is a key
ingredient of the Wiener-Hopf factorization; and (iii) extraction of the SPP as
a simple-pole residue of a Fourier integral. Our analytical solution is in good
agreement with a finite-element numerical computation. | [
0,
0,
1,
0,
0,
0
] |
Title: Sim2Real View Invariant Visual Servoing by Recurrent Control,
Abstract: Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
this https URL | [
1,
0,
0,
0,
0,
0
] |
Title: Pressure Drop and Flow development in the Entrance Region of Micro-Channels with Second Order Slip Boundary Conditions and the Requirement for Development Length,
Abstract: In the present investigation, the development of axial velocity profile, the
requirement for development length ($L^*_{fd}=L/D_{h}$) and the pressure drop
in the entrance region of circular and parallel plate micro-channels have been
critically analysed for a large range of operating conditions ($10^{-2}\le
Re\le 10^{4}$, $10^{-4}\le Kn\le 0.2$ and $0\le C_2\le 0.5$). For this purpose,
the conventional Navier-Stokes equations have been numerically solved using the
finite volume method on non-staggered grid, while employing the second-order
velocity slip condition at the wall with $C_1=1$. The results indicate that
although the magnitude of local velocity slip at the wall is always greater
than that for the fully-developed section, the local wall shear stress,
particularly for higher $Kn$ and $C_2$, could be considerably lower than its
fully-developed value. This effect, which is more prominent for lower $Re$,
significantly affects the local and the fully-developed incremental pressure
drop number $K(x)$ and $K_{fd}$, respectively. As a result, depending upon the
operating condition, $K_{fd}$, as well as $K(x)$, could assume negative values.
This hitherto unreported observation implies that, in the presence of enhanced
velocity slip at the wall, the pressure gradient in the developing region could
even be less than that in the fully-developed section. From simulated data, it
has been observed that both $L^*_{fd}$ and $K_{fd}$ are characterised by the
low and the high $Re$ asymptotes, using which, extremely accurate correlations
for them have been proposed for both geometries. Although, owing to its complex
nature, no correlation could be derived for $K(x)$, and an exact knowledge of
$K(x)$ is necessary for evaluating the actual pressure drop for a duct length
$L^*<L^*_{fd}$, a method has been proposed that provides a conservative
estimate of the pressure drop for both $K_{fd}>0$ and $K_{fd}\le0$. | [
0,
1,
0,
0,
0,
0
] |
Title: Household poverty classification in data-scarce environments: a machine learning approach,
Abstract: We describe a method to identify poor households in data-scarce countries by
leveraging information contained in nationally representative household
surveys. It employs standard statistical learning techniques---cross-validation
and parameter regularization---which together reduce the extent to which the
model is over-fitted to match the idiosyncrasies of observed survey data. The
automated framework satisfies three important constraints of this development
setting: i) The prediction model uses at most ten questions, which limits the
costs of data collection; ii) No computation beyond simple arithmetic is needed
to calculate the probability that a given household is poor, immediately after
data on the ten indicators is collected; and iii) One specification of the
model (i.e. one scorecard) is used to predict poverty throughout a country that
may be characterized by significant sub-national differences. Using survey data
from Zambia, the model's out-of-sample predictions distinguish poor households
from non-poor households using information contained in ten questions. | [
0,
0,
0,
1,
0,
0
] |
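A minimal sketch, using scikit-learn on synthetic data, of the two constraints from the abstract above that are easiest to show in code: restricting the model to at most ten questions and regularizing plus cross-validating to limit over-fitting. It is not the authors' scorecard; the data, the feature selector, and the hyper-parameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a household survey: 60 candidate questions, binary poverty label.
X, y = make_classification(n_samples=3000, n_features=60, n_informative=10,
                           random_state=0)

scorecard = make_pipeline(
    SelectKBest(f_classif, k=10),                             # keep at most ten questions
    LogisticRegression(penalty="l2", C=0.5, max_iter=1000),   # regularized, cheap to score
)

# Cross-validation guards against over-fitting to the idiosyncrasies of one survey.
print(cross_val_score(scorecard, X, y, cv=5, scoring="roc_auc").mean())
```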
Title: Dissecting Ponzi schemes on Ethereum: identification, analysis, and impact,
Abstract: Ponzi schemes are financial frauds where, under the promise of high profits,
users put their money, recovering their investment and interest only if enough
users after them continue to invest money. Having originated in the offline world 150
years ago, Ponzi schemes have since migrated to the digital world,
appearing first on the Web, and more recently hanging over cryptocurrencies
like Bitcoin. Smart contract platforms like Ethereum have provided a new
opportunity for scammers, who have now the possibility of creating
"trustworthy" frauds that still make users lose money, but at least are
guaranteed to execute "correctly". We present a comprehensive survey of Ponzi
schemes on Ethereum, analysing their behaviour and their impact from various
viewpoints. Perhaps surprisingly, we identify a remarkably high number of Ponzi
schemes, even though the hosting platform has been operating for less than two
years. | [
1,
0,
0,
0,
0,
0
] |
Title: Chunk-Based Bi-Scale Decoder for Neural Machine Translation,
Abstract: In typical neural machine translation~(NMT), the decoder generates a sentence
word by word, packing all linguistic granularities into the same time-scale of the
RNN. In this paper, we propose a new type of decoder for NMT, which splits the
decoder state into two parts and updates them on two different time-scales.
Specifically, we first predict a chunk time-scale state for phrasal modeling,
on top of which multiple word time-scale states are generated. In this way, the
target sentence is translated hierarchically from chunks to words, with
information in different granularities being leveraged. Experiments show that
our proposed model significantly improves the translation performance over the
state-of-the-art NMT model. | [
1,
0,
0,
0,
0,
0
] |
Title: Sufficient Markov Decision Processes with Alternating Deep Neural Networks,
Abstract: Advances in mobile computing technologies have made it possible to monitor
and apply data-driven interventions across complex systems in real time. Markov
decision processes (MDPs) are the primary model for sequential decision
problems with a large or indefinite time horizon. Choosing a representation of
the underlying decision process that is both Markov and low-dimensional is
non-trivial. We propose a method for constructing a low-dimensional
representation of the original decision process for which: 1. the MDP model
holds; 2. a decision strategy that maximizes mean utility when applied to the
low-dimensional representation also maximizes mean utility when applied to the
original process. We use a deep neural network to define a class of potential
process representations and estimate the process of lowest dimension within
this class. The method is illustrated using data from a mobile study on heavy
drinking and smoking among college students. | [
0,
0,
1,
1,
0,
0
] |
Title: Three-Dimensional Electronic Structure of type-II Weyl Semimetal WTe$_2$,
Abstract: By combining bulk sensitive soft-X-ray angular-resolved photoemission
spectroscopy and accurate first-principles calculations we explored the bulk
electronic properties of WTe$_2$, a candidate type-II Weyl semimetal featuring
a large non-saturating magnetoresistance. Despite the layered geometry
suggesting a two-dimensional electronic structure, we find a three-dimensional
electronic dispersion. We report an evident band dispersion in the reciprocal
direction perpendicular to the layers, implying that electrons can also travel
coherently when crossing from one layer to the other. The measured Fermi
surface is characterized by two well-separated electron and hole pockets at
either side of the $\Gamma$ point, differently from previous more surface
sensitive ARPES experiments that additionally found a significant quasiparticle
weight at the zone center. Moreover, we observe a significant sensitivity of
the bulk electronic structure of WTe$_2$ around the Fermi level to electronic
correlations and renormalizations due to self-energy effects, previously
neglected in first-principles descriptions. | [
0,
1,
0,
0,
0,
0
] |
Title: Decentralized Online Learning with Kernels,
Abstract: We consider multi-agent stochastic optimization problems over reproducing
kernel Hilbert spaces (RKHS). In this setting, a network of interconnected
agents aims to learn decision functions, i.e., nonlinear statistical models,
that are optimal in terms of a global convex functional that aggregates data
across the network, with only access to locally and sequentially observed
samples. We propose solving this problem by allowing each agent to learn a
local regression function while enforcing consensus constraints. We use a
penalized variant of functional stochastic gradient descent operating
simultaneously with low-dimensional subspace projections. These subspaces are
constructed greedily by applying orthogonal matching pursuit to the sequence of
kernel dictionaries and weights. By tuning the projection-induced bias, we
propose an algorithm that allows for each individual agent to learn, based upon
its locally observed data stream and message passing with its neighbors only, a
regression function that is close to the globally optimal regression function.
That is, we establish that with constant step-size selections agents' functions
converge to a neighborhood of the globally optimal one while satisfying the
consensus constraints as the penalty parameter is increased. Moreover, the
complexity of the learned regression functions is guaranteed to remain finite.
On both multi-class kernel logistic regression and multi-class kernel support
vector classification with data generated from class-dependent Gaussian mixture
models, we observe stable function estimation and state of the art performance
for distributed online multi-class classification. Experiments on the Brodatz
textures further substantiate the empirical validity of this approach. | [
1,
0,
1,
1,
0,
0
] |
Title: Enumeration of complementary-dual cyclic $\mathbb{F}_{q}$-linear $\mathbb{F}_{q^t}$-codes,
Abstract: Let $\mathbb{F}_q$ denote the finite field of order $q,$ $n$ be a positive
integer coprime to $q$ and $t \geq 2$ be an integer. In this paper, we
enumerate all the complementary-dual cyclic $\mathbb{F}_q$-linear
$\mathbb{F}_{q^t}$-codes of length $n$ by placing $\ast$, ordinary and
Hermitian trace bilinear forms on $\mathbb{F}_{q^t}^n.$ | [
0,
0,
1,
0,
0,
0
] |
Title: Nearly-Linear Time Spectral Graph Reduction for Scalable Graph Partitioning and Data Visualization,
Abstract: This paper proposes a scalable algorithmic framework for spectral reduction
of large undirected graphs. The proposed method allows computing much smaller
graphs while preserving the key spectral (structural) properties of the
original graph. Our framework is built upon the following two key components: a
spectrum-preserving node aggregation (reduction) scheme, as well as a spectral
graph sparsification framework with iterative edge weight scaling. We show that
the resulting spectrally-reduced graphs can robustly preserve the first few
nontrivial eigenvalues and eigenvectors of the original graph Laplacian. In
addition, the spectral graph reduction method has been leveraged to develop
much faster algorithms for multilevel spectral graph partitioning as well as
t-distributed Stochastic Neighbor Embedding (t-SNE) of large data sets. We
conducted extensive experiments using a variety of large graphs and data sets,
and obtained very promising results. For instance, we are able to reduce the
"coPapersCiteseer" graph with 0.43 million nodes and 16 million edges to a much
smaller graph with only 13K (32X fewer) nodes and 17K (950X fewer) edges in
about 16 seconds; the spectrally-reduced graphs also allow us to achieve up to
1100X speedup for spectral graph partitioning and up to 60X speedup for t-SNE
visualization of large data sets. | [
1,
0,
0,
0,
0,
0
] |
Title: Temperature dependence of the bulk Rashba splitting in the bismuth tellurohalides,
Abstract: We study the temperature dependence of the Rashba-split bands in the bismuth
tellurohalides BiTe$X$ $(X=$ I, Br, Cl) from first principles. We find that
increasing temperature reduces the Rashba splitting, with the largest effect
observed in BiTeI with a reduction of the Rashba parameter of $40$% when
temperature increases from $0$ K to $300$ K. These results highlight the
inadequacy of previous interpretations of the observed Rashba splitting in
terms of static-lattice calculations alone. Notably, we find the opposite
trend, a strengthening of the Rashba splitting with rising temperature, in the
pressure-stabilized topological-insulator phase of BiTeI. We propose that the
opposite trends with temperature on either side of the topological phase
transition could be an experimental signature for identifying it. The predicted
temperature dependence is consistent with optical conductivity measurements,
and should also be observable using photoemission spectroscopy, which could
provide further insights into the nature of spin splitting and topology in the
bismuth tellurohalides. | [
0,
1,
0,
0,
0,
0
] |
Title: Towards Understanding the Evolution of the WWW Conference,
Abstract: The World Wide Web conference is a well-established and mature venue with an
already long history. Over the years it has been attracting papers reporting
many important research achievements centered around the Web. In this work we
aim at understanding the evolution of the WWW conference series by detecting
crucial years and important topics. We propose a simple yet novel approach
based on tracking the classification errors of the conference papers according
to their predicted publication years. | [
1,
0,
0,
0,
0,
0
] |
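The approach sketched in the abstract above, training a classifier to predict a paper's publication year from its text and tracking where it errs, can be schematised as below. The data loading is hypothetical (a list of texts and a matching list of years must be supplied); the vectorizer and classifier are generic choices, not necessarily the ones used in the paper.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def yearwise_error_rates(texts, years):
    """Predict each paper's publication year from its text and report,
    per true year, the cross-validated misclassification rate; tracking
    how these errors change over the years is the kind of signal the
    abstract refers to."""
    model = make_pipeline(TfidfVectorizer(min_df=2, stop_words="english"),
                          LogisticRegression(max_iter=1000))
    pred = cross_val_predict(model, texts, years, cv=5)
    errors = defaultdict(list)
    for true, hat in zip(years, pred):
        errors[true].append(true != hat)
    return {y: sum(e) / len(e) for y, e in sorted(errors.items())}

# Usage (hypothetical corpus): yearwise_error_rates(paper_texts, paper_years)
```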
Title: The generalized Milne problem in gas-dusty atmosphere,
Abstract: We consider the generalized Milne problem in non-conservative plane-parallel
optically thick atmosphere consisting of two components - the free electrons
and small dust particles. Recall, that the traditional Milne problem describes
the propagation of radiation through the conservative (without absorption)
optically thick atmosphere when the source of thermal radiation is located far
below the surface. In such a case, the flux of propagating light is the same at
every distance in an atmosphere. In the generalized Milne problem, the flux
changes inside the atmosphere. The solutions of the both Milne problems give
the angular distribution and polarization degree of emerging radiation. The
considered problem depends on two dimensionless parameters W and (a+b), which
depend on three parameters: $\eta$ - the ratio of optical depth due to free
electrons to optical depth due to small dust grains; the absorption factor
$\varepsilon$ of dust grains and two coefficients - $\bar b_1$ and $\bar b_2$,
describing the averaged anisotropic dust grains. These coefficients obey the
relation $\bar b_1+3\bar b_2=1$. The goal of the paper is to study the
dependence of the radiation angular distribution and degree of polarization of
emerging light on these parameters. Here we consider only continuum radiation. | [
0,
1,
0,
0,
0,
0
] |
Title: Fundamental solutions for second order parabolic systems with drift terms,
Abstract: We construct fundamental solutions of second-order parabolic systems of
divergence form with bounded and measurable leading coefficients and divergence
free first-order coefficients in the class of $BMO^{-1}_x$, under the
assumption that weak solutions of the system satisfy a certain local
boundedness estimate. We also establish Gaussian upper bound for such
fundamental solutions under the same conditions. | [
0,
0,
1,
0,
0,
0
] |
Title: CMB in the river frame and gauge invariance at second order,
Abstract: GAUGE INVARIANCE: The Sachs-Wolfe formula describing the Cosmic Microwave
Background (CMB) temperature anisotropies is one of the most important
relations in cosmology. Despite its importance, the gauge invariance of this
formula has only been discussed at first order. Here we discuss the subtle
issue of second-order gauge transformations on the CMB. By introducing two
rules (needed to handle the subtle issues), we prove the gauge invariance of
the second-order Sachs-Wolfe formula and provide several compact expressions
which can be useful for the study of gauge transformations on cosmology. Our
results go beyond a simple technicality: we discuss from a physical point of
view several aspects that improve our understanding of the CMB. We also
elucidate how crucial it is to understand gauge transformations on the CMB in
order to avoid errors and/or misconceptions as occurred in the past. THE RIVER
FRAME: We introduce a cosmological frame which we call the river frame. In this
frame, photons and any object can be thought of as fish swimming in the river,
and relations are easily expressed in either the metric or the covariant
formalism then ensuring a transparent geometric meaning. Finally, our results
show that the river frame is useful to make perturbative and non-perturbative
analysis. In particular, it was already used to obtain the fully nonlinear
generalization of the Sachs-Wolfe formula and is used here to describe
second-order perturbations. | [
0,
1,
0,
0,
0,
0
] |
Title: Active matrix completion with uncertainty quantification,
Abstract: The noisy matrix completion problem, which aims to recover a low-rank matrix
$\mathbf{X}$ from a partial, noisy observation of its entries, arises in many
statistical, machine learning, and engineering applications. In this paper, we
present a new, information-theoretic approach for active sampling (or
designing) of matrix entries for noisy matrix completion, based on the maximum
entropy design principle. One novelty of our method is that it implicitly makes
use of uncertainty quantification (UQ) -- a measure of uncertainty for
unobserved matrix entries -- to guide the active sampling procedure. The
proposed framework reveals several novel insights on the role of compressive
sensing (e.g., coherence) and coding design (e.g., Latin squares) on the
sampling performance and UQ for noisy matrix completion. Using such insights,
we develop an efficient posterior sampler for UQ, which is then used to guide a
closed-form sampling scheme for matrix entries. Finally, we illustrate the
effectiveness of this integrated sampling / UQ methodology in simulation
studies and two applications to collaborative filtering. | [
0,
0,
0,
1,
0,
0
] |
Title: Synthetic geometry of differential equations: I. Jets and comonad structure,
Abstract: We give an abstract formulation of the formal theory of partial differential
equations (PDEs) in synthetic differential geometry, one that would seamlessly
generalize the traditional theory to a range of enhanced contexts, such as
super-geometry, higher (stacky) differential geometry, or even a combination of
both. A motivation for such a level of generality is the eventual goal of
solving the open problem of covariant geometric pre-quantization of locally
variational field theories, which may include fermions and (higher) gauge
fields. (abridged) | [
0,
0,
1,
0,
0,
0
] |
Title: Hamiltonian analogs of combustion engines: a systematic exception to adiabatic decoupling,
Abstract: Workhorse theories throughout all of physics derive effective Hamiltonians to
describe slow time evolution, even though low-frequency modes are actually
coupled to high-frequency modes. Such effective Hamiltonians are accurate
because of \textit{adiabatic decoupling}: the high-frequency modes `dress' the
low-frequency modes, and renormalize their Hamiltonian, but they do not
steadily inject energy into the low-frequency sector. Here, however, we
identify a broad class of dynamical systems in which adiabatic decoupling fails
to hold, and steady energy transfer across a large gap in natural frequency
(`steady downconversion') instead becomes possible, through nonlinear
resonances of a certain form. Instead of adiabatic decoupling, the special
features of multiple time scale dynamics lead in these cases to efficiency
constraints that somewhat resemble thermodynamics. | [
0,
1,
0,
0,
0,
0
] |
Title: Unbiased Simulation for Optimizing Stochastic Function Compositions,
Abstract: In this paper, we introduce an unbiased gradient simulation algorithm for
solving convex optimization problems with stochastic function compositions. We
show that the unbiased gradient generated from the algorithm has finite
variance and finite expected computation cost. We then combined the unbiased
gradient simulation with two variance-reduced algorithms (namely SVRG and SCSG)
and showed that the proposed optimization algorithms based on unbiased gradient
simulations exhibit satisfactory convergence properties. Specifically, in the
SVRG case, the algorithm with simulated gradient can be shown to converge
linearly to optima in expectation and almost surely under strong convexity.
Finally, for the numerical experiments, we applied the algorithms to two
important cases of stochastic function composition optimization: maximizing
Cox's partial likelihood model and training conditional random fields. | [
0,
0,
0,
1,
0,
0
] |
Title: Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context,
Abstract: A robot's ability to understand or ground natural language instructions is
fundamentally tied to its knowledge about the surrounding world. We present an
approach to grounding natural language utterances in the context of factual
information gathered through natural-language interactions and past visual
observations. A probabilistic model estimates, from a natural language
utterance, the objects, relations, and actions that the utterance refers to, the
objectives for future robotic actions it implies, and generates a plan to
execute those actions while updating a state representation to include newly
acquired knowledge from the visual-linguistic context. Grounding a command
necessitates a representation for past observations and interactions; however,
maintaining the full context consisting of all possible observed objects,
attributes, spatial relations, actions, etc., over time is intractable.
Instead, our model, Temporal Grounding Graphs, maintains a learned state
representation for a belief over factual groundings, those derived from
natural-language interactions, and lazily infers new groundings from visual
observations using the context implied by the utterance. This work
significantly expands the range of language that a robot can understand by
incorporating factual knowledge and observations of its workspace in its
inference about the meaning and grounding of natural-language utterances. | [
1,
0,
0,
0,
0,
0
] |
Title: Nonconvex generalizations of ADMM for nonlinear equality constrained problems,
Abstract: The growing demand for efficient and distributed optimization algorithms for
large-scale data stimulates the popularity of the Alternating Direction Method of
Multipliers (ADMM) in numerous areas, such as compressive sensing, matrix
completion, and sparse feature learning. While solving linear equality constrained
problems with ADMM has been extensively explored, there lacks a
generic framework for ADMM to solve problems with nonlinear equality
constraints, which are common in practical applications (e.g., orthogonality
constraints). To address this problem, in this paper, we propose a new generic
ADMM framework for handling nonlinear equality constraints, called neADMM.
First, we propose the generalized problem formulation and systematically
provide the sufficient condition for the convergence of neADMM. Second, we
prove a sublinear convergence rate based on a variational inequality framework
and also provide a novel accelerated strategy for the update of the penalty
parameter. In addition, several practical applications under the generic
framework of neADMM are provided. Experimental results on several applications
demonstrate the usefulness of our neADMM. | [
1,
0,
1,
0,
0,
0
] |
Title: Nearest Embedded and Embedding Self-Nested Trees,
Abstract: Self-nested trees present a systematic form of redundancy in their subtrees
and thus achieve optimal compression rates by DAG compression. A method for
quantifying the degree of self-similarity of plants through self-nested trees
has been introduced by Godin and Ferraro in 2010. The procedure consists in
computing a self-nested approximation, called the nearest embedding self-nested
tree, that both embeds the plant and is the closest to it. In this paper, we
propose a new algorithm that computes the nearest embedding self-nested tree
with a smaller overall complexity, but also the nearest embedded self-nested
tree. We show from simulations that the latter is mostly the closest to the
initial data, which suggests that this better approximation should be used as a
privileged measure of the degree of self-similarity of plants. | [
1,
0,
0,
0,
0,
0
] |
Title: The bottom of the spectrum of time-changed processes and the maximum principle of Schrödinger operators,
Abstract: We give a necessary and sufficient condition for the maximum principle of
Schrödinger operators in terms of the bottom of the spectrum of
time-changed processes. As a corollary, we obtain a sufficient condition for
the Liouville property of Schrödinger operators. | [
0,
0,
1,
0,
0,
0
] |
Title: Autocorrelation and Lower Bound on the 2-Adic Complexity of LSB Sequence of $p$-ary $m$-Sequence,
Abstract: In modern stream cipher, there are many algorithms, such as ZUC, LTE
encryption algorithm and LTE integrity algorithm, using bit-component sequences
of $p$-ary $m$-sequences as the input of the algorithm. Therefore, analyzing
the statistical properties (for example, autocorrelation, linear complexity and
2-adic complexity) of bit-component sequences of $p$-ary $m$-sequences is
becoming an important research topic. In this paper, we first derive some
autocorrelation properties of LSB (Least Significant Bit) sequences of $p$-ary
$m$-sequences, i.e., we convert the problem of computing autocorrelations of
LSB sequences of period $p^n-1$ for any positive $n\geq2$ to the problem of
determining autocorrelations of LSB sequence of period $p-1$. Then, based on
this property and computer calculation, we list some autocorrelation
distributions of LSB sequences of $p$-ary $m$-sequences with order $n$ for some
small primes $p$'s, such as $p=3,5,7,11,17,31$. Additionally, using their
autocorrelation distributions and the method inspired by Hu, we give the lower
bounds on the 2-adic complexities of these LSB sequences. Our results show that
the main parts of all the lower bounds on the 2-adic complexity of these LSB
sequences are larger than $\frac{N}{2}$, where $N$ is the period of these
sequences. Therefore, these bounds are large enough to resist the analysis of
RAA (Rational Approximation Algorithm) for FCSR (Feedback with Carry Shift
Register). Especially, for a Mersenne prime $p=2^k-1$, since all its
bit-component sequences of a $p$-ary $m$-sequence are shift equivalent, our
results hold for all its bit-component sequences. | [
1,
0,
0,
0,
0,
0
] |
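For concreteness, here is a small sketch of the objects discussed above for $p=3$, $n=2$: a ternary $m$-sequence of period $p^n-1=8$ generated by an LFSR whose characteristic polynomial $x^2+x+2$ is primitive over $\mathbb{F}_3$, its LSB sequence, and the periodic autocorrelation of the $\pm1$-mapped LSB sequence. The parameters are chosen only for illustration, not taken from the paper.

```python
import numpy as np

p, n = 3, 2
period = p**n - 1                       # 8

# m-sequence from the recursion s[k+2] = 2*s[k+1] + s[k] (mod 3),
# i.e. characteristic polynomial x^2 + x + 2 over F_3.
s = [0, 1]
while len(s) < period:
    s.append((2 * s[-1] + s[-2]) % p)

lsb = [x & 1 for x in s]                # least significant bit of each symbol
u = np.array([1 - 2 * b for b in lsb])  # map {0,1} -> {+1,-1}

def autocorrelation(seq, tau):
    """Periodic autocorrelation of a +/-1 sequence at shift tau."""
    return int(np.sum(seq * np.roll(seq, -tau)))

print("m-sequence:", s)
print("LSB sequence:", lsb)
print("autocorrelations:", [autocorrelation(u, t) for t in range(period)])
```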
Title: Integration of Machine Learning Techniques to Evaluate Dynamic Customer Segmentation Analysis for Mobile Customers,
Abstract: The telecommunications industry is highly competitive, which means that the
mobile providers need a business intelligence model that can be used to achieve
an optimal level of churners, as well as a minimal level of cost in marketing
activities. Machine learning applications can be used to provide guidance on
marketing strategies. Furthermore, data mining techniques can be used in the
process of customer segmentation. The purpose of this paper is to provide a
detailed analysis of the C.5 algorithm, within naive Bayesian modelling for the
task of segmenting telecommunication customers' behavioural profiles according
to their billing and socio-demographic aspects. The results of an experimental
implementation are reported. | [
1,
0,
0,
1,
0,
0
] |
Title: A Convex Cycle-based Degradation Model for Battery Energy Storage Planning and Operation,
Abstract: A vital aspect in energy storage planning and operation is to accurately
model its operational cost, which mainly comes from the battery cell
degradation. Battery degradation can be viewed as a complex material fatigue
process based on stress cycles. The rainflow algorithm is a popular way of
identifying cycles in a material fatigue process, and has been extensively used
in battery degradation assessment. However, the rainflow algorithm does not
have a closed form, which makes it difficult to include in
optimization. In this paper, we prove that the rainflow cycle-based cost is convex.
Convexity enables the proposed degradation model to be incorporated in
different battery optimization problems and guarantees the solution quality. We
provide a subgradient algorithm to solve the problem. A case study on PJM
regulation market demonstrates the effectiveness of the proposed degradation
model in maximizing the battery operating profits as well as extending its
lifetime. | [
1,
0,
1,
0,
0,
0
] |
Title: Demonstration of cascaded modulator-chicane micro-bunching of a relativistic electron beam,
Abstract: We present results of an experiment showing the first successful
demonstration of a cascaded micro-bunching scheme. Two modulator-chicane
pre-bunchers arranged in series and a high power mid-IR laser seed are used to
modulate a 52 MeV electron beam into a train of sharp microbunches phase-locked
to the external drive laser. This configuration makes it possible to increase the fraction
of electrons trapped in a strongly tapered inverse free electron laser (IFEL)
undulator to 96\%, with up to 78\% of the particles accelerated to the final
design energy yielding a significant improvement compared to the classical
single buncher scheme. These results represent a critical advance in
laser-based longitudinal phase space manipulations and find application both in
high gradient advanced acceleration as well as in high peak and average power
coherent radiation sources. | [
0,
1,
0,
0,
0,
0
] |
Title: Sobczyk's simplicial calculus does not have a proper foundation,
Abstract: The pseudoscalars in Garret Sobczyk's paper \emph{Simplicial Calculus with
Geometric Algebra} are not well defined. Therefore his calculus does not have a
proper foundation. | [
0,
0,
1,
0,
0,
0
] |
Title: Reexamination of Tolman's law and the Gibbs adsorption equation for curved interfaces,
Abstract: The influence of the surface curvature on the surface tension of small
droplets in equilibrium with a surrounding vapour, or small bubbles in
equilibrium with a surrounding liquid, can be expanded as $\gamma(R) = \gamma_0
+ c_1\gamma_0/R + O(1/R^2)$, where $R = R_\gamma$ is the radius of the surface
of tension and $\gamma_0$ is the surface tension of the planar interface,
corresponding to zero curvature. According to Tolman's law, the first-order
coefficient in this expansion is assumed to be related to the planar limit
$\delta_0$ of the Tolman length, i.e., the difference $\delta = R_\rho -
R_\gamma$ between the equimolar radius and the radius of the surface of
tension, by $c_1 = -2\delta_0$.
We show here that the deduction of Tolman's law from interfacial
thermodynamics relies on an inaccurate application of the Gibbs adsorption
equation to dispersed phases (droplets or bubbles). A revision of the
underlying theory reveals that the adsorption equation needs to be employed in
an alternative manner to that suggested by Tolman. Accordingly, we develop a
generalized Gibbs adsorption equation which consistently takes the size
dependence of interfacial properties into account, and show that from this
equation, a relation between the Tolman length and the influence of the size of
the dispersed phase on the surface tension cannot be deduced, invalidating the
argument which was put forward by Tolman [J. Chem. Phys. 17 (1949) 333]. | [
0,
1,
0,
0,
0,
0
] |
Title: Dual Supervised Learning,
Abstract: Many supervised learning tasks emerge in dual forms, e.g.,
English-to-French translation vs. French-to-English translation, speech
recognition vs. text to speech, and image classification vs. image generation.
Two dual tasks have intrinsic connections with each other due to the
probabilistic correlation between their models. This connection is, however,
not effectively utilized today, since people usually train the models of two
dual tasks separately and independently. In this work, we propose training the
models of two dual tasks simultaneously, and explicitly exploiting the
probabilistic correlation between them to regularize the training process. For
ease of reference, we call the proposed approach \emph{dual supervised
learning}. We demonstrate that dual supervised learning can improve the
practical performances of both tasks, for various applications including
machine translation, image processing, and sentiment analysis. | [
1,
0,
0,
1,
0,
0
] |
Title: Deictic Image Maps: An Abstraction For Learning Pose Invariant Manipulation Policies,
Abstract: In applications of deep reinforcement learning to robotics, it is often the
case that we want to learn pose invariant policies: policies that are invariant
to changes in the position and orientation of objects in the world. For
example, consider a peg-in-hole insertion task. If the agent learns to insert a
peg into one hole, we would like that policy to generalize to holes presented
in different poses. Unfortunately, this is a challenge using conventional
methods. This paper proposes a novel state and action abstraction that is
invariant to pose shifts called \textit{deictic image maps} that can be used
with deep reinforcement learning. We provide broad conditions under which
optimal abstract policies are optimal for the underlying system. Finally, we
show that the method can help solve challenging robotic manipulation problems. | [
1,
0,
0,
0,
0,
0
] |
Title: Pitfalls of Graph Neural Network Evaluation,
Abstract: Semi-supervised node classification in graphs is a fundamental problem in
graph mining, and the recently proposed graph neural networks (GNNs) have
achieved unparalleled results on this task. Due to their massive success, GNNs
have attracted a lot of attention, and many novel architectures have been put
forward. In this paper we show that existing evaluation strategies for GNN
models have serious shortcomings. We show that using the same
train/validation/test splits of the same datasets, as well as making
significant changes to the training procedure (e.g. early stopping criteria)
precludes a fair comparison of different architectures. We perform a thorough
empirical evaluation of four prominent GNN models and show that considering
different splits of the data leads to dramatically different rankings of
models. Even more importantly, our findings suggest that simpler GNN
architectures are able to outperform the more sophisticated ones if the
hyperparameters and the training procedure are tuned fairly for all models. | [
1,
0,
0,
0,
0,
0
] |
Title: An Information Matrix Approach for State Secrecy,
Abstract: This paper studies the problem of remote state estimation in the presence of
a passive eavesdropper. A sensor measures a linear plant's state and transmits
it to an authorized user over a packet-dropping channel, which is susceptible
to eavesdropping. Our goal is to design a coding scheme such that the
eavesdropper cannot infer the plant's current state, while the user
successfully decodes the sent messages. We employ a novel class of codes,
termed State-Secrecy Codes, which are fast and efficient for dynamical systems.
They apply linear time-varying transformations to the current and past states
received by the user. In this way, they force the eavesdropper's information
matrix to decrease with asymptotically the same rate as in the open-loop
prediction case, i.e. when the eavesdropper misses all messages. As a result,
the eavesdropper's minimum mean square error (mmse) for the unstable states
grows unbounded, while the respective error for the stable states converges to
the open-loop prediction one. These secrecy guarantees are achieved under
minimal conditions, which require that, at least once, the user receives the
corresponding packet while the eavesdropper fails to intercept it. Meanwhile,
the user's estimation performance remains optimal. The theoretical results are
illustrated in simulations. | [
1,
0,
0,
0,
0,
0
] |
Title: Time-reversed magnetically controlled perturbation (TRMCP) optical focusing inside scattering media,
Abstract: Manipulating and focusing light deep inside biological tissue and tissue-like
complex media has been desired for long yet considered challenging. One
feasible strategy is through optical wavefront engineering, where the optical
scattering-induced phase distortions are time reversed or pre-compensated so
that photons travelling along different optical paths interfere constructively at
the targeted position within a scattering medium. To define the targeted
position, an internal guidestar is needed to guide or provide feedback for
wavefront engineering. It could be injected or embedded probes such as
fluorescent or nonlinear microspheres, ultrasonic modulation, as well as
absorption perturbation. Here we propose to use a magnetically controlled
optical absorbing microsphere as the internal guidestar. Using a digital
optical phase conjugation system, we obtained sharp optical focusing within
scattering media through time-reversing the scattered light perturbed by the
magnetic microsphere. Since the object is magnetically controlled, dynamic
optical focusing is allowed with a relatively large field-of-view by scanning
the magnetic field externally. Moreover, the magnetic microsphere can be
packaged with an organic membrane, using biological or chemical means to serve
as a carrier. Therefore the technique may find particular applications for
enhanced targeted drug delivery, and imaging and photoablation of angiogenic
vessels in tumours. | [
0,
1,
0,
0,
0,
0
] |
Title: Herschel survey and modelling of externally-illuminated photoevaporating protoplanetary disks,
Abstract: Protoplanetary disks undergo substantial mass-loss by photoevaporation, a
mechanism which is crucial to their dynamical evolution. However, the processes
regulating the gas energetics have not been well constrained by observations so
far. We aim at studying the processes involved in disk photoevaporation when it
is driven by far-UV photons. We present a unique Herschel survey and new ALMA
observations of four externally-illuminated photoevaporating disks (a.k.a.
proplyds). For the analysis of these data, we developed a 1D model of the
photodissociation region (PDR) of a proplyd, based on the Meudon PDR code and
computed the far infrared line emission. We successfully reproduce most of the
observations and derive key physical parameters, i.e. densities at the disk
surface of about $10^{6}$ cm$^{-3}$ and local gas temperatures of about 1000 K.
Our modelling suggests that all studied disks are found in a transitional
regime resulting from the interplay between several heating and cooling
processes that we identify. These differ from those dominating in classical
PDRs, i.e. grain photo-electric effect and cooling by [OI] and [CII] FIR lines.
This energetic regime is associated with an equilibrium dynamical point of the
photoevaporation flow: the mass-loss rate is self-regulated to set the envelope
column density at a value that maintains the temperature at the disk surface
around 1000 K. From our best-fit models, we estimate mass-loss rates - of the
order of $10^{-7}$ $\mathrm{M}_\odot$/yr - that are in agreement with earlier
spectroscopic observation of ionised gas tracers. This holds only if we assume
an evaporation flow launched from the disk surface at sound speed
(supercritical regime). We have identified the energetic regime regulating
FUV-photoevaporation in proplyds. This regime could be implemented into models
of the dynamical evolution of protoplanetary disks. | [
0,
1,
0,
0,
0,
0
] |
Title: Spectral curves for the rogue waves,
Abstract: Here we find the spectral curves, corresponding to the known rational or
quasi-rational solutions of AKNS hierarchy equations, ultimately connected with
the modeling of the rogue waves events in the optical waveguides and in
hydrodynamics. We also determine spectral curves for the multi-phase
trigonometric, hyperbolic and elliptic solutions for the same hierarchy. It
seems that the nature of the related spectral curves has not been sufficiently
discussed in the existing literature. | [
0,
1,
1,
0,
0,
0
] |
Title: Volume functional of compact manifolds with a prescribed boundary metric,
Abstract: We prove that a critical metric of the volume functional on a
four-dimensional compact manifold with boundary satisfying a second-order
vanishing condition on the Weyl tensor must be isometric to a geodesic ball in
a simply connected space form $\mathbb{R}^{4}$, $\mathbb{H}^{4}$ or
$\mathbb{S}^{4}.$ Moreover, we provide an integral curvature estimate involving
the Yamabe constant for critical metrics of the volume functional, which allows
us to get a rigidity result for such critical metrics on four-dimensional
manifolds. | [
0,
0,
1,
0,
0,
0
] |
Title: Deep Multitask Learning for Semantic Dependency Parsing,
Abstract: We present a deep neural architecture that parses sentences into three
semantic dependency graph formalisms. By using efficient, nearly arc-factored
inference and a bidirectional-LSTM composed with a multi-layer perceptron, our
base system is able to significantly improve the state of the art for semantic
dependency parsing, without using hand-engineered features or syntax. We then
explore two multitask learning approaches---one that shares parameters across
formalisms, and one that uses higher-order structures to predict the graphs
jointly. We find that both approaches improve performance across formalisms on
average, achieving a new state of the art. Our code is open-source and
available at this https URL. | [
1,
0,
0,
0,
0,
0
] |
Title: Chentsov's theorem for exponential families,
Abstract: Chentsov's theorem characterizes the Fisher information metric on statistical
models as essentially the only Riemannian metric that is invariant under
sufficient statistics. This implies that each statistical model is naturally
equipped with a geometry, so Chentsov's theorem explains why many statistical
properties can be described in geometric terms. However, despite being one of
the foundational theorems of statistics, Chentsov's theorem has only been
proved previously in very restricted settings or under relatively strong
regularity and invariance assumptions. We therefore prove a version of this
theorem for the important case of exponential families. In particular, we
characterise the Fisher information metric as the only Riemannian metric (up to
rescaling) on an exponential family and its derived families that is invariant
under independent and identically distributed extensions and canonical
sufficient statistics. Our approach is based on the central limit theorem, so
it gives a unified proof for both discrete and continuous exponential families,
and it is less technical than previous approaches. | [
1,
0,
1,
1,
0,
0
] |
Title: Dark trions and biexcitons in WS2 and WSe2 made bright by e-e scattering,
Abstract: The direct band gap character and large spin-orbit splitting of the valence
band edges (at the K and K' valleys) in monolayer transition metal
dichalcogenides have put these two-dimensional materials under the spot-light
of intense experimental and theoretical studies. In particular, for Tungsten
dichalcogenides it has been found that the sign of spin splitting of conduction
band edges makes ground state excitons radiatively inactive (dark) due to spin
and momentum mismatch between the constituent electron and hole. One might
similarly assume that the ground states of charged excitons and biexcitons in
these monolayers are also dark. Here, we show that the intervalley
K$\leftrightarrows$K' electron-electron scattering mixes bright and dark states
of these complexes, and estimate the radiative lifetimes in the ground states
of these "semi-dark" trions and biexcitons to be ~ 10ps, and analyse how these
complexes appear in the temperature-dependent photoluminescence spectra of WS2
and WSe2 monolayers. | [
0,
1,
0,
0,
0,
0
] |
Title: Shannon's entropy and its Generalizations towards Statistics, Reliability and Information Science during 1948-2018,
Abstract: Starting from the pioneering works of Shannon and Weiner in 1948, a plethora
of works have been reported on entropy in different directions. Entropy-related
review work in the direction of statistics, reliability and information
science, to the best of our knowledge, has not been reported so far. Here we
have tried to collect all possible works in this direction during the period
1948-2018 so that people interested in entropy, especially new researchers,
can benefit from it. | [
0,
0,
0,
1,
0,
0
] |
Title: Multi-Entity Dependence Learning with Rich Context via Conditional Variational Auto-encoder,
Abstract: Multi-Entity Dependence Learning (MEDL) explores conditional correlations
among multiple entities. The availability of rich contextual information
requires a nimble learning scheme that tightly integrates with deep neural
networks and has the ability to capture correlation structures among
exponentially many outcomes. We propose MEDL_CVAE, which encodes a conditional
multivariate distribution as a generating process. As a result, the variational
lower bound of the joint likelihood can be optimized via a conditional
variational auto-encoder and trained end-to-end on GPUs. Our MEDL_CVAE was
motivated by two real-world applications in computational sustainability: one
studies the spatial correlation among multiple bird species using the eBird
data and the other models multi-dimensional landscape composition and human
footprint in the Amazon rainforest with satellite images. We show that
MEDL_CVAE captures rich dependency structures, scales better than previous
methods, and further improves on the joint likelihood taking advantage of very
large datasets that are beyond the capacity of previous methods. | [
1,
0,
0,
1,
0,
0
] |
Title: A recognition algorithm for simple-triangle graphs,
Abstract: A simple-triangle graph is the intersection graph of triangles that are
defined by a point on a horizontal line and an interval on another horizontal
line. The time complexity of the recognition problem for simple-triangle graphs
was a longstanding open problem, which was recently settled. This paper
provides a new recognition algorithm for simple-triangle graphs to improve the
time bound from $O(n^2 \overline{m})$ to $O(nm)$, where $n$, $m$, and
$\overline{m}$ are the number of vertices, edges, and non-edges of the graph,
respectively. The algorithm uses the vertex ordering characterization that a
graph is a simple-triangle graph if and only if there is a linear ordering of
the vertices containing both an alternating orientation of the graph and a
transitive orientation of the complement of the graph. We also show, as a
byproduct, that an alternating orientation can be obtained in $O(nm)$ time for
cocomparability graphs, and it is NP-complete to decide whether a graph has an
orientation that is alternating and acyclic. | [
1,
0,
0,
0,
0,
0
] |
Title: A normalized gradient flow method with attractive-repulsive splitting for computing ground states of Bose-Einstein condensates with higher-order interaction,
Abstract: In this paper, we generalize the normalized gradient flow method to compute
the ground states of Bose-Einstein condensates (BEC) with higher order
interactions (HOI), which is modelled via the modified Gross-Pitaevskii
equation (MGPE). Schemes constructed in naive ways suffer from severe stability
problems due to the high restrictions on time steps. To build an efficient and
stable scheme, we split the HOI term into two parts with each part treated
separately. The part corresponding to a repulsive/positive energy is treated
semi-implicitly while the one corresponding to an attractive/negative energy is
treated fully explicitly. Based on the splitting, we construct the
BEFD-splitting and BESP-splitting schemes. A variety of numerical experiments
shows that the splitting will improve the stability of the schemes
significantly. Besides, we will show that the methods can be applied to
multidimensional problems and to the computation of the first excited state as
well. | [
0,
1,
0,
0,
0,
0
] |
Title: A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons,
Abstract: A conservative scheme has been formulated and verified for gyrokinetic
particle simulations of electromagnetic waves and instabilities in magnetized
plasmas. An electron continuity equation derived from drift kinetic equation is
used to time advance electron density perturbation by using the perturbed
mechanical flow calculated from the parallel vector potential, and the parallel
vector potential is solved by using the perturbed canonical flow from the
perturbed distribution function. In gyrokinetic particle simulations using this
new scheme, shear Alfvén wave dispersion relation in shearless slab and
continuum damping in sheared cylinder have been recovered. The new scheme
overcomes the stringent requirement of the conventional perturbative simulation
method that the perpendicular grid size needs to be as small as the electron
collisionless skin depth even for long-wavelength Alfvén waves. The new
scheme also avoids the problem in the conventional method that an unphysically
large parallel electric field arises due to the inconsistency between
electrostatic potential calculated from the perturbed density and vector
potential calculated from the perturbed canonical flow. Finally, the
gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have
superior numerical properties compared with the fluid simulations, which suffer
from numerical difficulties associated with singular mode structures. | [
0,
1,
0,
0,
0,
0
] |
Title: Corrupt Bandits for Preserving Local Privacy,
Abstract: We study a variant of the stochastic multi-armed bandit (MAB) problem in
which the rewards are corrupted. In this framework, motivated by privacy
preservation in online recommender systems, the goal is to maximize the sum of
the (unobserved) rewards, based on the observation of transformation of these
rewards through a stochastic corruption process with known parameters. We
provide a lower bound on the expected regret of any bandit algorithm in this
corrupted setting. We devise a frequentist algorithm, KLUCB-CF, and a Bayesian
algorithm, TS-CF and give upper bounds on their regret. We also provide the
appropriate corruption parameters to guarantee a desired level of local privacy
and analyze how this impacts the regret. Finally, we present some experimental
results that confirm our analysis. | [
1,
0,
0,
1,
0,
0
] |
Title: Deep Reinforcement Learning based Optimal Control of Hot Water Systems,
Abstract: Energy consumption for hot water production is a major draw in high
efficiency buildings. Optimizing this has typically been approached from a
thermodynamics perspective, decoupled from occupant influence. Furthermore,
optimization usually presupposes existence of a detailed dynamics model for the
hot water system. These assumptions lead to suboptimal energy efficiency in the
real world. In this paper, we present a novel reinforcement learning based
methodology which optimizes hot water production. The proposed methodology is
completely generalizable, and does not require an offline step or human domain
knowledge to build a model for the hot water vessel or the heating element.
Occupant preferences too are learnt on the fly. The proposed system is applied
to a set of 32 houses in the Netherlands where it reduces energy consumption
for hot water production by roughly 20% with no loss of occupant comfort.
Extrapolating, this translates to absolute savings of roughly 200 kWh for a
single household on an annual basis. This performance can be replicated to any
domestic hot water system and optimization objective, given that the fairly
minimal requirements on sensor data are met. With millions of hot water systems
operational worldwide, the proposed framework has the potential to reduce
energy consumption in existing and new systems on a multi Gigawatt-hour scale
in the years to come. | [
0,
0,
0,
1,
0,
0
] |
Title: A Model-Based Fuzzy Control Approach to Achieving Adaptation with Contextual Uncertainties,
Abstract: Self-adaptive system (SAS) is capable of adjusting its behavior in response
to meaningful changes in the operational context and itself. Due to the
inherent volatility of the open and changeable environment in which SAS is
embedded, the ability of adaptation is highly demanded by many
software-intensive systems. Two concerns, i.e., the requirements uncertainty
and the context uncertainty are most important among others. An essential issue
to be addressed is how to dynamically adapt non-functional requirements (NFRs)
and task configurations of SASs with context uncertainty. In this paper, we
propose a model-based fuzzy control approach that is underpinned by the
feedforward-feedback control mechanism. This approach identifies and represents
NFR uncertainties, task uncertainties and context uncertainties with linguistic
variables, and then designs an inference structure and rules for the fuzzy
controller based on the relations between the requirements model and the
context model. The adaptation of NFRs and task configurations is achieved
through fuzzification, inference, defuzzification and readaptation. Our
approach is demonstrated with a mobile computing application and is evaluated
through a series of simulation experiments. | [
1,
0,
0,
0,
0,
0
] |
Title: Particle Identification with the TOP and ARICH detectors at Belle II,
Abstract: Particle identification at the Belle II experiment will be provided by two
ring imaging Cherenkov devices, the time of propagation counters in the central
region and the proximity focusing RICH with aerogel radiator in the forward
end-cap region. The key features of these two detectors, the performance
studies, and the construction progress are presented. | [
0,
1,
0,
0,
0,
0
] |
Title: End-to-end Learning of Deterministic Decision Trees,
Abstract: Conventional decision trees have a number of favorable properties, including
interpretability, a small computational footprint and the ability to learn from
little training data. However, they lack a key quality that has helped fuel the
deep learning revolution: that of being end-to-end trainable and of learning from
scratch those features that best allow solving a given supervised learning
problem. Recent work (Kontschieder 2015) has addressed this deficit, but at the
cost of losing a main attractive trait of decision trees: the fact that each
sample is routed along a small subset of tree nodes only. We here propose a
model and Expectation-Maximization training scheme for decision trees that are
fully probabilistic at train time, but after a deterministic annealing process
become deterministic at test time. We also analyze the learned oblique split
parameters on image datasets and show that Neural Networks can be trained at
each split node. In summary, we present the first end-to-end learning scheme
for deterministic decision trees and present results on par with or superior to
published standard oblique decision tree algorithms. | [
1,
0,
0,
1,
0,
0
] |
Title: Conceptualization of Object Compositions Using Persistent Homology,
Abstract: A topological shape analysis is proposed and utilized to learn concepts that
reflect shape commonalities. Our approach is two-fold: i) a spatial topology
analysis of point cloud segment constellations within objects. Therein
constellations are decomposed and described in a hierarchical manner - from
single segments to segment groups until a single group reflects an entire
object. ii) a topology analysis of the description space in which segment
decompositions are exposed in. Inspired by Persistent Homology, hidden groups
of shape commonalities are revealed from object segment decompositions.
Experiments show that extracted persistent groups of commonalities can
represent semantically meaningful shape concepts. We also show the
generalization capability of the proposed approach considering samples of
external datasets. | [
1,
0,
0,
0,
0,
0
] |
Title: Thermodynamics of Spin-1/2 Kagomé Heisenberg Antiferromagnet: Algebraic Paramagnetic Liquid and Finite-Temperature Phase Diagram,
Abstract: Quantum fluctuations from frustration can trigger quantum spin liquids (QSLs)
at zero temperature. However, it is unclear how thermal fluctuations affect a
QSL. We employ state-of-the-art tensor network-based methods to explore the
ground state and thermodynamic properties of the spin-1/2 kagome Heisenberg
antiferromagnet (KHA). Its ground state is shown to be consistent with a
gapless QSL by observing the absence of zero-magnetization plateau as well as
the algebraic behaviors of susceptibility and specific heat at low
temperatures, respectively. We show that there exists an \textit{algebraic
paramagnetic liquid} (APL) that possesses both the paramagnetic properties and
the algebraic behaviors inherited from the QSL. The APL is induced under the
interplay between quantum fluctuations from geometrical frustration and thermal
fluctuations. By studying the temperature-dependent behaviors of specific heat
and magnetic susceptibility, a finite-temperature phase diagram in a magnetic
field is suggested, where various phases are identified. The present study
provides useful insight into the thermodynamic properties of the spin-1/2 KHA with
or without a magnetic field and is helpful for relevant experimental studies. | [
0,
1,
0,
0,
0,
0
] |
Title: Massive MIMO 5G Cellular Networks: mm-wave vs. μ-wave Frequencies,
Abstract: Enhanced mobile broadband (eMBB) is one of the key use-cases for the
development of the new standard 5G New Radio for the next generation of mobile
wireless networks. Large-scale antenna arrays, a.k.a. Massive MIMO, the usage
of carrier frequencies in the range 10-100 GHz, the so-called millimeter wave
(mm-wave) band, and the network densification with the introduction of
small-sized cells are the three technologies that will permit implementing eMBB
services and realizing the Gbit/s mobile wireless experience. This paper is
focused on the massive MIMO technology; initially conceived for conventional
cellular frequencies in the sub-6 GHz range (\mu-wave), the massive MIMO
concept has been then progressively extended to the case in which mm-wave
frequencies are used. However, due to different propagation mechanisms in urban
scenarios, the resulting MIMO channel models at \mu-wave and mm-wave are
radically different. Six key basic differences are pinpointed in this paper,
along with the implications that they have on the architecture and algorithms
of the communication transceivers and on the attainable performance in terms of
reliability and multiplexing capabilities. | [
1,
0,
0,
0,
0,
0
] |
Title: Two-Person Zero-Sum Games with Unbounded Payoff Functions and Uncertain Expected Payoffs,
Abstract: This paper provides sufficient conditions for the existence of values and
solutions for two-person zero-sum one-step games with possibly noncompact
action sets for both players and possibly unbounded payoff functions, which may
be neither convex nor concave. For such games payoffs may not be defined for
some pairs of strategies. In addition to the existence of values and solutions,
this paper investigates continuity properties of the value functions and
solution multifunctions for families of games with possibly noncompact action
sets and unbounded payoff functions, when action sets and payoffs depend on a
parameter. | [
0,
0,
1,
0,
0,
0
] |
Title: Compact arrangement for femtosecond laser induced generation of broadband hard x-ray pulses,
Abstract: We present a simple apparatus for femtosecond laser induced generation of
X-rays. The apparatus consists of a vacuum chamber containing an off-axis
parabolic focusing mirror, a reel system, a debris protection setup, a quartz
window for the incoming laser beam, and an X-ray window. Before entering the
vacuum chamber, the femtosecond laser is expanded with an all reflective
telescope design to minimize laser intensity losses and pulse broadening while
allowing for focusing as well as peak intensity optimization. The laser pulse
duration was characterized by second-harmonic generation frequency resolved
optical gating. A high spatial resolution knife-edge technique was implemented
to characterize the beam size at the focus of the X-ray generation apparatus.
We have characterized x-ray spectra obtained with three different samples:
titanium, iron:chromium alloy, and copper. In all three cases, the femtosecond
laser generated X-rays give spectral lines consistent with literature reports.
We present a rms amplitude analysis of the generated X-ray pulses, and provide
an upper bound for the duration of the X-ray pulses. | [
0,
1,
0,
0,
0,
0
] |
Title: Hierarchical Summarization of Metric Changes,
Abstract: We study changes in metrics that are defined on a cartesian product of trees.
Such metrics occur naturally in many practical applications, where a global
metric (such as revenue) can be broken down along several hierarchical
dimensions (such as location, gender, etc).
Given a change in such a metric, our goal is to identify a small set of
non-overlapping data segments that account for the change. An organization
interested in improving the metric can then focus their attention on these data
segments.
Our key contribution is an algorithm that mimics the operation of a
hierarchical organization of analysts. The algorithm has been successfully
applied, for example within Google Adwords to help advertisers triage the
performance of their advertising campaigns.
We show that the algorithm is optimal for two dimensions, and has an
approximation ratio $\log^{d-2}(n+1)$ for $d \geq 3$ dimensions, where $n$ is
the number of input data segments. For the Adwords application, we can show
that our algorithm is in fact a $2$-approximation.
Mathematically, we identify a certain data pattern called a \emph{conflict}
that both guides the design of the algorithm, and plays a central role in the
hardness results. We use these conflicts to both derive a lower bound of
$1.144^{d-2}$ (again $d\geq3$) for our algorithm, and to show that the problem
is NP-hard, justifying the focus on approximation. | [
1,
0,
0,
0,
0,
0
] |
Title: Multi-Task Learning Using Neighborhood Kernels,
Abstract: This paper introduces a new and effective algorithm for learning kernels in a
Multi-Task Learning (MTL) setting. Although we consider an MTL scenario here,
our approach can be easily applied to standard single task learning, as well.
As shown by our empirical results, our algorithm consistently outperforms the
traditional kernel learning algorithms such as uniform combination solution,
convex combinations of base kernels as well as some kernel alignment-based
models, which have been proven to give promising results in the past. We
present a Rademacher complexity bound based on which a new Multi-Task Multiple
Kernel Learning (MT-MKL) model is derived. In particular, we propose a Support
Vector Machine-regularized model in which, for each task, an optimal kernel is
learned based on a neighborhood-defining kernel that is not restricted to be
positive semi-definite. Comparative experimental results are showcased that
underline the merits of our neighborhood-defining framework in both
classification and regression problems. | [
1,
0,
0,
1,
0,
0
] |
Title: Optimized Bucket Wheel Design for Asteroid Excavation,
Abstract: Current spacecraft need to launch with all of their required fuel for travel.
This limits the system performance, payload capacity, and mission flexibility.
One compelling alternative is to perform In-Situ Resource Utilization (ISRU) by
extracting fuel from small bodies in local space such as asteroids or small
satellites. Compared to the Moon or Mars, the microgravity on an asteroid
demands a fraction of the energy for digging and accessing hydrated regolith
just below the surface. Previous asteroid excavation efforts have focused on
discrete capture events (an extension of sampling technology) or whole-asteroid
capture and processing. This paper proposes an optimized bucket wheel design
for surface excavation of an asteroid or small-body. Asteroid regolith is
excavated and water extracted for use as rocket propellant. Our initial study
focuses on system design, bucket wheel mechanisms, and capture dynamics applied
to ponded materials known to exist on asteroids like Itokawa and Eros and small
satellites like Phobos and Deimos. For initial evaluation of
material-spacecraft dynamics and mechanics, we assume lunar-like regolith for
bulk density, particle size and cohesion. We shall present our estimates for
the energy balance of excavation and processing versus fuel gained.
Conventional electrolysis of water is used to produce hydrogen and oxygen. It
is compared with steam for propulsion and both show significant delta-v. We
show that a return trip from Deimos to Earth is possible for a 12 kg craft
using ISRU processed fuel. | [
1,
1,
0,
0,
0,
0
] |
Title: Fraction of the X-ray selected AGNs with optical emission lines in galaxy groups,
Abstract: Compared with numerous X-ray dominant active galactic nuclei (AGNs) without
emission-line signatures in their optical spectra, the X-ray selected AGNs with
optical emission lines are probably still in the high-accretion phase of black
hole growth. This paper presents an investigation on the fraction of these
X-ray detected AGNs with optical emission-line spectra in 198 galaxy groups at
$z<1$ in a rest frame 0.1-2.4 keV luminosity range 41.3 <log(L_X/erg s-1) <
44.1 within the COSMOS field, as well as its variations with redshift and group
richness. For various selection criteria of member galaxies, the numbers of
galaxies and the AGNs with optical emission lines in each galaxy group are
obtained. It is found that, among the total of 198 X-ray groups, 27 AGNs are
detected in 26 groups. The AGN fraction is on average less than $4.6 (\pm 1.2)\%$
for individual groups hosting at least one AGN. The corrected overall AGN
fraction for whole group sample is less than $0.98 (\pm 0.11) \%$. The
normalized locations of group AGNs show that 15 AGNs are found to be located in
group centers, including all 6 low-luminosity group AGNs. A weak rising
tendency with $z$ is found: the overall AGN fraction is 0.30-0.43% for the groups
at $z<0.5$, and 0.55-0.64% at 0.5 < z < 1.0. For the X-ray groups at $z>0.5$,
most member AGNs are X-ray bright but optically dull, which results in lower AGN
fractions at higher redshifts. The AGN fraction in isolated fields also
exhibits a rising trend with redshift, and the slope is consistent with that in
groups. The environment of galaxy groups seems to make no difference in
detection probability of the AGNs with emission lines. Additionally, larger
AGN fractions are found in poorer groups, which implies that the AGNs in poorer
groups might still be in the high-accretion phase, whereas the AGN population
in rich clusters is mostly in the low-accretion, X-ray dominant phase. | [
0,
1,
0,
0,
0,
0
] |
Title: Towards Automatic Learning of Heuristics for Mechanical Transformations of Procedural Code,
Abstract: The current trends in next-generation exascale systems go towards integrating
a wide range of specialized (co-)processors into traditional supercomputers.
Due to the efficiency of heterogeneous systems in terms of Watts and FLOPS per
surface unit, opening the access of heterogeneous platforms to a wider range of
users is an important problem to be tackled. However, heterogeneous platforms
limit the portability of the applications and increase development complexity
due to the programming skills required. Program transformation can help make
programming heterogeneous systems easier by defining a step-wise transformation
process that translates a given initial code into a semantically equivalent
final code, but adapted to a specific platform. Program transformation systems
require the definition of efficient transformation strategies to tackle the
combinatorial problem that emerges due to the large set of transformations
applicable at each step of the process. In this paper we propose a machine
learning-based approach to learn heuristics to define program transformation
strategies. Our approach proposes a novel combination of reinforcement learning
and classification methods to efficiently tackle the problems inherent to this
type of systems. Preliminary results demonstrate the suitability of this
approach. | [
1,
0,
0,
0,
0,
0
] |
Title: Optimized Deformed Laplacian for Spectrum-based Community Detection in Sparse Heterogeneous Graphs,
Abstract: Spectral clustering is one of the most popular, yet still incompletely
understood, methods for community detection on graphs. In this article we study
spectral clustering based on the deformed Laplacian matrix $D-rA$, for sparse
heterogeneous graphs (following a two-class degree-corrected stochastic block
model). For a specific value $r = \zeta$, we show that, unlike competing
methods such as the Bethe Hessian or non-backtracking operator approaches,
clustering is insensitive to the graph heterogeneity. Based on heuristic
arguments, we study the behavior of the informative eigenvector of $D-\zeta A$
and, as a result, we accurately predict the clustering accuracy. Via extensive
simulations and application to real networks, the resulting clustering
algorithm is validated and observed to systematically outperform
state-of-the-art competing methods. | [
1,
0,
0,
1,
0,
0
] |
Title: High dimensional deformed rectangular matrices with applications in matrix denoising,
Abstract: We consider the recovery of a low rank $M \times N$ matrix $S$ from its noisy
observation $\tilde{S}$ in two different regimes. Under the assumption that $M$
is comparable to $N$, we propose two consistent estimators for $S$. Our
analysis relies on the local behavior of the large dimensional rectangular
matrices with finite rank perturbation. We also derive the convergent limits
and rates for the singular values and vectors of such matrices. | [
0,
0,
1,
1,
0,
0
] |
Title: Validation of the 3-under-2 principle of cell wall growth in Gram-positive bacteria by simulation of a simple coarse-grained model,
Abstract: The aim of this work is to propose a first coarse-grained model of Bacillus
subtilis cell wall, handling explicitly the existence of multiple layers of
peptidoglycans. In this first work, we aim at the validation of the recently
proposed "three under two" principle. | [
0,
1,
0,
0,
0,
0
] |
Title: Control of Gene Regulatory Networks with Noisy Measurements and Uncertain Inputs,
Abstract: This paper is concerned with the problem of stochastic control of gene
regulatory networks (GRNs) observed indirectly through noisy measurements and
with uncertainty in the intervention inputs. The partial observability of the
gene states and uncertainty in the intervention process are accounted for by
modeling GRNs using the partially-observed Boolean dynamical system (POBDS)
signal model with noisy gene expression measurements. Obtaining the optimal
infinite-horizon control strategy for this problem is not attainable in
general, and we apply reinforcement learning and Gaussian process techniques to
find a near-optimal solution. The POBDS is first transformed to a
directly-observed Markov Decision Process in a continuous belief space, and the
Gaussian process is used for modeling the cost function over the belief and
intervention spaces. Reinforcement learning then is used to learn the cost
function from the available gene expression data. In addition, we employ
sparsification, which enables the control of large partially-observed GRNs. The
performance of the resulting algorithm is studied through a comprehensive set
of numerical experiments using synthetic gene expression data generated from a
melanoma gene regulatory network. | [
0,
0,
0,
1,
0,
0
] |
Title: Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos,
Abstract: Despite rapid advances in face recognition, there remains a clear gap between
the performance of still image-based face recognition and video-based face
recognition, due to the vast difference in visual quality between the domains
and the difficulty of curating diverse large-scale video datasets. This paper
addresses both of those challenges, through an image to video feature-level
domain adaptation approach, to learn discriminative video frame
representations. The framework utilizes large-scale unlabeled video data to
reduce the gap between different domains while transferring discriminative
knowledge from large-scale labeled still images. Given a face recognition
network that is pretrained in the image domain, the adaptation is achieved by
(i) distilling knowledge from the network to a video adaptation network through
feature matching, (ii) performing feature restoration through synthetic data
augmentation and (iii) learning a domain-invariant feature through a domain
adversarial discriminator. We further improve performance through a
discriminator-guided feature fusion that boosts high-quality frames while
eliminating those degraded by video domain-specific factors. Experiments on the
YouTube Faces and IJB-A datasets demonstrate that each module contributes to
our feature-level domain adaptation framework and substantially improves video
face recognition performance to achieve state-of-the-art accuracy. We
demonstrate qualitatively that the network learns to suppress diverse artifacts
in videos such as pose, illumination or occlusion without being explicitly
trained for them. | [
1,
0,
0,
0,
0,
0
] |
Title: Dynamic Curriculum Learning for Imbalanced Data Classification,
Abstract: Human attribute analysis is a challenging task in the field of computer
vision, since the data distribution is largely imbalanced. Common techniques such
as re-sampling and cost-sensitive learning require prior-knowledge to train the
system. To address this problem, we propose a unified framework called Dynamic
Curriculum Learning (DCL) to online adaptively adjust the sampling strategy and
loss learning in a single batch, resulting in better generalization and
discrimination. Inspired by the curriculum learning, DCL consists of two level
curriculum schedulers: (1) sampling scheduler not only manages the data
distribution from imbalanced to balanced but also from easy to hard; (2) loss
scheduler controls the learning importance between classification and metric
learning loss. Learning from these two schedulers, we demonstrate our DCL
framework with the new state-of-the-art performance on the widely used face
attribute dataset CelebA and pedestrian attribute dataset RAP. | [
1,
0,
0,
0,
0,
0
] |
Title: Causal Discovery in the Presence of Measurement Error: Identifiability Conditions,
Abstract: Measurement error in the observed values of the variables can greatly change
the output of various causal discovery methods. This problem has received much
attention in multiple fields, but it is not clear to what extent the causal
model for the measurement-error-free variables can be identified in the
presence of measurement error with unknown variance. In this paper, we study
precise sufficient identifiability conditions for the measurement-error-free
causal model and show what information of the causal model can be recovered
from observed data. In particular, we present two different sets of
identifiability conditions, based on the second-order statistics and
higher-order statistics of the data, respectively. The former was inspired by
the relationship between the generating model of the
measurement-error-contaminated data and the factor analysis model, and the
latter makes use of the identifiability result of the over-complete independent
component analysis problem. | [
1,
0,
0,
1,
0,
0
] |
Title: Elliptic fibrations on covers of the elliptic modular surface of level 5,
Abstract: We consider the K3 surfaces that arise as double covers of the elliptic
modular surface of level 5, $R_{5,5}$. Such surfaces have a natural elliptic
fibration induced by the fibration on $R_{5,5}$. Moreover, they admit several
other elliptic fibrations. We describe such fibrations in terms of linear
systems of curves on $R_{5,5}$. This has a major advantage over other methods
of classification of elliptic fibrations, namely, a simple algorithm that takes
as input equations of linear systems of curves in the projective plane and yields a
Weierstrass equation for each elliptic fibration. We deal in detail with the
cases for which the double cover is branched over the two reducible fibers of
type $I_5$ and for which it is branched over two smooth fibers, giving a
complete list of elliptic fibrations for these two scenarios. | [
0,
0,
1,
0,
0,
0
] |
Title: A wide field-of-view crossed Dragone optical system using the anamorphic aspherical surfaces,
Abstract: A side-fed crossed Dragone telescope provides a wide field-of-view. This type
of a telescope is commonly employed in the measurement of cosmic microwave
background (CMB) polarization, which requires an image-space telecentric
telescope with a large focal plane over broadband coverage. We report the
design of the wide field-of-view crossed Dragone optical system using the
anamorphic aspherical surfaces with correction terms up to the 10th order. We
achieved the Strehl ratio larger than 0.95 over 32 by 18 square degrees at 150
GHz. This design is an image-space telecentric and fully diffraction-limited
system below 400 GHz. We discuss the optical performance in the uniformity of
the axially symmetric point spread function and telecentricity over the
field-of-view. We also address the analysis to evaluate the polarization
properties, including the instrumental polarization, extinction rate, and
polarization angle rotation. This work is a part of programs to design a
compact multi-color wide field-of-view telescope for LiteBIRD, which is a next
generation CMB polarization satellite. | [
0,
1,
0,
0,
0,
0
] |
Title: On the variance of internode distance under the multispecies coalescent,
Abstract: We consider the problem of estimating species trees from unrooted gene tree
topologies in the presence of incomplete lineage sorting, a common phenomenon
that creates gene tree heterogeneity in multilocus datasets. One popular class
of reconstruction methods in this setting is based on internode distances, i.e.
the average graph distance between pairs of species across gene trees. While
statistical consistency in the limit of large numbers of loci has been
established in some cases, little is known about the sample complexity of such
methods. Here we make progress on this question by deriving a lower bound on
the worst-case variance of internode distance which depends linearly on the
corresponding graph distance in the species tree. We also discuss some
algorithmic implications. | [
0,
0,
0,
0,
1,
0
] |
Title: Online Human Gesture Recognition using Recurrent Neural Networks and Wearable Sensors,
Abstract: Gestures are a natural communication modality for humans. The ability to
interpret gestures is fundamental for robots aiming to naturally interact with
humans. Wearable sensors are promising to monitor human activity, in particular
the usage of triaxial accelerometers for gesture recognition have been
explored. Despite this, the state of the art presents lack of systems for
reliable online gesture recognition using accelerometer data. The article
proposes SLOTH, an architecture for online gesture recognition, based on a
wearable triaxial accelerometer, a Recurrent Neural Network (RNN) probabilistic
classifier and a procedure for continuous gesture detection, relying on
modelling gesture probabilities, that guarantees (i) good recognition results
in terms of precision and recall, (ii) immediate system reactivity. | [
1,
0,
0,
0,
0,
0
] |
Title: Machine Learning Topological Invariants with Neural Networks,
Abstract: In this Letter we supervisedly train neural networks to distinguish different
topological phases in the context of topological band insulators. After
training with Hamiltonians of one-dimensional insulators with chiral symmetry,
the neural network can predict their topological winding numbers with nearly
100% accuracy, even for Hamiltonians with larger winding numbers that are not
included in the training data. These results show a remarkable success that the
neural network can capture the global and nonlinear topological features of
quantum phases from local inputs. By opening up the neural network, we confirm
that the network does learn the discrete version of the winding number formula.
We also make a couple of remarks regarding the role of the symmetry and the
opposite effect of regularization techniques when applying machine learning to
physical systems. | [
1,
1,
0,
0,
0,
0
] |
Title: Parallel Markov Chain Monte Carlo for the Indian Buffet Process,
Abstract: Indian Buffet Process based models are an elegant way for discovering
underlying features within a data set, but inference in such models can be
slow. Inferring underlying features using Markov chain Monte Carlo either
relies on an uncollapsed representation, which leads to poor mixing, or on a
collapsed representation, which leads to a quadratic increase in computational
complexity. Existing attempts at distributing inference have introduced
additional approximation within the inference procedure. In this paper we
present a novel algorithm to perform asymptotically exact parallel Markov chain
Monte Carlo inference for Indian Buffet Process models. We take advantage of
the fact that the features are conditionally independent under the
beta-Bernoulli process. Because of this conditional independence, we can
partition the features into two parts: one part containing only the finitely
many instantiated features and the other part containing the infinite tail of
uninstantiated features. For the finite partition, parallel inference is simple
given the instantiation of features. But for the infinite tail, performing
uncollapsed MCMC leads to poor mixing and hence we collapse out the features.
The resulting hybrid sampler, while being parallel, produces samples
asymptotically from the true posterior. | [
0,
0,
0,
1,
0,
0
] |
Title: Bootstrapping a Lexicon for Emotional Arousal in Software Engineering,
Abstract: Emotional arousal increases activation and performance but may also lead to
burnout in software development. We present the first version of a Software
Engineering Arousal lexicon (SEA) that is specifically designed to address the
problem of emotional arousal in the software developer ecosystem. SEA is built
using a bootstrapping approach that combines word embedding model trained on
issue-tracking data and manual scoring of items in the lexicon. We show that
our lexicon is able to differentiate between issue priorities, which are a
source of emotional activation and thus act as a proxy for arousal. The best
performance is obtained by combining SEA (428 words) with a previously created
general purpose lexicon by Warriner et al. (13,915 words) and it achieves
Cohen's d effect sizes up to 0.5. | [
1,
0,
0,
0,
0,
0
] |
Title: Early Solar System irradiation quantified by linked vanadium and beryllium isotope variations in meteorites,
Abstract: X-ray emission in young stellar objects (YSOs) is orders of magnitude more
intense than in main sequence stars1,2, suggestive of cosmic ray irradiation of
surrounding accretion disks. Protoplanetary disk irradiation has been detected
around YSOs by HERSCHEL3. In our solar system, short-lived 10Be (half-life =
1.39 My4), which cannot be produced by stellar nucleosynthesis, was discovered
in the oldest solar system solids, the calcium-aluminium-rich inclusions
(CAIs)5. The high 10Be abundance, as well as detection of other irradiation
tracers6,7, suggest 10Be likely originates from cosmic ray irradiation caused
by solar flares8. Nevertheless, the nature of these flares (gradual or
impulsive), the target (gas or dust), and the duration and location of
irradiation remain unknown. Here we use the vanadium isotopic composition,
together with initial 10Be abundance to quantify irradiation conditions in the
early Solar System9. For the initial 10Be abundances recorded in CAIs, 50V
excesses of a few per mil relative to chondrites have been predicted10,11. We
report 50V excesses in CAIs up to 4.4 per mil that co-vary with 10Be abundance.
Their co-variation dictates that excess 50V and 10Be were synthesised through
irradiation of refractory dust. Modelling of the production rate of 50V and
10Be demonstrates that the dust was exposed to solar cosmic rays produced by
gradual flares for less than 300 years at about 0.1 au from the protoSun. | [
0,
1,
0,
0,
0,
0
] |
Title: Two classes of nonlocal Evolution Equations related by a shared Traveling Wave Problem,
Abstract: We consider reaction-diffusion equations and Korteweg-de Vries-Burgers (KdVB)
equations, i.e. scalar conservation laws with diffusive-dispersive
regularization. We review the existence of traveling wave solutions for these
two classes of evolution equations. For classical equations the traveling wave
problem (TWP) for a local KdVB equation can be identified with the TWP for a
reaction-diffusion equation. In this article we study this relationship for
these two classes of evolution equations with nonlocal diffusion/dispersion.
This connection is especially useful, if the TW equation is not studied
directly, but the existence of a TWS is proven using one of the evolution
equations instead. Finally, we present three models from fluid dynamics and
discuss the TWP via its link to associated reaction-diffusion equations. | [
0,
0,
1,
0,
0,
0
] |
Title: HornDroid: Practical and Sound Static Analysis of Android Applications by SMT Solving,
Abstract: We present HornDroid, a new tool for the static analysis of information flow
properties in Android applications. The core idea underlying HornDroid is to
use Horn clauses for soundly abstracting the semantics of Android applications
and to express security properties as a set of proof obligations that are
automatically discharged by an off-the-shelf SMT solver. This approach makes it
possible to fine-tune the analysis in order to achieve a high degree of
precision while still using off-the-shelf verification tools, thereby
leveraging the recent advances in this field. As a matter of fact, HornDroid
outperforms state-of-the-art Android static analysis tools on benchmarks
proposed by the community. Moreover, HornDroid is the first static analysis
tool for Android to come with a formal proof of soundness, which covers the
core of the analysis technique: besides yielding correctness assurances, this
proof allowed us to identify some critical corner-cases that affect the
soundness guarantees provided by some of the previous static analysis tools for
Android. | [
1,
0,
0,
0,
0,
0
] |