title (string, lengths 7-239) | abstract (string, lengths 7-2.76k) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Local Gradient Estimates for Second-Order Nonlinear Elliptic and Parabolic Equations by the Weak Bernstein's Method | In the theory of second-order, nonlinear elliptic and parabolic equations,
obtaining local or global gradient bounds is often a key step for proving the
existence of solutions, but it may be even more useful in many applications, for
example to singular perturbation problems. The classical Bernstein's method is
the well-known tool for obtaining these bounds but, in most cases, it has the
defect of providing only a priori estimates. The "weak Bernstein's method",
based on the theory of viscosity solutions, is an alternative way to prove the
global Lipschitz regularity of solutions together with some estimates, but it
is not so easy to carry out in the case of local bounds. The aim of this paper
is to provide an extension of the "weak Bernstein's method" which allows one to
prove local gradient bounds with reasonable technicalities.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic patterns of knowledge flows across technological domains: empirical results and link prediction | The purpose of this study is to investigate the structure and evolution of
knowledge spillovers across technological domains. Specifically, dynamic
patterns of knowledge flow among 29 technological domains, measured by patent
citations for eight distinct periods, are identified, and link prediction is
tested for its capability to forecast the evolution in these cross-domain
patent networks. The overall success of the predictions using the Katz metric
implies that there is a tendency to generate increased knowledge flows mostly
within the set of previously linked technological domains. This study
contributes to innovation studies by characterizing the structural change and
evolutionary behaviors in dynamic technology networks and by offering the basis
for predicting the emergence of future technological knowledge flows.
| 1 | 1 | 0 | 0 | 0 | 0 |
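The Katz metric used for the link prediction above has a simple closed form, $(I - \beta A)^{-1} - I$, which sums walks of all lengths with geometric decay $\beta$. A minimal sketch, assuming a binary symmetric adjacency matrix of domain-level citations (the toy matrix and $\beta$ are illustrative, not the paper's data):

```python
import numpy as np

def katz_index(A, beta=0.05):
    """Katz similarity for link prediction: sum_{l>=1} beta^l (A^l)_{ij}.

    The closed form (I - beta*A)^{-1} - I converges when
    beta < 1/lambda_max(A).
    """
    n = A.shape[0]
    lam_max = np.max(np.abs(np.linalg.eigvals(A)))
    assert beta < 1.0 / lam_max, "beta must be below 1/spectral radius"
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

# Toy 4-node network of hypothetical technological domains
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
scores = katz_index(A, beta=0.1)
# Rank currently unlinked pairs by score as candidate future knowledge flows
```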
Accurate calculation of oblate spheroidal wave functions | Alternative expressions for calculating the oblate spheroidal radial
functions of both kinds R1ml and R2ml are shown to provide accurate values over
very large parameter ranges using 64 bit arithmetic, even where the traditional
expressions fail. First is the expansion of the product of a radial function
and the angular function of the first kind in a series of products of the
corresponding spherical functions, with the angular coordinate being a free
parameter. Setting the angular coordinate equal to zero leads to accurate
values for R2ml when the radial coordinate xi is larger than 0.01 and l is
somewhat larger than m. Allowing it to vary with increasing l leads to highly
accurate values for R1ml over all parameter ranges. Next is the calculation of
R2ml as an integral of the product of S1ml and a spherical Neumann function
kernel. This is useful for smaller values of xi. Also used is the near equality
of pairs of low-order eigenvalues when the size parameter c is large, which leads
to accurate values for R2ml using neighboring accurate values for R1ml. A
modified method is described that provides accurate values for the necessary
expansion coefficients when c is large and l is near m and traditional methods
fail. A resulting Fortran computer program Oblfcn almost always provides radial
function values with at least 8 accurate decimal digits using 64 bit arithmetic
for m up to at least 1000 with c up to at least 2000 when xi is greater than
0.000001 and c up to at least 5000 when xi is greater than 0.01. Use of 128 bit
arithmetic extends the accuracy to 15 or more digits and extends xi to all
values other than zero. Oblfcn is freely available.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach | Learning cooperative policies for multi-agent systems is often challenged by
partial observability and a lack of coordination. In some settings, the
structure of a problem allows a distributed solution with limited
communication. Here, we consider a scenario where no communication is
available, and instead we learn local policies for all agents that collectively
mimic the solution to a centralized multi-agent static optimization problem.
Our main contribution is an information theoretic framework based on rate
distortion theory which facilitates analysis of how well the resulting fully
decentralized policies are able to reconstruct the optimal solution. Moreover,
this framework provides a natural extension that addresses which nodes an agent
should communicate with to improve the performance of its individual policy.
| 1 | 1 | 1 | 0 | 0 | 0 |
Ion distribution and ablation depth measurements of a fs-ps laser-irradiated solid tin target | The ablation of solid tin surfaces by an 800-nanometer-wavelength laser is
studied for a pulse length range from 500 fs to 4.5 ps and a fluence range
spanning 0.9 to 22 J/cm^2. The ablation depth and volume are obtained employing
a high-numerical-aperture optical microscope, while the ion yield and energy
distributions are obtained from a set of Faraday cups set up under various
angles. We found a slight increase of the ion yield with increasing pulse
length, while the ablation depth slightly decreased. The ablation volume
remained constant as a function of pulse length. The ablation depth follows a
two-region logarithmic dependence on the fluence, in agreement with the
available literature and theory. In the examined fluence range, the ion yield
angular distribution is sharply peaked along the target normal at low fluences
but rapidly broadens with increasing fluence. The total ionization fraction
increases monotonically with fluence to a 5-6% maximum, which is substantially
lower than the typical ionization fractions obtained with nanosecond-pulse
ablation. The angular distribution of the ions does not depend on the laser
pulse length within the measurement uncertainty. These results are of
particular interest for the possible utilization of fs-ps laser systems in
plasma sources of extreme ultraviolet light for nanolithography.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measuring the Hubble constant with Type Ia supernovae as near-infrared standard candles | The most precise local measurements of $H_0$ rely on observations of Type Ia
supernovae (SNe Ia) coupled with Cepheid distances to SN Ia host galaxies.
Recent results have shown tension comparing $H_0$ to the value inferred from
CMB observations assuming $\Lambda$CDM, making it important to check for
potential systematic uncertainties in either approach. To date, precise local
$H_0$ measurements have used SN Ia distances based on optical photometry, with
corrections for light curve shape and colour. Here, we analyse SNe Ia as
standard candles in the near-infrared (NIR), where intrinsic variations in the
supernovae and extinction by dust are both reduced relative to the optical.
From a combined fit to 9 nearby calibrator SNe with host Cepheid distances from
Riess et al. (2016) and 27 SNe in the Hubble flow, we estimate the absolute
peak $J$ magnitude $M_J = -18.524\;\pm\;0.041$ mag and $H_0 = 72.8\;\pm\;1.6$
(statistical) $\pm$ 2.7 (systematic) km s$^{-1}$ Mpc$^{-1}$. The 2.2 $\%$
statistical uncertainty demonstrates that the NIR provides a compelling avenue
to measuring SN Ia distances, and for our sample the intrinsic (unmodeled) peak
$J$ magnitude scatter is just $\sim$0.10 mag, even without light curve shape or
colour corrections. Our results do not vary significantly with different sample
selection criteria, though photometric calibration in the NIR may be a dominant
systematic uncertainty. Our findings suggest that tension in the competing
$H_0$ distance ladders is likely not a result of supernova systematics that
could be expected to vary between optical and NIR wavelengths, like dust
extinction. We anticipate further improvements in $H_0$ with a larger
calibrator sample of SNe Ia with Cepheid distances, more Hubble flow SNe Ia
with NIR light curves, and better use of the full NIR photometric data set
beyond simply the peak $J$-band magnitude.
| 0 | 1 | 0 | 0 | 0 | 0 |
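The distance-ladder arithmetic behind this estimate is compact: the Cepheid-calibrated absolute magnitude turns apparent peak magnitudes of Hubble-flow SNe into luminosity distances, and $H_0 \approx cz/d$ at low redshift. A minimal sketch; $M_J$ is the value quoted in the abstract, while the apparent magnitude and redshift are illustrative inventions:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def H0_from_standard_candle(m_J, M_J, z):
    """Infer H0 from one Hubble-flow SN given apparent peak magnitude m_J,
    calibrated absolute magnitude M_J, and redshift z (low-z approximation)."""
    mu = m_J - M_J                   # distance modulus
    d_mpc = 10 ** (mu / 5.0 - 5.0)   # luminosity distance in Mpc
    return C_KMS * z / d_mpc         # H0 = cz/d for z << 1

# Illustrative (not real) values: gives H0 ~ 70 km/s/Mpc
print(H0_from_standard_candle(m_J=17.0, M_J=-18.524, z=0.03))
```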
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
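The linear scaling rule and warmup scheme described above are easy to state concretely. A minimal sketch: the base batch of 256 and base LR of 0.1 follow the abstract's ResNet-50 setting, while the linear ramp is our reading of "gradual warmup":

```python
def scaled_lr(base_lr, batch_size, base_batch=256, step=0, warmup_steps=0):
    """Linear scaling rule with gradual warmup (a minimal sketch).

    The target LR is base_lr * batch_size / base_batch; during warmup
    the LR ramps linearly from base_lr up to that target.
    """
    target = base_lr * batch_size / base_batch
    if warmup_steps and step < warmup_steps:
        frac = step / float(warmup_steps)
        return base_lr + frac * (target - base_lr)
    return target

# e.g. a minibatch of 8192: target LR is 0.1 * 8192/256 = 3.2, reached
# after a 5-epoch warmup (~781 iterations on ImageNet at this batch size)
lr = scaled_lr(base_lr=0.1, batch_size=8192, step=100, warmup_steps=781)
```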
Universal Construction of Cheater-Identifiable Secret Sharing Against Rushing Cheaters Based on Message Authentication | For conventional secret sharing, if cheaters can submit possibly forged
shares after observing shares of the honest users in the reconstruction phase
then they cannot only disturb the protocol but also only they may reconstruct
the true secret. To overcome the problem, secret sharing scheme with properties
of cheater-identification have been proposed. Existing protocols for
cheater-identifiable secret sharing assumed non-rushing cheaters or honest
majority. In this paper, we remove both conditions simultaneously, and give its
universal construction from any secret sharing scheme. To resolve this end, we
propose the concepts of "individual identification" and "agreed
identification".
| 1 | 0 | 0 | 0 | 0 | 0 |
Regrasp Planning Considering Bipedal Stability Constraints | This paper presents a Center of Mass (CoM) based manipulation and regrasp
planner that implements stability constraints to preserve the robot balance.
The planner provides a graph of IK-feasible, collision-free and stable motion
sequences, constructed using an energy based motion planning algorithm. It
assures that the assembly motions are stable and prevent the robot from falling
while performing dexterous tasks in different situations. Furthermore, the
constraints are also used to perform an RRT-inspired task-related stability
estimation in several simulations. The estimation can be used to select between
single-arm and dual-arm regrasping configurations to achieve more stability and
robustness for a given manipulation task. To validate the planner and the
task-related stability estimations, several tests are performed in simulations
and real-world experiments involving the HRP5P humanoid robot, the 5th
generation of the HRP robot family. The experimental results suggest that the
planner and the task-related stability estimation provide robust behavior for
the humanoid robot while performing regrasp tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Unsaturated deformable porous media flow with phase transition | In the present paper, a continuum model is introduced for fluid flow in a
deformable porous medium, where the fluid may undergo phase transitions.
Typically, such problems arise in modeling liquid-solid phase transformations
in groundwater flows. The system of equations is derived here from the
conservation principles for mass, momentum, and energy and from the
Clausius-Duhem inequality for entropy. It couples the evolution of the
displacement in the matrix material, of the capillary pressure, of the absolute
temperature, and of the phase fraction. Mathematical results are proved under
the additional hypothesis that inertia effects and shear stresses can be
neglected. For the resulting highly nonlinear system of two PDEs, one ODE and
one ordinary differential inclusion with natural initial and boundary
conditions, existence of global in time solutions is proved by means of cut-off
techniques and suitable Moser-type estimates.
| 0 | 0 | 1 | 0 | 0 | 0 |
Active Mini-Batch Sampling using Repulsive Point Processes | The convergence speed of stochastic gradient descent (SGD) can be improved by
actively selecting mini-batches. We explore sampling schemes where similar data
points are less likely to be selected in the same mini-batch. In particular, we
prove that such repulsive sampling schemes lower the variance of the gradient
estimator. This generalizes recent work on using Determinantal Point Processes
(DPPs) for mini-batch diversification (Zhang et al., 2017) to the broader class
of repulsive point processes. We first show that the phenomenon of variance
reduction by diversified sampling generalizes in particular to non-stationary
point processes. We then show that other point processes may be computationally
much more efficient than DPPs. In particular, we propose and investigate
Poisson Disk sampling---frequently encountered in the computer graphics
community---for this task. We show empirically that our approach improves over
standard SGD both in terms of convergence speed as well as final model
performance.
| 0 | 0 | 0 | 1 | 0 | 0 |
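A minimal sketch of Poisson disk sampling for mini-batch selection by dart throwing: a candidate point is rejected if it lies within distance r of an already accepted point, so similar points rarely co-occur in a batch. The feature space, radius, and data here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def poisson_disk_batch(X, batch_size, r, rng=None):
    """Repulsive mini-batch by dart throwing (a minimal sketch).

    With r=0 this reduces to uniform sampling without replacement;
    if r is too large, fewer than batch_size indices may be returned.
    """
    if rng is None:
        rng = np.random.default_rng()
    order = rng.permutation(len(X))
    chosen = []
    for i in order:
        if all(np.linalg.norm(X[i] - X[j]) >= r for j in chosen):
            chosen.append(i)
        if len(chosen) == batch_size:
            break
    return np.array(chosen)

# Hypothetical usage inside an SGD loop
X = np.random.randn(1000, 8)
idx = poisson_disk_batch(X, batch_size=32, r=1.5)
```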
Customizing First Person Image Through Desired Actions | This paper studies a problem of inverse visual path planning: creating a
visual scene from a first person action. Our conjecture is that the spatial
arrangement of a first person visual scene is deployed to afford an action, and
therefore, the action can be inversely used to synthesize a new scene such that
the action is feasible. As a proof-of-concept, we focus on linking visual
experiences induced by walking.
A key innovation of this paper is a concept of ActionTunnel---a 3D virtual
tunnel along the future trajectory encoding what the wearer will visually
experience as moving into the scene. This connects two distinctive first person
images through similar walking paths. Our method takes a first person image
with a user defined future trajectory and outputs a new image that can afford
the future motion. The image is created by combining present and future
ActionTunnels in 3D where the missing pixels in adjoining area are computed by
a generative adversarial network. Our work can provide a way to travel across
different first-person experiences in diverse real-world scenes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Michell trusses in two dimensions as a Gamma-limit of optimal design problems in linear elasticity | We reconsider the minimization of the compliance of a two dimensional elastic
body with traction boundary conditions for a given weight. It is well known how
to rewrite this optimal design problem as a nonlinear variational problem. We
take the limit of vanishing weight by sending a suitable Lagrange multiplier to
infinity in the variational formulation. We show that the limit, in the sense
of $\Gamma$-convergence, is a certain Michell truss problem. This proves a
conjecture by Kohn and Allaire.
| 0 | 0 | 1 | 0 | 0 | 0 |
Moment analysis of highway-traffic clearance distribution | To help with the planning of inter-vehicular communication networks, an
accurate understanding of traffic behavior and traffic phase transition is
required. We calculate inter-vehicle spacings from empirical data collected in
a multi-lane highway in California, USA. We calculate the correlation
coefficients for spacings between vehicles in individual lanes to show that the
flows are independent. We determine the first four moments for individual lanes
at regular time intervals, namely the mean, variance, skewness and kurtosis. We
follow the evolution of these moments as the traffic condition changes from the
low-density free flow to high-density congestion. We find that the higher
moments of inter-vehicle spacings have a well-defined dependence on the mean
value. The variance of the spacing distribution increases monotonically with the
mean vehicle spacing. In contrast, our analysis suggests that the skewness and
kurtosis provide one of the most sensitive probes towards the search for the
critical points. We find two significant results. First, the kurtosis
calculated in different time intervals for different lanes varies smoothly with
the skewness. They share the same behavior with the skewness and kurtosis
calculated for probability density functions that depend on a single parameter.
Second, the skewness and kurtosis as functions of the mean intervehicle spacing
show sharp peaks at critical densities expected for transitions between
different traffic phases. The data show a considerable scatter near the peak
positions, which suggests that the critical behavior may depend on other
parameters in addition to the traffic density.
| 0 | 1 | 0 | 0 | 0 | 0 |
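The four moments tracked in this analysis can be computed directly from a snapshot of vehicle positions in a lane; a minimal sketch using scipy (the simulated positions are illustrative, not the California data):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def spacing_moments(positions):
    """First four moments of inter-vehicle spacings in one lane.

    `positions` are vehicle coordinates along the lane (same units).
    """
    s = np.diff(np.sort(positions))      # inter-vehicle spacings
    return {"mean": s.mean(),
            "variance": s.var(ddof=1),
            "skewness": skew(s),
            "kurtosis": kurtosis(s)}     # excess kurtosis

# Hypothetical snapshot of one lane (metres)
pos = np.cumsum(np.random.exponential(scale=30.0, size=200))
print(spacing_moments(pos))
```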
Single-image Tomography: 3D Volumes from 2D Cranial X-Rays | As many different 3D volumes could produce the same 2D x-ray image, inverting
this process is challenging. We show that recent deep learning-based
convolutional neural networks can solve this task. As the main challenge in
learning is the sheer amount of data created when extending the 2D image into a
3D volume, we suggest firstly to learn a coarse, fixed-resolution volume which
is then fused in a second step with the input x-ray into a high-resolution
volume. To train and validate our approach we introduce a new dataset that
comprises close to half a million computer-simulated 2D x-ray images of 3D
volumes scanned from 175 mammalian species. Applications of our approach
include stereoscopic rendering of legacy x-ray images, re-rendering of x-rays
including changes of illumination, view pose or geometry. Our evaluation
includes comparison to previous tomography work, previous learning methods
using our data, a user study and application to a set of real x-rays.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stochastic Methods for Composite and Weakly Convex Optimization Problems | We consider minimization of stochastic functionals that are compositions of a
(potentially) non-smooth convex function $h$ and smooth function $c$ and, more
generally, stochastic weakly-convex functionals. We develop a family of
stochastic methods---including a stochastic prox-linear algorithm and a
stochastic (generalized) sub-gradient procedure---and prove that, under mild
technical conditions, each converges to first-order stationary points of the
stochastic objective. We provide experiments further investigating our methods
on non-smooth phase retrieval problems; the experiments indicate the practical
effectiveness of the procedures.
| 0 | 0 | 1 | 1 | 0 | 0 |
Thermal Expansion of the Heavy-fermion Superconductor PuCoGa$_{5}$ | We have performed high-resolution powder x-ray diffraction measurements on a
sample of $^{242}$PuCoGa$_{5}$, the heavy-fermion superconductor with the
highest critical temperature $T_{c}$ = 18.7 K. The results show that the
tetragonal symmetry of its crystallographic lattice is preserved down to 2 K.
Marginal evidence is obtained for an anomalous behaviour below $T_{c}$ of the
$a$ and $c$ lattice parameters. The observed thermal expansion is isotropic
down to 150 K, and becomes anisotropic for lower temperatures. This gives a
$c/a$ ratio that decreases with increasing temperature to become almost
constant above $\sim$150 K. The volume thermal expansion coefficient
$\alpha_{V}$ has a jump at $T_{c}$, a factor $\sim$20 larger than the change
predicted by the Ehrenfest relation for a second order phase transition. The
volume expansion deviates from the curve expected for the conventional
anharmonic behaviour described by a simple Grüneisen-Einstein model. The
observed differences are about ten times larger than the statistical error bars
but are too small to be taken as an indication for the proximity of the system
to a valence instability that is avoided by the superconducting state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Versatile Auxiliary Classifier with Generative Adversarial Network (VAC+GAN), Multi Class Scenarios | Conditional generators learn the data distribution for each class in a
multi-class scenario and generate samples for a specific class given the right
input from the latent space. In this work, a method known as "Versatile
Auxiliary Classifier with Generative Adversarial Network" for multi-class
scenarios is presented. In this technique, the Generative Adversarial Network
(GAN) generator is turned into a conditional generator by placing a
multi-class classifier in parallel with the discriminator network and
backpropagating the classification error through the generator. This technique is
versatile enough to be applied to any GAN implementation. The results on two
databases and comparisons with other methods are provided as well.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Lagrangian Model to Predict Microscallop Motion in non Newtonian Fluids | The need to develop models to predict the motion of microrobots, or robots of
a much smaller scale, moving in fluids in a low Reynolds number regime, and in
particular, in non-Newtonian fluids, cannot be overstated. The article
develops a Lagrangian based model for one such mechanism - a two-link mechanism
termed a microscallop, moving in a low Reynolds number environment in a
non-Newtonian fluid. The modelling proceeds through the conventional Lagrangian
construction for a two-link mechanism and then goes on to model the external
fluid forces using empirically based models for viscosity to complete the
dynamic model. The derived model is then simulated for different initial
conditions and key parameters of the non-Newtonian fluid, and the results are
corroborated with a few existing experimental results on a similar mechanism
under identical conditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards a Bootstrap approach to higher orders of epsilon expansion | We employ a hybrid approach in determining the anomalous dimension and OPE
coefficient of higher spin operators in the Wilson-Fisher theory. First we do a
large spin analysis for CFT data where we use results obtained from the usual
and the Mellin Bootstrap and also from Feynman diagram literature. This gives
new predictions at $O(\epsilon^4)$ and $O(\epsilon^5)$ for anomalous dimensions
and OPE coefficients, and also provides a cross-check for the results from
Mellin Bootstrap. These higher orders get contributions from all higher spin
operators in the crossed channel. We also use the Bootstrap in Mellin space
method for $\phi^3$ in $d=6-\epsilon$ CFT where we calculate general higher
spin OPE data. We demonstrate a higher loop order calculation in this approach
by summing over contributions from higher spin operators of the crossed channel
in the same spirit as before.
| 0 | 1 | 0 | 0 | 0 | 0 |
Small-scale Effects of Thermal Inflation on Halo Abundance at High-$z$, Galaxy Substructure Abundance and 21-cm Power Spectrum | We study the impact of thermal inflation on the formation of cosmological
structures and present astrophysical observables which can be used to constrain
and possibly probe the thermal inflation scenario. These are dark matter halo
abundance at high redshifts, satellite galaxy abundance in the Milky Way, and
fluctuation in the 21-cm radiation background before the epoch of reionization.
The thermal inflation scenario leaves a characteristic signature on the matter
power spectrum by boosting the amplitude at a specific wavenumber determined by
the number of e-foldings during thermal inflation ($N_{\rm bc}$), and strongly
suppressing the amplitude for modes at smaller scales. For a reasonable range
of parameter space, one of the consequences is the suppression of minihalo
formation at high redshifts and that of satellite galaxies in the Milky Way.
While this effect is substantial, it is degenerate with other cosmological or
astrophysical effects. The power spectrum of the 21-cm background probes this
impact more directly, and its observation may be the best way to constrain the
thermal inflation scenario due to the characteristic signature in the power
spectrum. The Square Kilometre Array (SKA) in phase 1 (SKA1) has sensitivity
large enough to achieve this goal for models with $N_{\rm bc}\gtrsim 26$ if a
10000-hr observation is performed. The final phase SKA, with anticipated
sensitivity about an order of magnitude higher, seems more promising and will
cover a wider parameter space.
| 0 | 1 | 0 | 0 | 0 | 0 |
Finiteness theorems for holomorphic mappings from products of hyperbolic Riemann surfaces | We prove that the space of dominant/non-constant holomorphic mappings from a
product of hyperbolic Riemann surfaces of finite type into certain hyperbolic
manifolds with universal cover a bounded domain is a finite set.
| 0 | 0 | 1 | 0 | 0 | 0 |
Time and media-use of Italian Generation Y: dimensions of leisure preferences | Time spent in leisure is not a minor research question as it is acknowledged
as a key aspect of one's quality of life. The primary aim of this article is to
qualify time and Internet use of Italian Generation Y beyond media hype and
assumptions. To this aim, we apply a multidimensional extension of Item
Response Theory models to the Italian "Multipurpose survey on households:
aspects of daily life" to ascertain the relevant dimensions of Generation Y
time-use. We show that the use of technology is neither the first nor the
foremost time-use activity of Italian Generation Y, who still prefers to use
its time to socialise and have fun with friends in a non-media-mediated manner.
| 1 | 0 | 0 | 1 | 0 | 0 |
Recurrent Multimodal Interaction for Referring Image Segmentation | In this paper we are interested in the problem of image segmentation given
natural language descriptions, i.e. referring expressions. Existing works
tackle this problem by first modeling images and sentences independently and
then segmenting images by combining these two types of representations. We argue
that learning word-to-image interaction is more native in the sense of jointly
modeling two modalities for the image segmentation task, and we propose
convolutional multimodal LSTM to encode the sequential interactions between
individual words, visual information, and spatial information. We show that our
proposed model outperforms the baseline model on benchmark datasets. In
addition, we analyze the intermediate output of the proposed multimodal LSTM
approach and empirically explain how this approach enforces a more effective
word-to-image interaction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Motion of a thin elliptic plate under symmetric and asymmetric orthotropic friction forces | Anisotropy of friction force is proved to be an important factor in various
contact problems. We study dynamical behavior of thin plates with respect to
symmetric and asymmetric orthotropic friction. Terminal motion of plates with
circular and elliptic contact areas is mainly analyzed. Evaluations of the friction
forces for both the symmetric and asymmetric orthotropic cases are shown. Regular
pressure distribution is considered. Differential equations are formulated and
solved numerically for a number of initial conditions. Examples show
significant influence of friction force asymmetry on the motion.
| 0 | 1 | 0 | 0 | 0 | 0 |
Combinatorial Auctions with Online XOS Bidders | In combinatorial auctions, a designer must decide how to allocate a set of
indivisible items amongst a set of bidders. Each bidder has a valuation
function which gives the utility they obtain from any subset of the items. Our
focus is specifically on welfare maximization, where the objective is to
maximize the sum of valuations that the bidders place on the items that they
were allocated (the valuation functions are assumed to be reported truthfully).
We analyze an online problem in which the algorithm is not given the set of
bidders in advance. Instead, the bidders are revealed sequentially in a
uniformly random order, similarly to secretary problems. The algorithm must
make an irrevocable decision about which items to allocate to the current
bidder before the next one is revealed. When the valuation functions lie in the
class $XOS$ (which includes submodular functions), we provide a black box
reduction from offline to online optimization. Specifically, given an
$\alpha$-approximation algorithm for offline welfare maximization, we show how
to create a $(0.199 \alpha)$-approximation algorithm for the online problem.
Our algorithm draws on connections to secretary problems; in fact, we show that
the online welfare maximization problem itself can be viewed as a particular
kind of secretary problem with nonuniform arrival order.
| 1 | 0 | 0 | 0 | 0 | 0 |
Application of Convolutional Neural Network to Predict Airfoil Lift Coefficient | The adaptability of the convolutional neural network (CNN) technique for
aerodynamic meta-modeling tasks is probed in this work. The primary objective
is to develop suitable CNN architecture for variable flow conditions and object
geometry, in addition to identifying a sufficient data preparation process.
Multiple CNN structures were trained to learn the lift coefficients of the
airfoils with a variety of shapes in multiple flow Mach numbers, Reynolds
numbers, and diverse angles of attack. This is conducted to illustrate the
concept of the technique. A multi-layered perceptron (MLP) is also used for the
training sets. The MLP results are compared with those of the CNN. The
newly proposed meta-modeling concept has been found to be comparable with the
MLP in learning capability; and more importantly, our CNN model exhibits a
competitive prediction accuracy with minimal constraints in a geometric
representation.
| 1 | 0 | 0 | 1 | 0 | 0 |
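A minimal sketch of the kind of CNN meta-model described: an airfoil shape rasterized to an image, concatenated with flow scalars (Mach, Reynolds number, angle of attack), regressed onto the lift coefficient. The architecture and sizes are illustrative assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class AirfoilCL(nn.Module):
    """Toy CNN regressor: airfoil silhouette + flow scalars -> CL."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 * 16 + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, img, flow):
        # concatenate image features with (Mach, Re, AoA) scalars
        return self.head(torch.cat([self.conv(img), flow], dim=1))

model = AirfoilCL()
img = torch.zeros(8, 1, 64, 64)   # batch of airfoil silhouettes
flow = torch.zeros(8, 3)          # (Mach, Re, AoA), normalized
cl_pred = model(img, flow)        # shape (8, 1)
```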
Bayesian inference in Y-linked two-sex branching processes with mutations: ABC approach | A Y-linked two-sex branching process with mutations and blind choice of males
is a suitable model for analyzing the evolution of the number of carriers of an
allele and its mutations of a Y-linked gene. Considering a two-sex monogamous
population, in this model each female chooses her partner from among the male
population without caring about his type (i.e., the allele he carries). In this
work, we deal with the problem of estimating the main parameters of such a model
by developing Bayesian inference in a parametric framework. Firstly, we
consider, as a sampling scheme, the observation of the total number of females and
males up to some generation as well as the number of males of each genotype at
the last generation. Later, we introduce the information on the mutated males only
in the last generation, obtaining in this way a second sampling scheme. For both
samples, we apply the Approximate Bayesian Computation (ABC) methodology to
approximate the posterior distributions of the main parameters of this model.
The accuracy of the procedure based on these samples is illustrated and
discussed by way of simulated examples.
| 0 | 0 | 0 | 1 | 1 | 0 |
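At its core, the ABC methodology used here replaces likelihood evaluation with simulation: draw parameters from the prior, simulate data, and keep draws whose summary statistics land close to the observed ones. A generic rejection-ABC sketch; the toy normal-mean example is ours, not the branching-process model:

```python
import numpy as np

def abc_rejection(simulate, summarize, observed, prior_sampler,
                  n_draws=10000, eps=0.1):
    """Rejection ABC (a minimal sketch): keep prior draws whose simulated
    summaries fall within distance eps of the observed summaries."""
    s_obs = np.asarray(summarize(observed))
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        s_sim = np.asarray(summarize(simulate(theta)))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)   # samples from the ABC posterior

# Toy use: infer a normal mean from its sample average
obs = np.random.normal(2.0, 1.0, size=100)
post = abc_rejection(
    simulate=lambda th: np.random.normal(th, 1.0, size=100),
    summarize=lambda x: [np.mean(x)],
    observed=obs,
    prior_sampler=lambda: np.random.uniform(-5, 5),
)
```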
Effective Theories for 2+1 Dimensional Non-Abelian Topological Spin Liquids | In this work we propose an effective low-energy theory for a large class of
2+1 dimensional non-Abelian topological spin liquids whose edge states are
conformal degrees of freedom with central charges corresponding to the coset
structure $su(2)_k\oplus su(2)_{k'}/su(2)_{k+k'}$. For particular values of
$k'$ it furnishes the series for unitary minimal and superconformal models.
These gapped phases were recently suggested to be obtained from an array of
one-dimensional coupled quantum wires. In doing so we provide an explicit
relationship between two distinct approaches: quantum wires and Chern-Simons
bulk theory. We firstly make a direct connection between the interacting
quantum wires and the corresponding conformal field theory at the edges, which
turns out to be given in terms of chiral gauged WZW models. Relying on the
bulk-edge correspondence we are able to construct the underlying non-Abelian
Chern-Simons effective field theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Coset space construction for the conformal group. II. Spontaneously broken phase and inverse Higgs phenomenon | A self-contained method of obtaining effective theories resulting from the
spontaneous breakdown of conformal invariance is developed. It allows one to
demonstrate that the Nambu-Goldstone fields for special conformal
transformations always represent non-dynamical degrees of freedom. The standard
approach to the same question, which includes the imposition of the inverse
Higgs constraints, is shown to follow from the developed technique. This
provides an alternative view on the nature of the inverse Higgs constraints for
the conformal group.
| 0 | 1 | 0 | 0 | 0 | 0 |
The morphodynamics of 3D migrating cancer cells | Cell shape is an important biomarker. Extensive previous studies have
established the relation between cell shape and cell function. However, the
morphodynamics, namely the temporal fluctuation of cell shape, is much less
understood. We study the morphodynamics of MDA-MB-231 cells in type I collagen
extracellular matrix (ECM). We find ECM mechanics, as tuned by collagen
concentration, controls the morphodynamics but not the static cell morphology.
By employing machine learning techniques, we classify cell shape into five
different morphological phenotypes corresponding to different migration modes.
As a result, cell morphodynamics is mapped into temporal evolution of
morphological phenotypes. We systematically characterize the phenotype dynamics
including occurrence probability, dwell time, transition flux, and also obtain
the invasion characteristics of each phenotype. Using a tumor organoid model,
we show that the distinct invasion potentials of each phenotype modulate the
phenotype homeostasis. Overall invasion of a tumor organoid is facilitated by
individual cells searching for and committing to phenotypes of higher invasive
potential. In conclusion, we show that 3D migrating cancer cells exhibit rich
morphodynamics that is regulated by ECM mechanics and is closely related to
cell motility. Our results pave the way to systematic characterization and
functional understanding of cell morphodynamics.
| 0 | 0 | 0 | 0 | 1 | 0 |
Evolution Strategies as a Scalable Alternative to Reinforcement Learning | We explore the use of Evolution Strategies (ES), a class of black box
optimization algorithms, as an alternative to popular MDP-based RL techniques
such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show
that ES is a viable solution strategy that scales extremely well with the
number of CPUs available: By using a novel communication strategy based on
common random numbers, our ES implementation only needs to communicate scalars,
making it possible to scale to over a thousand parallel workers. This allows us
to solve 3D humanoid walking in 10 minutes and obtain competitive results on
most Atari games after one hour of training. In addition, we highlight several
advantages of ES as a black box optimization technique: it is invariant to
action frequency and delayed rewards, tolerant of extremely long horizons, and
does not need temporal discounting or value function approximation.
| 1 | 0 | 0 | 1 | 0 | 0 |
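The ES update behind these results is a score-function gradient estimate from Gaussian perturbations, $\nabla \approx \frac{1}{n\sigma}\sum_i F(\theta+\sigma\epsilon_i)\,\epsilon_i$; sharing the random seed lets workers regenerate each other's perturbations and exchange only the scalar fitness values. A minimal single-process sketch with illustrative hyperparameters:

```python
import numpy as np

def es_step(theta, fitness, sigma=0.1, alpha=0.01, npop=50, seed=0):
    """One Evolution Strategies update (a minimal sketch).

    Workers sharing `seed` can regenerate the same perturbations, so
    only the npop scalar fitness values need to be communicated.
    """
    rng = np.random.default_rng(seed)        # common random numbers
    eps = rng.standard_normal((npop, theta.size))
    f = np.array([fitness(theta + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)    # fitness shaping
    grad = (eps.T @ f) / (npop * sigma)      # score-function gradient
    return theta + alpha * grad              # ascend the fitness

# Toy objective: maximize -||x - 3||^2
theta = np.zeros(5)
for t in range(200):
    theta = es_step(theta, lambda x: -np.sum((x - 3.0) ** 2), seed=t)
```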
IoT Localization for Bistatic Passive UHF RFID Systems with 3D Radiation Pattern | Passive Radio-Frequency IDentification (RFID) systems carry critical
importance for Internet of Things (IoT) applications due to their energy
harvesting capabilities. RFID based position estimation, in particular, is
expected to facilitate a wide array of location based services for IoT
applications with low-power requirements. In this paper, considering monostatic
and bistatic configurations and 3D antenna radiation pattern, we investigate
the accuracy of received signal strength based wireless localization using
passive ultra high frequency (UHF) RFID systems. The Cramer-Rao Lower Bound
(CRLB) for the localization accuracy is derived, and is compared with the
accuracy of maximum likelihood estimators for various RFID antenna
configurations. Numerical results show that due to RFID tag/antenna
sensitivity, and the directional antenna pattern, the localization accuracy can
degrade at blind locations that remain outside of the RFID reader antennas'
main beam patterns. In such cases, optimizing the elevation angle of the antennas is
shown to improve localization coverage, while using a bistatic configuration
improves localization accuracy significantly.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multidimensional upwind hydrodynamics on unstructured meshes using Graphics Processing Units I. Two-dimensional uniform meshes | We present a new method for numerical hydrodynamics which uses a
multidimensional generalisation of the Roe solver and operates on an
unstructured triangular mesh. The main advantage over traditional methods based
on Riemann solvers, which commonly use one-dimensional flux estimates as
building blocks for a multidimensional integration, is its inherently
multidimensional nature, and as a consequence its ability to recognise
multidimensional stationary states that are not hydrostatic. A second novelty
is the focus on Graphics Processing Units (GPUs). By tailoring the algorithms
specifically to GPUs we are able to get speedups of 100-250 compared to a
desktop machine. We compare the multidimensional upwind scheme to a
traditional, dimensionally split implementation of the Roe solver on several
test problems, and we find that the new method significantly outperforms the
Roe solver in almost all cases. This comes with increased computational costs
per time step, which makes the new method approximately a factor of 2 slower
than a dimensionally split scheme acting on a structured grid.
| 0 | 1 | 0 | 0 | 0 | 0 |
Voice Conversion Based on Cross-Domain Features Using Variational Auto Encoders | An effective approach to non-parallel voice conversion (VC) is to utilize
deep neural networks (DNNs), specifically variational auto encoders (VAEs), to
model the latent structure of speech in an unsupervised manner. A previous
study has confirmed the effectiveness of VAE using the STRAIGHT spectra for
VC. However, VAE using other types of spectral features such as mel-cepstral
coefficients (MCCs), which are related to human perception and have been
widely used in VC, have not been properly investigated. Instead of using one
specific type of spectral feature, it is expected that VAE may benefit from
using multiple types of spectral features simultaneously, thereby improving
the capability of VAE for VC. To this end, we propose a novel VAE framework
(called cross-domain VAE, CDVAE) for VC. Specifically, the proposed framework
utilizes both STRAIGHT spectra and MCCs by explicitly regularizing multiple
objectives in order to constrain the behavior of the learned encoder and
decoder. Experimental results demonstrate that the proposed CDVAE framework
outperforms the conventional VAE framework in terms of subjective tests.
| 1 | 0 | 0 | 0 | 0 | 0 |
Density matrix expansion based semi-local exchange hole applied to range separated density functional theory | The exchange hole is a principal constituent of density functional theory,
which can be used to accurately design exchange energy functionals and
range-separated hybrid functionals coupled with an appropriate correlation.
Recently, the density matrix expansion (DME) based semi-local exchange hole
proposed by Tao and Mo gained attention due to its fulfillment of some exact
constraints. We propose a new long-range corrected (LC) scheme that combines
meta-generalized gradient approximation (meta-GGA) exchange functionals
designed from the DME exchange hole with the ab-initio Hartree-Fock (HF)
exchange integral by separating the Coulomb interaction operator using the
standard error function. Combined with the Lee-Yang-Parr (LYP) correlation
functional, assessment and benchmarking of our functional using well-known
test sets shows that it performs remarkably well for a broad range of molecular properties,
such as thermochemistry, noncovalent interaction and barrier height of chemical
reactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
CANDELS Sheds Light on the Environmental Quenching of Low-mass Galaxies | We investigate the environmental quenching of galaxies, especially those with
stellar masses (M*)$<10^{9.5} M_\odot$, beyond the local universe. Essentially
all local low-mass quenched galaxies (QGs) are believed to live close to
massive central galaxies, which is a demonstration of environmental quenching.
We use CANDELS data to test {\it whether or not} such a dwarf QG--massive
central galaxy connection exists beyond the local universe. To this purpose, we
only need a statistically representative, rather than a complete, sample of
low-mass galaxies, which enables our study to $z\gtrsim1.5$. For each low-mass
galaxy, we measure the projected distance ($d_{proj}$) to its nearest massive
neighbor (M*$>10^{10.5} M_\odot$) within a redshift range. At a given redshift
and M*, the environmental quenching effect is considered to be observed if the
$d_{proj}$ distribution of QGs ($d_{proj}^Q$) is significantly skewed toward
lower values than that of star-forming galaxies ($d_{proj}^{SF}$). For galaxies
with $10^{8} M_\odot < M* < 10^{10} M_\odot$, such a difference between
$d_{proj}^Q$ and $d_{proj}^{SF}$ is detected up to $z\sim1$. Also, about 10\%
of the quenched galaxies in our sample are located between two and four virial
radii ($R_{Vir}$) of the massive halos. The median projected distance from
low-mass QGs to their massive neighbors, $d_{proj}^Q / R_{Vir}$, decreases with
satellite M* at $M* \lesssim 10^{9.5} M_\odot$, but increases with satellite M*
at $M* \gtrsim 10^{9.5} M_\odot$. This trend suggests a smooth, if any,
transition of the quenching timescale around $M* \sim 10^{9.5} M_\odot$ at
$0.5<z<1.0$.
| 0 | 1 | 0 | 0 | 0 | 0 |
BP-homology of elementary abelian 2-groups: BP-module structure | We determine the BP-module structure, mod higher filtration, of the main part
of the BP-homology of elementary abelian 2-groups. The action is related to
symmetric polynomials and to Dickson invariants.
| 0 | 0 | 1 | 0 | 0 | 0 |
Probabilistic Active Learning of Functions in Structural Causal Models | We consider the problem of learning the functions computing children from
parents in a Structural Causal Model once the underlying causal graph has been
identified. This is in some sense the second step after causal discovery.
Taking a probabilistic approach to estimating these functions, we derive a
natural myopic active learning scheme that identifies the intervention which is
optimally informative about all of the unknown functions jointly, given
previously observed data. We test the derived algorithms on simple examples, to
demonstrate that they produce a structured exploration policy that
significantly improves on unstructured baselines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Neural Episodic Control | Deep reinforcement learning methods attain super-human performance in a wide
range of environments. Such methods are grossly inefficient, often taking
orders of magnitude more data than humans to achieve reasonable performance.
We propose Neural Episodic Control: a deep reinforcement learning agent that is
able to rapidly assimilate new experiences and act upon them. Our agent uses a
semi-tabular representation of the value function: a buffer of past experience
containing slowly changing state representations and rapidly updated estimates
of the value function. We show across a wide range of environments that our
agent learns significantly faster than other state-of-the-art, general purpose
deep reinforcement learning agents.
| 1 | 0 | 0 | 1 | 0 | 0 |
School bus routing by maximizing trip compatibility | School bus planning is usually divided into routing and scheduling due to the
complexity of solving them concurrently. However, the separation between these
two steps may lead to worse solutions with higher overall costs than that from
solving them together. When finding the minimal number of trips in the routing
problem, neglecting the importance of trip compatibility may increase the
number of buses actually needed in the scheduling problem. This paper proposes
a new formulation for the multi-school homogeneous fleet routing problem that
maximizes trip compatibility while minimizing total travel time. This
incorporates the trip compatibility for the scheduling problem in the routing
problem. Since the problem is inherently just a routing problem, finding a good
solution is not cumbersome. To compare the performance of the model with
traditional routing problems, we generate eight mid-size data sets. By
importing the generated trips of the routing problems into the bus scheduling
(blocking) problem, it is shown that the proposed model uses up to 13% fewer
buses than the common traditional routing models.
| 1 | 0 | 0 | 0 | 0 | 0 |
Solvability of abstract semilinear equations by a global diffeomorphism theorem | In this work we provide a new, simpler proof of the global diffeomorphism
theorem from [9], which we further apply to study the unique solvability of some
abstract semilinear equations. Applications to the second order Dirichlet
problem driven by the Laplace operator are given.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning a Generative Model for Validity in Complex Discrete Structures | Deep generative models have been successfully used to learn representations
for high-dimensional discrete spaces by representing discrete objects as
sequences and employing powerful sequence-based deep models. Unfortunately,
these sequence-based models often produce invalid sequences: sequences which do
not represent any underlying discrete structure; invalid sequences hinder the
utility of such models. As a step towards solving this problem, we propose to
learn a deep recurrent validator model, which can estimate whether a partial
sequence can function as the beginning of a full, valid sequence. This
validator provides insight as to how individual sequence elements influence the
validity of the overall sequence, and can be used to constrain sequence based
models to generate valid sequences -- and thus faithfully model discrete
objects. Our approach is inspired by reinforcement learning, where an oracle
which can evaluate validity of complete sequences provides a sparse reward
signal. We demonstrate its effectiveness as a generative model of Python 3
source code for mathematical expressions, and in improving the ability of a
variational autoencoder trained on SMILES strings to decode valid molecular
structures.
| 1 | 0 | 0 | 1 | 0 | 0 |
On discrete structures in finite Hilbert spaces | We present a brief review of discrete structures in a finite Hilbert space,
relevant for the theory of quantum information. Unitary operator bases,
mutually unbiased bases, Clifford group and stabilizer states, discrete Wigner
function, symmetric informationally complete measurements, projective and
unitary t-designs are discussed. Some recent results in the field are covered
and several important open questions are formulated. We advocate a geometric
approach to the subject and emphasize numerous links to various mathematical
problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Selected topics on Toric Varieties | This article is based on a series of lectures on toric varieties given at
RIMS, Kyoto. We start by introducing toric varieties, their basic properties
and later pass to more advanced topics relating mostly to combinatorics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Control Capacity | Feedback control actively dissipates uncertainty from a dynamical system by
means of actuation. We develop a notion of "control capacity" that gives a
fundamental limit (in bits) on the rate at which a controller can dissipate the
uncertainty from a system, i.e. stabilize to a known fixed point. We give a
computable single-letter characterization of control capacity for memoryless
stationary scalar multiplicative actuation channels. Control capacity allows us
to answer questions of stabilizability for scalar linear systems: a system with
actuation uncertainty is stabilizable if and only if the control capacity is
larger than the log of the unstable open-loop eigenvalue.
For second-moment senses of stability, we recover the classic uncertainty
threshold principle result. However, our definition of control capacity can
quantify the stabilizability limits for any moment of stability. Our
formulation parallels the notion of Shannon's communication capacity, and thus
yields both a strong converse and a way to compute the value of
side-information in control. The results in our paper are motivated by
bit-level models for control that build on the deterministic models that are
widely used to understand information flows in wireless network information
theory.
| 1 | 0 | 1 | 0 | 0 | 0 |
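The stabilizability criterion stated above can be written compactly; a sketch in our notation, where the base-2 logarithm reflects the "in bits" convention and the scalar multiplicative-channel model is our reading of the abstract:

```latex
% Scalar system with multiplicative actuation uncertainty b_t:
%   x_{t+1} = \lambda x_t + b_t u_t
\[
  \text{stabilizable} \iff C_{\text{control}} > \log_2 |\lambda|,
  \qquad |\lambda| > 1,
\]
% where C_control is the control capacity of the actuation channel.
```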
Benefits from Superposed Hawkes Processes | The superposition of temporal point processes has been studied for many
years, although the usefulness of such models for practical applications has
not been fully developed. We investigate superposed Hawkes processes as an
important class of such models, with properties studied in the framework of
least squares estimation. The superposition of Hawkes processes is demonstrated
to be beneficial for tightening the upper bound of excess risk under certain
conditions, and we show the feasibility of the benefit in typical situations.
The usefulness of superposed Hawkes processes is verified on synthetic data,
and its potential to solve the cold-start problem of recommendation systems is
demonstrated on real-world data.
| 0 | 0 | 0 | 1 | 0 | 0 |
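For intuition, superposing point processes simply pools their event times, and the exponential-kernel Hawkes conditional intensity is a one-liner. A minimal sketch of the objects involved; parameters and event times are illustrative, and this is not the paper's least-squares estimator:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of an exponential-kernel Hawkes process:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Superposition pools event times; the merged sequence carries more
# information about shared triggering parameters.
ev1 = np.array([0.5, 1.2, 1.3])
ev2 = np.array([0.2, 1.25, 2.0])
merged = np.sort(np.concatenate([ev1, ev2]))
print(hawkes_intensity(2.5, merged, mu=0.4, alpha=0.8, beta=1.5))
```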
Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec | Since the invention of word2vec, the skip-gram model has significantly
advanced the research of network embedding, such as the recent emergence of the
DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of
the aforementioned models with negative sampling can be unified into the matrix
factorization framework with closed forms. Our analysis and proofs reveal that:
(1) DeepWalk empirically produces a low-rank transformation of a network's
normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk
when the size of vertices' context is set to one; (3) As an extension of LINE,
PTE can be viewed as the joint factorization of multiple networks' Laplacians;
(4) node2vec is factorizing a matrix related to the stationary distribution and
transition probability tensor of a 2nd-order random walk. We further provide
the theoretical connections between skip-gram based network embedding
algorithms and the theory of graph Laplacian. Finally, we present the NetMF
method as well as its approximation algorithm for computing network embedding.
Our method offers significant improvements over DeepWalk and LINE for
conventional network mining tasks. This work lays the theoretical foundation
for skip-gram based network embedding methods, leading to a better
understanding of latent network representation learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
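The closed form behind result (1) is concrete: DeepWalk implicitly factorizes $\log\max\!\big(\tfrac{\mathrm{vol}(G)}{bT}\sum_{r=1}^{T}P^{r}D^{-1},\,1\big)$ with $P=D^{-1}A$. A minimal dense sketch of a NetMF-style embedding; the embedding dimension and toy graph are illustrative:

```python
import numpy as np

def netmf_embedding(A, T=10, b=1.0, dim=16):
    """NetMF-style embedding (a minimal dense sketch).

    Factorizes log(max(vol(G)/(bT) * sum_{r=1..T} P^r D^{-1}, 1)) with
    P = D^{-1} A, the matrix DeepWalk implicitly factorizes.
    """
    deg = A.sum(axis=1)
    vol = deg.sum()
    Dinv = np.diag(1.0 / deg)
    P = Dinv @ A                        # random-walk transition matrix
    S, Pr = np.zeros_like(A, dtype=float), np.eye(A.shape[0])
    for _ in range(T):
        Pr = Pr @ P
        S += Pr                         # accumulates P^1 + ... + P^T
    M = (vol / (b * T)) * S @ Dinv
    logM = np.log(np.maximum(M, 1.0))   # element-wise truncated log
    U, s, _ = np.linalg.svd(logM)
    return U[:, :dim] * np.sqrt(s[:dim])

# Toy ring graph on 12 nodes
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
emb = netmf_embedding(A, dim=4)
```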
Differentially Private Query Learning: from Data Publishing to Model Publishing | With the development of Big Data and cloud data sharing, privacy preserving
data publishing has become one of the most important topics of the past decade. As
one of the most influential privacy definitions, differential privacy provides
a rigorous and provable privacy guarantee for data publishing. Differentially
private interactive publishing achieves good performance in many applications;
however, the curator has to release a large number of queries in a batch or a
synthetic dataset in the Big Data era. To provide accurate non-interactive
publishing results in the constraint of differential privacy, two challenges
need to be tackled: one is how to decrease the correlation between large sets
of queries, while the other is how to predict on fresh queries. Neither is easy
to solve by the traditional differential privacy mechanism. This paper
transfers the data publishing problem to a machine learning problem, in which
queries are considered as training samples and a prediction model will be
released rather than query results or synthetic datasets. When the model is
published, it can be used to answer current submitted queries and predict
results for fresh queries from the public. Compared with the traditional
method, the proposed prediction model enhances the accuracy of query results
for non-interactive publishing. Experimental results show that the proposed
solution outperforms traditional differential privacy in terms of Mean Absolute
Value on a large group of queries. This also suggests the learning model can
successfully retain the utility of published queries while preserving privacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Flexible Level-1 Consensus Ensuring Stable Social Choice: Analysis and Algorithms | Level-1 Consensus is a property of a preference-profile. Intuitively, it
means that there exists a preference relation which induces an ordering of all
other preferences such that frequent preferences are those that are more
similar to it. This is a desirable property, since it enhances the stability of
social choice by guaranteeing that there exists a Condorcet winner and it is
elected by all scoring rules.
In this paper, we present an algorithm for checking whether a given
preference profile exhibits level-1 consensus. We apply this algorithm to a
large number of preference profiles, both real and randomly-generated, and find
that level-1 consensus is very improbable. We support these empirical findings
theoretically, by showing that, under the impartial culture assumption, the
probability of level-1 consensus approaches zero when the number of individuals
approaches infinity.
Motivated by these observations, we show that the level-1 consensus property
can be weakened while retaining its stability implications. We call this weaker
property Flexible Consensus. We show, both empirically and theoretically, that
it is considerably more probable than the original level-1 consensus. In
particular, under the impartial culture assumption, the probability for
Flexible Consensus converges to a positive number when the number of
individuals approaches infinity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fracture imaging within a granitic rock aquifer using multiple-offset single-hole and cross-hole GPR reflection data | The sparsely spaced highly permeable fractures of the granitic rock aquifer
at Stang-er-Brune (Brittany, France) form a well-connected fracture network of
high permeability but unknown geometry. Previous work based on optical and
acoustic logging together with single-hole and cross-hole flowmeter data
acquired in 3 neighboring boreholes (70-100 m deep) have identified the most
important permeable fractures crossing the boreholes and their hydraulic
connections. To constrain possible flow paths by estimating the geometries of
known and previously unknown fractures, we have acquired, processed and
interpreted multifold, single- and cross-hole GPR data using 100 and 250 MHz
antennas. The GPR data processing scheme consisting of time-zero corrections,
scaling, bandpass filtering and F-X deconvolution, eigenvector filtering,
muting, pre-stack Kirchhoff depth migration and stacking was used to
differentiate fluid-filled fracture reflections from source-generated noise.
The final stacked and pre-stack depth-migrated GPR sections provide
high-resolution images of individual fractures (dipping 30-90°) in the
surroundings (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas)
of each borehole in a 2D plane projection that are of superior quality to those
obtained from single-offset sections. Most fractures previously identified from
hydraulic testing can be correlated to reflections in the single-hole data.
Several previously unknown major near vertical fractures have also been
identified away from the boreholes.
| 0 | 1 | 0 | 0 | 0 | 0 |
The asymptotic behavior of automorphism groups of function fields over finite fields | The purpose of this paper is to investigate the asymptotic behavior of
automorphism groups of function fields when genus tends to infinity.
Motivated by applications in coding and cryptography, we consider the maximum
size of abelian subgroups of the automorphism group
$\mbox{Aut}(F/\mathbb{F}_q)$ in terms of genus ${g_F}$ for a function field $F$
over a finite field $\mathbb{F}_q$. Although the whole group
$\mbox{Aut}(F/\mathbb{F}_q)$ could have size $\Omega({g_F}^4)$, the maximum
size $m_F$ of abelian subgroups of the automorphism group
$\mbox{Aut}(F/\mathbb{F}_q)$ is upper bounded by $4g_F+4$ for $g_F\ge 2$. In
the present paper, we study the asymptotic behavior of $m_F$ by defining
$M_q=\limsup_{{g_F}\rightarrow\infty}\frac{m_F \cdot \log_q m_F}{g_F}$, where
$F$ runs through all function fields over $\mathbb{F}_q$. We show that $M_q$
lies between $2$ and $3$ (or $4$) for odd characteristic (or for even
characteristic, respectively). This means that $m_F$ grows much more slowly
than genus does asymptotically.
The second part of this paper studies the maximum size $b_F$ of subgroups
of $\mbox{Aut}(F/\mathbb{F}_q)$ whose order is coprime to $q$. The Hurwitz
bound gives an upper bound $b_F\le 84(g_F-1)$ for every function field
$F/\mathbb{F}_q$ of genus $g_F\ge 2$. We investigate the asymptotic behavior of
$b_F$ by defining ${B_q}=\limsup_{{g_F}\rightarrow\infty}\frac{b_F}{g_F}$,
where $F$ runs through all function fields over $\mathbb{F}_q$. Although the
Hurwitz bound shows ${B_q}\le 84$, there are no lower bounds on $B_q$ in the
literature. It is not even known whether ${B_q}=0$. For the first time, we show
that ${B_q}\ge 2/3$ by explicitly constructing some towers of function fields
in this paper.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bootstrap Robust Prescriptive Analytics | We address the problem of prescribing an optimal decision in a framework
where its cost depends on uncertain problem parameters $Y$ that need to be
learned from data. Earlier work by Bertsimas and Kallus (2014) transforms
classical machine learning methods that merely predict $Y$ from supervised
training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive methods
taking optimal decisions specific to a particular covariate context $X=\bar x$.
Their prescriptive methods factor in additional observed contextual information
on a potentially large number of covariates $X=\bar x$ to take context specific
actions $z(\bar x)$ which are superior to any static decision $z$. Any naive
use of limited training data may, however, lead to gullible decisions
over-calibrated to one particular data set. In this paper, we borrow ideas from
distributionally robust optimization and the statistical bootstrap of Efron
(1982) to propose two novel prescriptive methods based on (nw) Nadaraya-Watson
and (nn) nearest-neighbors learning which safeguard against overfitting and
lead to improved out-of-sample performance. Both resulting robust prescriptive
methods reduce to tractable convex optimization problems and enjoy a limited
disappointment on bootstrap data. We illustrate the data-driven decision-making
framework and our novel robustness notion on a small news vendor problem as
well as a small portfolio allocation problem.
| 0 | 0 | 0 | 1 | 0 | 0 |
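To make the prescriptive idea concrete, here is a minimal Python sketch of the non-robust Nadaraya-Watson baseline for a newsvendor decision: past demands are weighted by a Gaussian kernel on the covariates, and the prescribed order quantity is the weighted critical fractile. The bandwidth, costs, and data are illustrative assumptions; the paper's bootstrap-robust refinement is not implemented here.

```python
import numpy as np

def nw_newsvendor(x_hist, y_hist, x_new, h, cu=4.0, co=1.0):
    """Nadaraya-Watson prescriptive newsvendor decision (sketch).

    Weights past demands y_i by a Gaussian kernel on covariates,
    then returns the weighted critical quantile cu/(cu+co),
    which minimizes the kernel-weighted newsvendor cost.
    """
    w = np.exp(-np.sum((x_hist - x_new) ** 2, axis=1) / (2 * h ** 2))
    w /= w.sum()
    order = np.argsort(y_hist)
    cdf = np.cumsum(w[order])
    crit = cu / (cu + co)                     # critical fractile
    return y_hist[order][np.searchsorted(cdf, crit)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # covariate history
Y = 10 + 2 * X[:, 0] + rng.normal(size=200)   # demand history
z = nw_newsvendor(X, Y, x_new=np.zeros(3), h=1.0)
```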
Tverberg type theorems for matroids | In this paper we show a variant of colorful Tverberg's theorem which is valid
in any matroid: Let $S$ be a sequence of non-loops in a matroid $M$ of finite
rank $m$ with closure operator cl. Suppose that $S$ is colored in such a way
that the first color does not appear more than $r$-times and each other color
appears at most $(r-1)$-times. Then $S$ can be partitioned into $r$ rainbow
subsequences $S_1,\ldots, S_r$ such that $cl\,\emptyset\subsetneq
cl\,S_1\subseteq cl\, S_2\subseteq \ldots \subseteq cl\,S_r$. In particular,
$\emptyset\neq \bigcap_{i=1}^r cl\,S_i$. A subsequence is called rainbow if it
contains each color at most once.
The conclusion of our theorem is weaker than the conclusion of the original
Tverberg's theorem in $\mathbb R^d$, which states that $\bigcap conv\,S_i\neq
\emptyset$, whereas we only claim that $\bigcap aff\,S_i\neq \emptyset$. On the
other hand, our theorem strengthens Tverberg's theorem in several other
ways: 1) it is applicable to any matroid (whereas Tverberg's theorem can only
be used in $\mathbb R^d$), 2) instead of $\bigcap cl\,S_i\neq \emptyset$ we
have the stronger condition $cl\,\emptyset\subsetneq cl\,S_1\subseteq
cl\,S_2\subseteq \ldots \subseteq cl\,S_r$, and 3) we add color constraints
that are even stronger than the color constraints in the colorful version of
Tverberg's theorem.
Recently, the author together with Goaoc, Mabillard, Patáková, Tancer and
Wagner used the first property and applied the non-colorful version of this
theorem to homology groups with $GF(p)$ coefficients to obtain several
non-embeddability results, for details we refer to arXiv:1610.09063.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic Erdős-Rényi graphs | We propose two classes of dynamic versions of the classical Erdős-Rényi
graph: one in which the transition rates are governed by an external regime
process, and one in which the transition rates are periodically resampled. For
both models we consider the evolution of the number of edges present, with
explicit results for the corresponding moments, functional central limit
theorems and large deviations asymptotics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mining Density Contrast Subgraphs | Dense subgraph discovery is a key primitive in many graph mining
applications, such as detecting communities in social networks and mining gene
correlation from biological data. Most studies on dense subgraph mining only
deal with one graph. However, in many applications, we have more than one graph
describing relations among the same group of entities. In this paper, given two
graphs sharing the same set of vertices, we investigate the problem of
detecting subgraphs that contrast the most with respect to density. We call
such subgraphs Density Contrast Subgraphs, or DCS in short. Two widely used
graph density measures, average degree and graph affinity, are considered. For
both density measures, mining DCS is equivalent to mining the densest subgraph
from a "difference" graph, which may have both positive and negative edge
weights. Due to the existence of negative edge weights, existing dense subgraph
detection algorithms cannot identify the subgraph we need. We prove the
computational hardness of mining DCS under the two graph density measures and
develop efficient algorithms to find DCS. We also conduct extensive experiments
on several real-world datasets to evaluate our algorithms. The experimental
results show that our algorithms are both effective and efficient.
| 1 | 0 | 0 | 0 | 0 | 0 |
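A hedged sketch of the basic construction: form the signed difference graph and evaluate the average-degree density of a candidate vertex set. Finding the best set is the hard part the paper addresses; the matrices below are toy stand-ins.

```python
import numpy as np

def difference_density(A, B, S):
    """Average-degree density of vertex set S in the difference graph A - B.

    A, B : symmetric adjacency (weight) matrices over the same vertices
    S    : list/array of vertex indices
    The densest subgraph of A - B (which has signed edge weights) is the
    density contrast subgraph under the average-degree measure.
    """
    D = A - B                          # signed difference graph
    S = np.asarray(S)
    sub = D[np.ix_(S, S)]
    return sub.sum() / (2 * len(S))    # edge-weight sum / |S|

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
B = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
print(difference_density(A, B, [0, 1, 2]))
```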
Unsupervised Learning by Predicting Noise | Convolutional neural networks provide visual features that perform remarkably
well in many computer vision applications. However, training these networks
requires significant amounts of supervision. This paper introduces a generic
framework to train deep networks, end-to-end, with no supervision. We propose
to fix a set of target representations, called Noise As Targets (NAT), and to
constrain the deep features to align to them. This domain agnostic approach
avoids the standard unsupervised learning issues of trivial solutions and
collapsing of features. Thanks to a stochastic batch reassignment strategy and
a separable square loss function, it scales to millions of images. The proposed
approach produces representations that perform on par with state-of-the-art
unsupervised methods on ImageNet and Pascal VOC.
| 1 | 0 | 0 | 1 | 0 | 0 |
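A minimal sketch of the batch reassignment idea (our interpretation, not the authors' code): within a mini-batch, re-match the fixed noise targets to the current features so that the total squared distance, equivalently the negative dot product for unit-norm vectors, is minimized.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reassign_targets(features, targets):
    """One NAT-style batch reassignment step (sketch).

    Finds the permutation of fixed noise targets that minimizes the
    total squared distance to the current batch features; for unit-norm
    vectors this is equivalent to maximizing the total dot product.
    """
    cost = -features @ targets.T               # negative similarity
    row, col = linear_sum_assignment(cost)
    return targets[col]                        # targets aligned to features

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16)); f /= np.linalg.norm(f, axis=1, keepdims=True)
t = rng.normal(size=(8, 16)); t /= np.linalg.norm(t, axis=1, keepdims=True)
aligned = reassign_targets(f, t)   # train features toward `aligned` with an l2 loss
```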
Trajectory Generation for Millimeter Scale Ferromagnetic Swimmers: Theory and Experiments | Microrobots have the potential to impact many areas such as microsurgery,
micromanipulation and minimally invasive sensing. Due to their small size,
microrobots swim in a regime that is governed by low Reynolds number
hydrodynamics. In this paper, we consider small scale artificial swimmers that
are fabricated using ferromagnetic filaments and locomote in response to time
varying external magnetic fields. We motivate the design of previously proposed
control laws using tools from geometric mechanics and also demonstrate how new
control laws can be synthesized to generate net translation in such swimmers.
We further describe how to modify these control inputs to make the swimmers
track rich trajectories in the workspace by investigating stability properties
of their limit cycles in the orientation angles phase space. Following a
systematic design optimization, we develop a principled approach to encode
internal magnetization distributions in millimeter scale ferromagnetic
filaments. We verify and demonstrate this procedure experimentally and finally
show translation, trajectory tracking and turning in place locomotion in these
optimal swimmers using a Helmholtz coils setup.
| 1 | 0 | 0 | 0 | 0 | 0 |
Decoupled Block-Wise ILU(k) Preconditioner on GPU | This research investigates the implementation mechanism of block-wise ILU(k)
preconditioner on GPU. The block-wise ILU(k) algorithm requires both the level
k and the block size to be designed as variables. A decoupled ILU(k) algorithm
consists of a symbolic phase and a factorization phase. In the symbolic phase,
an ILU(k) nonzero pattern is established from the point-wise structure extracted
from a block-wise matrix. In the factorization phase, the block-wise matrix
with a variable block size is factorized into a block lower triangular matrix
and a block upper triangular matrix. A further diagonal factorization must be
performed on the block upper triangular matrix to adapt it to a parallel
triangular solver on GPU. We also present numerical experiments that study the
preconditioner's action for different levels k and block sizes.
| 1 | 0 | 0 | 0 | 0 | 0 |
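For orientation, the following toy Python sketch shows a point-wise ILU(0) factorization on a dense copy of a matrix, i.e., the level-0 special case with fill restricted to the original nonzero pattern. It is deliberately far simpler than the paper's decoupled block-wise GPU algorithm.

```python
import numpy as np

def ilu0(A):
    """Point-wise ILU(0) on a dense copy of A (toy sketch).

    Fill-in is restricted to the original nonzero pattern (level k = 0).
    Returns a matrix holding L (unit diagonal, strictly lower part) and U
    (upper part) in standard compact ILU storage.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    pattern = A != 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            if pattern[i, k]:
                A[i, k] /= A[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j] and pattern[k, j]:
                        A[i, j] -= A[i, k] * A[k, j]
    return A

A = np.array([[4., 1, 0], [1, 4, 1], [0, 1, 4]])
LU = ilu0(A)
```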
Multi-agent Economics and the Emergence of Critical Markets | The dual crises of the sub-prime mortgage crisis and the global financial
crisis have prompted a call for explanations of non-equilibrium market dynamics.
Recently a promising approach has been the use of agent based models (ABMs) to
simulate aggregate market dynamics. A key aspect of these models is the
endogenous emergence of critical transitions between equilibria, i.e. market
collapses, caused by multiple equilibria and changing market parameters.
Several research themes have developed microeconomic based models that include
multiple equilibria: social decision theory (Brock and Durlauf), quantal
response models (McKelvey and Palfrey), and strategic complementarities
(Goldstein). A gap that needs to be filled in the literature is a unified
analysis of the relationship between these models and how aggregate criticality
emerges from the individual agent level. This article reviews the agent-based
foundations of markets starting with the individual agent perspective of
McFadden and the aggregate perspective of catastrophe theory emphasising
connections between the different approaches. It is shown that changes in the
uncertainty agents have in the value of their interactions with one another,
even if these changes are one-sided, plays a central role in systemic market
risks such as market instability and the twin crises effect. These interactions
can endogenously cause crises that are an emergent phenomenon of markets.
| 0 | 0 | 0 | 0 | 0 | 1 |
Error-Correcting Neural Sequence Prediction | In this paper we propose a novel neural language modelling (NLM) method based
on \textit{error-correcting output codes} (ECOC), abbreviated as ECOC-NLM. This
latent variable based approach provides a principled way to choose a varying
amount of latent output codes and avoids exact softmax normalization. Instead
of minimizing measures between the predicted probability distribution and true
distribution, we use error-correcting codes to represent both predictions and
outputs. Secondly, we propose multiple ways to improve accuracy and convergence
rates by maximizing the separability between codes that correspond to classes
proportional to word embedding similarities. Lastly, we introduce a novel
method called \textit{Latent Mixture Sampling}, a technique that is used to
mitigate exposure bias and can be integrated into training latent-based neural
language models. This involves mixing the latent codes (i.e., variables) of past
predictions and past targets in one of two ways: (1) according to a predefined
sampling schedule or (2) a differentiable sampling procedure whereby the mixing
probability is learned throughout training by replacing the greedy argmax
operation with a smooth approximation. In evaluating Codeword Mixture Sampling
for ECOC-NLM, we also baseline it against CWMS in a closely related Hierarchical
Softmax-based NLM.
| 1 | 0 | 0 | 1 | 0 | 0 |
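A hedged sketch of the ECOC idea in isolation: each word gets a binary codeword, the model emits one logit per code bit, training uses per-bit binary cross-entropy, and decoding picks the nearest codeword in Hamming distance. The random codebook and logits below are stand-ins; the paper additionally shapes the codes by word-embedding similarity.

```python
import numpy as np

def ecoc_loss(logits, codeword):
    """Per-bit binary cross-entropy between logits and a target codeword."""
    p = 1.0 / (1.0 + np.exp(-logits))            # sigmoid per code bit
    return -np.mean(codeword * np.log(p) + (1 - codeword) * np.log(1 - p))

def decode(logits, codebook):
    """Nearest-codeword decoding: predict the word whose codeword
    minimizes Hamming distance to the thresholded logits."""
    bits = (logits > 0).astype(int)
    return np.argmin(np.abs(codebook - bits).sum(axis=1))

vocab, n_bits = 1000, 30                          # |V| words, 30-bit codes
rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(vocab, n_bits))
logits = rng.normal(size=n_bits)                  # stand-in model output
loss = ecoc_loss(logits, codebook[42])
word = decode(logits, codebook)
```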
Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem | We study sampling as optimization in the space of measures. We focus on
gradient flow-based optimization with the Langevin dynamics as a case study. We
investigate the source of the bias of the unadjusted Langevin algorithm (ULA)
in discrete time, and consider how to remove or reduce the bias. We point out
that the difficulty is that the heat flow is exactly solvable, but neither its
forward nor its backward method is implementable in general, except for Gaussian
data. We propose the symmetrized Langevin algorithm (SLA), which should have a
smaller bias than ULA, at the price of implementing a proximal gradient step in
space. We show SLA is in fact consistent for Gaussian target measure, whereas
ULA is not. We also illustrate various algorithms explicitly for Gaussian
target measure, including gradient descent, proximal gradient, and
Forward-Backward, and show they are all consistent.
| 0 | 0 | 0 | 1 | 0 | 0 |
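For reference, the unadjusted Langevin algorithm discussed above is a few lines of code; the sketch below targets a standard Gaussian, for which the discretization bias is visible in the stationary variance.

```python
import numpy as np

def ula(grad_U, x0, step, n_iter, rng):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    the Langevin dynamics dX = -grad U(X) dt + sqrt(2) dW."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# Target: standard Gaussian, U(x) = |x|^2 / 2, so grad U(x) = x.
rng = np.random.default_rng(0)
samples = np.array([ula(lambda x: x, np.zeros(2), 0.1, 500, rng)
                    for _ in range(1000)])
# For step h the ULA stationary variance is 1/(1 - h/2) > 1: the bias.
print(samples.mean(axis=0), samples.var(axis=0))
```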
Deep laser cooling in optical trap: two-level quantum model | We study laser cooling of $^{24}$Mg atoms in dipole optical trap with pumping
field resonant to narrow $(3s3s)\,^1S_0 \rightarrow \, (3s3p)\,^{3}P_1$
($\lambda = 457$ nm) optical transition. To describe laser cooling of atoms in
the optical trap while taking quantum recoil effects into account, we consider
two quantum models. The first is based on direct numerical solution of the
quantum kinetic equation for the atomic density matrix, and the second is a
simplified model based on decomposition of the atomic density matrix over
vibrational states in the dipole trap. We search for the pumping field
intensity and detuning that minimize the cooling energy and provide fast laser
cooling.
| 0 | 1 | 0 | 0 | 0 | 0 |
JADE: Joint Autoencoders for Dis-Entanglement | The problem of feature disentanglement has been explored in the literature,
for the purpose of image and video processing and text analysis.
State-of-the-art methods for disentangling feature representations rely on the
presence of many labeled samples. In this work, we present a novel method for
disentangling factors of variation in data-scarce regimes. Specifically, we
explore the application of feature disentangling for the problem of supervised
classification in a setting where few labeled samples exist, and there are no
unlabeled samples for use in unsupervised training. Instead, a similar dataset
exists which shares at least one direction of variation with the
sample-constrained dataset. We train our model end-to-end using the framework
of variational autoencoders and are able to experimentally demonstrate that
using an auxiliary dataset with similar variation factors contributes positively
to classification performance, yielding competitive results with the
state-of-the-art in unsupervised learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Numerical Simulations of Collisional Cascades at the Roche Limits of White Dwarf Stars | We consider the long-term collisional and dynamical evolution of solid
material orbiting in a narrow annulus near the Roche limit of a white dwarf.
With orbital velocities of 300 km/sec, systems of solids with initial
eccentricity $e \gtrsim 10^{-3}$ generate a collisional cascade where objects
with radii $r \lesssim$ 100--300 km are ground to dust. This process converts
1-100 km asteroids into 1 $\mu$m particles in $10^2 - 10^6$ yr. Throughout this
evolution, the swarm maintains an initially large vertical scale height $H$.
Adding solids at a rate $\dot{M}$ enables the system to find an equilibrium
where the mass in solids is roughly constant. This equilibrium depends on
$\dot{M}$ and $r_0$, the radius of the largest solid added to the swarm. When
$r_0 \lesssim$ 10 km, this equilibrium is stable. For larger $r_0$, the mass
oscillates between high and low states; the fraction of time spent in high
states ranges from 100% for large $\dot{M}$ to much less than 1% for small
$\dot{M}$. During high states, the stellar luminosity reprocessed by the solids
is comparable to the excess infrared emission observed in many metallic line
white dwarfs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Causal Effect Inference with Deep Latent-Variable Models | Learning individual-level causal effects from observational data, such as
inferring the most effective medication for a specific patient, is a problem of
growing importance for policy makers. The most important aspect of inferring
causal effects from observational data is the handling of confounders, factors
that affect both an intervention and its outcome. A carefully designed
observational study attempts to measure all important confounders. However,
even if one does not have direct access to all confounders, there may exist
noisy and uncertain measurement of proxies for confounders. We build on recent
advances in latent variable modeling to simultaneously estimate the unknown
latent space summarizing the confounders and the causal effect. Our method is
based on Variational Autoencoders (VAE) which follow the causal structure of
inference with proxies. We show our method is significantly more robust than
existing methods, and matches the state-of-the-art on previous benchmarks
focused on individual treatment effects.
| 1 | 0 | 0 | 1 | 0 | 0 |
Gamma-ray bursts and their relation to astroparticle physics and cosmology | This article gives an overview of gamma-ray bursts (GRBs) and their relation
to astroparticle physics and cosmology. GRBs are the most powerful explosions
in the universe that occur roughly once per day and are characterized by
flashes of gamma-rays typically lasting from a fraction of a second to
thousands of seconds. Even after more than four decades since their discovery
they still remain not fully understood. Two types of GRBs are observed:
spectrally harder short duration bursts and softer long duration bursts. The
long GRBs originate from the collapse of massive stars whereas the preferred
model for the short GRBs is coalescence of compact objects such as two neutron
stars or a neutron star and a black hole. There were suggestions that GRBs can
produce ultra-high energy cosmic rays and neutrinos. Also a certain sub-type of
GRBs may serve as a new standard candle that can help constrain and measure the
cosmological parameters to much higher redshift than was possible so far.
I will review the recent experimental observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discrete-time Risk-sensitive Mean-field Games | In this paper, we study a class of discrete-time mean-field games under the
infinite-horizon risk-sensitive discounted-cost optimality criterion.
Risk-sensitivity is introduced for each agent (player) via an exponential
utility function. In this game model, each agent is coupled with the rest of
the population through the empirical distribution of the states, which affects
both the agent's individual cost and its state dynamics. Under mild
assumptions, we establish the existence of a mean-field equilibrium in the
infinite-population limit as the number of agents ($N$) goes to infinity, and
then show that the policy obtained from the mean-field equilibrium constitutes
an approximate Nash equilibrium when $N$ is sufficiently large.
| 1 | 0 | 0 | 0 | 0 | 0 |
Glasner's problem for Polish groups with metrizable universal minimal flow | A problem of Glasner, now known as Glasner's problem, asks whether every
minimally almost periodic, monothetic Polish group is extremely amenable. The
purpose of this short note is to observe that a positive answer is obtained
under the additional assumption that the universal minimal flow is metrizable.
| 0 | 0 | 1 | 0 | 0 | 0 |
VAE with a VampPrior | Many different methods to train deep generative models have been introduced
in the past. In this paper, we propose to extend the variational auto-encoder
(VAE) framework with a new type of prior which we call "Variational Mixture of
Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture
distribution (e.g., a mixture of Gaussians) with components given by
variational posteriors conditioned on learnable pseudo-inputs. We further
extend this prior to a two-layer hierarchical model and show that this
architecture, with a coupled prior and posterior, learns significantly better
models. The model also avoids the usual local optima issues related to useless
latent dimensions that plague VAEs. We provide empirical studies on six
datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes,
Frey Faces and Histopathology patches, and show that applying the hierarchical
VampPrior delivers state-of-the-art results on all datasets in the unsupervised
permutation invariant setting and the best results or comparable to SOTA
methods for the approach with convolutional networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
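A minimal numerical sketch of evaluating the VampPrior log-density: a uniform mixture of K diagonal Gaussians whose parameters would, in the full model, come from the encoder applied to the learnable pseudo-inputs (random stand-ins are used here).

```python
import numpy as np
from scipy.special import logsumexp

def vamp_log_prior(z, means, logvars):
    """log p(z) for a VampPrior: uniform mixture of K diagonal Gaussians
    q(z | u_k), whose (mean, logvar) come from the encoder applied to
    the K learnable pseudo-inputs u_k."""
    K, d = means.shape
    # log N(z; mu_k, diag(exp(logvar_k))) for each component k
    comp = -0.5 * (d * np.log(2 * np.pi) + logvars.sum(axis=1)
                   + (((z - means) ** 2) / np.exp(logvars)).sum(axis=1))
    return logsumexp(comp) - np.log(K)

rng = np.random.default_rng(0)
K, d = 5, 2
means = rng.normal(size=(K, d))        # stand-ins for encoder outputs
logvars = np.zeros((K, d))
print(vamp_log_prior(np.zeros(d), means, logvars))
```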
Fast and Strong Convergence of Online Learning Algorithms | In this paper, we study the online learning algorithm without explicit
regularization terms. This algorithm is essentially a stochastic gradient
descent scheme in a reproducing kernel Hilbert space (RKHS). The polynomially
decaying step size in each iteration can play the role of regularization to
ensure the generalization ability of the online learning algorithm. We develop
a novel capacity-dependent analysis of the performance of the last iterate of
the online learning algorithm. The contribution of this paper is two-fold.
First, our analysis leads to a convergence rate in the standard mean square
distance which is the best so far. Second, we establish, for the first time,
the strong convergence of the last iterate with polynomially decaying step
sizes in the RKHS norm. We demonstrate that the theoretical analysis
established in this paper fully exploits the fine structure of the underlying
RKHS, and thus can lead to sharp error estimates for the online learning algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
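A hedged sketch of the algorithm under study, unregularized online least-squares in an RKHS with polynomially decaying steps, is given below; the kernel, step-size constants, and data are illustrative assumptions.

```python
import numpy as np

def online_kernel_sgd(X, Y, kernel, eta0=0.5, theta=0.5):
    """Unregularized online least-squares in an RKHS (sketch).

    After sample (x_t, y_t):  f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, .)
    with polynomially decaying step eta_t = eta0 * t^(-theta), which acts
    as implicit regularization. Returns the expansion coefficients of the
    last iterate: f(x) = sum_t c_t * kernel(X[t], x).
    """
    n = len(X)
    c = np.zeros(n)
    for t in range(n):
        f_xt = sum(c[s] * kernel(X[s], X[t]) for s in range(t))
        c[t] = -eta0 * (t + 1) ** (-theta) * (f_xt - Y[t])
    return c

gauss = lambda u, v: np.exp(-np.sum((u - v) ** 2))
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
Y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)
coef = online_kernel_sgd(X, Y, gauss)
```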
A new continuum theory for incompressible swelling materials | Swelling media (e.g. gels, tumors) are usually described by mechanical
constitutive laws (e.g. Hooke or Darcy laws). However, constitutive relations
of real swelling media are not well known. Here, we take an opposite route and
consider a simple packing heuristics, i.e. the particles can't overlap. We
deduce a formula for the equilibrium density under a confining potential. We
then consider its evolution when the average particle volume and confining
potential depend on time under two additional heuristics: (i) any two particles
can't swap their position; (ii) motion should obey some energy minimization
principle. These heuristics determine the medium velocity consistently with the
continuity equation. In the direction normal to the potential level sets the
velocity is related with that of the level sets while in the parallel
direction, it is determined by a Laplace-Beltrami operator on these sets. This
complex geometrical feature cannot be recovered using a simple Darcy law.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Hidden Binary Search Tree: A Balanced Rotation-Free Search Tree in the AVL RAM Model | In this paper we generalize the definition of "Search Trees" (ST) to enable
reference values other than the keys of previously inserted nodes. The idea
builds on the assumption that an $n$-node AVL (or Red-Black) tree requires in
order to assure $O(\log_2 n)$ worst-case search time, namely, that a single
comparison between two keys takes constant time. This means the size of each
key in bits is fixed to $B=c\log_2
n$ ($c\geq1$) once $n$ is determined, otherwise the $O(1)$-time comparison
assumption does not hold. Based on this we calculate \emph{ideal} reference
values from the mid-point of the interval $0..2^B$. This idea follows
`recursively' to assure each node along the search path is provided a reference
value that guarantees an overall logarithmic time. Because the search tree
property works only when keys are compared to reference values and these values
are calculated only during searches, we term the data structure as the Hidden
Binary Search Tree (HBST). We show elementary functions to maintain the HBST
height $O(B)=O(\log_2n)$. This result requires no special order on the input --
as does BST -- nor self-balancing procedures, as do AVL and Red-Black.
| 1 | 0 | 0 | 0 | 0 | 0 |
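The following Python sketch illustrates the hidden-reference-value idea as we read it: during descent the comparison key is the midpoint of an interval that halves at each level, so the height is bounded by the key width B with no rotations. Names and details are our own; distinct integer keys in [0, 2^B) are assumed.

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class HBST:
    """Sketch of a Hidden Binary Search Tree over B-bit integer keys.

    Reference values (interval midpoints) are computed on the fly during
    descent, so the height is O(B) with no rotations and no requirement
    on the insertion order. Search mirrors the descent in `insert`.
    """
    def __init__(self, bits):
        self.root, self.bits = None, bits

    def insert(self, key):
        if self.root is None:
            self.root = Node(key); return
        node, lo, hi = self.root, 0, 1 << self.bits
        while True:
            mid = (lo + hi) // 2          # hidden reference value
            if key < mid:
                if node.left is None:
                    node.left = Node(key); return
                node, hi = node.left, mid
            else:
                if node.right is None:
                    node.right = Node(key); return
                node, lo = node.right, mid

t = HBST(bits=8)
for k in [200, 3, 150, 97, 42]:
    t.insert(k)
```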
The sdB pulsating star V391 Peg and its putative giant planet revisited after 13 years of time-series photometric data | V391 Peg (alias HS2201+2610) is a subdwarf B (sdB) pulsating star that shows
both p- and g-modes. By studying the arrival times of the p-mode maxima and
minima through the O-C method, in a previous article the presence of a planet
was inferred with an orbital period of 3.2 yr and a minimum mass of 3.2 M_Jup.
Here we present an updated O-C analysis using a larger data set of 1066 hours
of photometric time series (~2.5x larger in terms of the number of data
points), which covers the period between 1999 and 2012 (compared with 1999-2006
of the previous analysis). Up to the end of 2008, the new O-C diagram of the
main pulsation frequency (f1) is compatible with (and improves) the previous
two-component solution representing the long-term variation of the pulsation
period (parabolic component) and the giant planet (sine wave component). Since
2009, the O-C trend of f1 changes, and the time derivative of the pulsation
period (p_dot) passes from positive to negative; the reason of this change of
regime is not clear and could be related to nonlinear interactions between
different pulsation modes. With the new data, the O-C diagram of the secondary
pulsation frequency (f2) continues to show two components (parabola and sine
wave), like in the previous analysis. Various solutions are proposed to fit the
O-C diagrams of f1 and f2, but in all of them, the sinusoidal components of f1
and f2 differ or at least agree less well than before. The nice agreement found
previously was a coincidence due to various small effects that are carefully
analysed. Now, with a larger dataset, the presence of a planet is more
uncertain and would require confirmation with an independent method. The new
data allow us to improve the measurement of p_dot for f1 and f2: using only the
data up to the end of 2008, we obtain p_dot_1=(1.34+-0.04)x10**-12 and
p_dot_2=(1.62+-0.22)x10**-12 ...
| 0 | 1 | 0 | 0 | 0 | 0 |
Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces | We study the convergence of an inexact version of the classical
Krasnosel'skii-Mann iteration for computing fixed points of nonexpansive maps.
Our main result establishes a new metric bound for the fixed-point residuals,
from which we derive their rate of convergence as well as the convergence of
the iterates towards a fixed point. The results are applied to three variants
of the basic iteration: infeasible iterations with approximate projections, the
Ishikawa iteration, and diagonal Krasnosel'skii-Mann schemes. The results are
also extended to continuous time in order to study the asymptotics of
nonautonomous evolution equations governed by nonexpansive operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
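A minimal sketch of the inexact iteration analyzed above, with synthetic summable errors and a rotation as the nonexpansive map (both illustrative assumptions):

```python
import numpy as np

def inexact_km(T, x0, steps, alpha=0.5, eps=lambda k: 1.0 / (k + 1) ** 2):
    """Inexact Krasnosel'skii-Mann iteration (sketch):
        x_{k+1} = (1 - alpha) x_k + alpha (T(x_k) + e_k),
    with error magnitudes ||e_k|| <= eps(k), summable here."""
    rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for k in range(steps):
        e = rng.normal(size=x.shape)
        e *= eps(k) / max(np.linalg.norm(e), 1e-12)   # scale error to eps(k)
        x = (1 - alpha) * x + alpha * (T(x) + e)
    return x

# Nonexpansive map: rotation by 90 degrees (unique fixed point at the origin).
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x_star = inexact_km(lambda v: R @ v, [1.0, 1.0], steps=2000)
print(np.linalg.norm(x_star))   # fixed-point residual tends to 0
```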
Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images | Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal
biometry, and owing to its time-consuming process, there has been a great
demand for automatic estimation. However, the automated analysis of ultrasound
images is complicated because they are patient-specific, operator-dependent,
and machine-specific. Among various types of fetal biometry, the accurate
estimation of abdominal circumference (AC) is especially difficult to perform
automatically because the abdomen has low contrast against surroundings,
non-uniform contrast, and irregular shape compared to other parameters. We
propose a method for the automatic estimation of the fetal AC from 2D
ultrasound data through a specially designed convolutional neural network
(CNN), which takes account of doctors' decision process, anatomical structure,
and the characteristics of the ultrasound image. The proposed method uses CNN
to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical
vein) and Hough transformation for measuring AC. We test the proposed method
using clinical ultrasound data acquired from 56 pregnant women. Experimental
results show that, with relatively small training samples, the proposed CNN
provides sufficient classification results for AC estimation through the Hough
transformation. The proposed method automatically estimates AC from ultrasound
images. The method is quantitatively evaluated, and shows stable performance in
most cases and even for ultrasound images deteriorated by shadowing artifacts.
As a result of experiments for our acceptance check, the accuracies are 0.809
and 0.771 with the expert 1 and expert 2, respectively, while the accuracy
between the two experts is 0.905. However, for cases of oversized fetus, when
the amniotic fluid is not observed or the abdominal area is distorted, it could
not correctly estimate AC.
| 1 | 0 | 0 | 1 | 0 | 0 |
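As a hedged illustration of the final geometric step only (the CNN classification stage is not reproduced), the sketch below fits a circle to a binary abdomen mask with OpenCV's Hough transform and converts the radius to a circumference; the pixel spacing and Hough parameters are made-up values.

```python
import cv2
import numpy as np

def estimate_ac(mask, pixel_mm):
    """Fit a circle to a binary abdomen mask with the Hough transform and
    return the abdominal circumference in millimetres (sketch)."""
    img = (mask * 255).astype(np.uint8)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=img.shape[0],   # expect one circle
                               param1=50, param2=10,
                               minRadius=5, maxRadius=0)
    if circles is None:
        return None
    _, _, r = circles[0, 0]               # strongest circle (x, y, radius)
    return 2.0 * np.pi * r * pixel_mm

# Synthetic test: a circle of radius 60 px at 0.2 mm/pixel -> AC ~ 75.4 mm.
mask = np.zeros((256, 256), np.uint8)
cv2.circle(mask, (128, 128), 60, 1, thickness=3)
print(estimate_ac(mask, pixel_mm=0.2))
```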
Heavy fermion quantum criticality at dilute carrier limit in CeNi$_{2-δ}$(As$_{1-x}$P$_{x}$)$_{2}$ | We study the quantum phase transitions in the nickel pnictides,
CeNi$_{2-\delta}$(As$_{1-x}$P$_{x}$)$_{2}$ ($\delta$ $\approx$ 0.07-0.22). This
series displays the distinct heavy fermion behavior in the rarely studied
parameter regime of dilute carrier limit. We systematically investigate the
magnetization, specific heat and electrical transport down to low temperatures.
Upon increasing the P-content, the antiferromagnetic order of the Ce-4$f$
moment is suppressed continuously and vanishes at $x_c \sim$ 0.55. At this
doping, the temperature dependences of the specific heat and longitudinal
resistivity display non-Fermi liquid behavior. Both the residual resistivity
$\rho_0$ and the Sommerfeld coefficient $\gamma_0$ are sharply peaked around
$x_c$. When the P-content reaches close to 100\%, we observe a clear
low-temperature crossover into the Fermi liquid regime. In contrast to what
happens in the parent compound $x$ = 0.0 as a function of pressure, we find a
surprising result that the non-Fermi liquid behavior persists over a nonzero
range of doping concentration, $x_c<x<0.9$. In this doping range, at the lowest
measured temperatures, the temperature dependence of the specific-heat
coefficient is logarithmically divergent and that of the electrical resistivity
is linear. We discuss the properties of
CeNi$_{2-\delta}$(As$_{1-x}$P$_{x}$)$_{2}$ in comparison with those of its 1111
counterpart, CeNi(As$_{1-x}$P$_{x}$)O. Our results indicate a non-Fermi liquid
phase in the global phase diagram of heavy fermion metals.
| 0 | 1 | 0 | 0 | 0 | 0 |
Invariant Gibbs measures for the 2-d defocusing nonlinear wave equations | We consider the defocusing nonlinear wave equations (NLW) on the
two-dimensional torus. In particular, we construct invariant Gibbs measures for
the renormalized so-called Wick ordered NLW. We then prove weak universality of
the Wick ordered NLW, showing that the Wick ordered NLW naturally appears as a
suitable scaling limit of non-renormalized NLW with Gaussian random initial
data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Topological Analysis and Synthesis of Structures related to Certain Classes of K-Geodetic Computer Networks | A fundamental characteristic of computer networks is their topological
structure. Describing the structural characteristics of computer networks
remains an incompletely solved problem, and methods for finding network
structures that are extremal with respect to selected quality-of-operation
parameters have not been fully developed. The construction of computer
networks with optimum
indices of their operation quality is reduced to the solution of discrete
optimization problems over graphs. This paper describes in detail the
advantages of the practical use of k-geodetic graphs [2, 3] in the topological
design of computer networks as an alternative for the solution of the
fundamental problems mentioned above which, we believe, are still open. Also,
the topological analysis and synthesis of some classes of these networks have
been performed.
| 1 | 0 | 0 | 0 | 0 | 0 |
A distributed-memory hierarchical solver for general sparse linear systems | We present a parallel hierarchical solver for general sparse linear systems
on distributed-memory machines. For large-scale problems, this fully algebraic
algorithm is faster and more memory-efficient than sparse direct solvers
because it exploits the low-rank structure of fill-in blocks. Depending on the
accuracy of low-rank approximations, the hierarchical solver can be used either
as a direct solver or as a preconditioner. The parallel algorithm is based on
data decomposition and requires only local communication for updating boundary
data on every processor. Moreover, the computation-to-communication ratio of
the parallel algorithm is approximately the volume-to-surface-area ratio of the
subdomain owned by every processor. We present various numerical results to
demonstrate the versatility and scalability of the parallel algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems | As robotic systems are moved out of factory work cells into human-facing
environments questions of choreography become central to their design,
placement, and application. With a human viewer or counterpart present, a
system will automatically be interpreted within context, style of movement, and
form factor by human beings as animate elements of their environment. The
interpretation by this human counterpart is critical to the success of the
system's integration: knobs on the system need to make sense to a human
counterpart; an artificial agent should have a way of notifying a human
counterpart of a change in system state, possibly through motion profiles; and
the motion of a human counterpart may have important contextual clues for task
completion. Thus, professional choreographers, dance practitioners, and
movement analysts are critical to research in robotics. They have design
methods for movement that align with human audience perception, can identify
simplified features of movement for human-robot interaction goals, and have
detailed knowledge of the capacity of human movement. This article provides
approaches employed by one research lab, specific impacts on technical and
artistic projects within, and principles that may guide future such work. The
background section reports on choreography, somatic perspectives,
improvisation, the Laban/Bartenieff Movement System, and robotics. From this
context methods including embodied exercises, writing prompts, and community
building activities have been developed to facilitate interdisciplinary
research. The results of this work are presented as an overview of a smattering
of projects in areas like high-level motion planning, software development for
rapid prototyping of movement, artistic output, and user studies that help
understand how people interpret movement. Finally, guiding principles for other
groups to adopt are posited.
| 1 | 0 | 0 | 0 | 0 | 0 |
Development of verification system of socio-demographic data of virtual community member | The important task of developing a verification system for the data of virtual
community members is solved on the basis of computer-linguistic analysis of the
content of a large sample of Ukrainian virtual communities. The subject of the
research is methods and tools for verifying the socio-demographic
characteristics of web members based on computer-linguistic analysis of the
results of their communicative interaction. The aim of the paper is to verify
web users' personal data on the basis of computer-linguistic analysis of web
members' information tracks. The structure of the verification software for
web-user profiles is designed for a practical implementation of the assigned
tasks. A method for verifying the personal data of web members by analyzing the
information track of a virtual community member is presented. For the first
time, a method for checking the authenticity of web members' personal data is
developed, which enabled the design of a verification tool for the
socio-demographic characteristics of web members. As a result of the conducted
experiments, a verification system is developed that forms verified
socio-demographic profiles of web members. The user interface of the developed
verification system is also presented. The effectiveness and efficiency of the
developed methods and tools for web-community administration tasks are
demonstrated through their approbation. The share of false results produced by
the verification system is 18%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Demonstration of an efficient, photonic-based astronomical spectrograph on an 8-m telescope | We demonstrate for the first time an efficient, photonic-based astronomical
spectrograph on the 8-m Subaru Telescope. An extreme adaptive optics system is
combined with pupil apodization optics to efficiently inject light directly
into a single-mode fiber, which feeds a compact cross-dispersed spectrograph
based on array waveguide grating technology. The instrument currently offers a
throughput of 5% from sky-to-detector which we outline could easily be upgraded
to ~13% (assuming a coupling efficiency of 50%). The isolated spectrograph
throughput from the single-mode fiber to detector was 42% at 1550 nm. The
coupling efficiency into the single-mode fiber was limited by the achievable
Strehl ratio on a given night. A coupling efficiency of 47% has been achieved
with ~60% Strehl ratio on-sky to date. Improvements to the adaptive optics
system will enable 90% Strehl ratio and a coupling of up to 67% eventually.
This work demonstrates that the unique combination of advanced technologies
enables the realization of a compact and highly efficient spectrograph, setting
a precedent for future instrument design on very-large and extremely-large
telescopes.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Game of Random Variables | This paper analyzes a simple game with $n$ players. We fix a mean, $\mu$, in
the interval $[0, 1]$ and let each player choose any random variable
distributed on that interval with the given mean. The winner of the zero-sum
game is the player whose random variable has the highest realization. We show
that the position of the mean within the interval is paramount. Remarkably, if
the given mean is above a crucial threshold then the unique equilibrium must
contain a point mass on $1$. The cutoff is strictly decreasing in the number of
players, $n$; and for fixed $\mu$, as the number of players is increased, each
player places more weight on $1$ at equilibrium. We characterize the
equilibrium as the number of players goes to infinity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Parallel Structure from Motion from Local Increment to Global Averaging | In this paper, we tackle the accurate and consistent Structure from Motion
(SfM) problem, in particular camera registration, for scenes far exceeding the
memory of a single computer, in parallel. Different from previous methods which
drastically simplify the parameters of SfM and sacrifice the accuracy of the
final reconstruction, we try to preserve the connectivities among cameras by
proposing a camera clustering algorithm to divide a large SfM problem into
smaller sub-problems in terms of camera clusters with overlapping. We then
exploit a hybrid formulation that applies the relative poses from local
incremental SfM into a global motion averaging framework and produce accurate
and consistent global camera poses. Our scalable formulation in terms of camera
clusters is highly applicable to the whole SfM pipeline including track
generation, local SfM, 3D point triangulation and bundle adjustment. We are
even able to reconstruct the camera poses of a city-scale data-set containing
more than one million high-resolution images with superior accuracy and
robustness evaluated on benchmark, Internet, and sequential data-sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proceedings 2nd Workshop on Models for Formal Analysis of Real Systems | This volume contains the proceedings of MARS 2017, the second workshop on
Models for Formal Analysis of Real Systems, held on April 29, 2017 in Uppsala,
Sweden, as an affiliated workshop of ETAPS 2017, the European Joint Conferences
on Theory and Practice of Software.
The workshop emphasises modelling over verification. It aims at discussing
the lessons learned from applying formal methods to the verification and
analysis of realistic systems. Examples are:
(1) Which formalism is chosen, and why?
(2) Which abstractions have to be made and why?
(3) How are important characteristics of the system modelled?
(4) Were there any complications while modelling the system?
(5) Which measures were taken to guarantee the accuracy of the model?
We invited papers that present full models of real systems, which may lay the
basis for future comparison and analysis. An aim of the workshop is to present
different modelling approaches and discuss pros and cons for each of them.
Alternative formal descriptions of the systems presented at this workshop are
encouraged, which should foster the development of improved specification
formalisms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Negative electronic compressibility and nanoscale inhomogeneity in ionic-liquid gated two-dimensional superconductors | When the electron density of highly crystalline thin films is tuned by
chemical doping or ionic liquid gating, interesting effects appear, including
unconventional superconductivity, sizeable spin-orbit coupling, competition
with charge-density waves, and a debated low-temperature metallic state that
seems to avoid the superconducting or insulating fate of standard
two-dimensional electron systems. Some experiments also find a marked tendency
to a negative electronic compressibility. We suggest that this indicates an
inclination for electronic phase separation resulting in a nanoscopic
inhomogeneity. Although the mild modulation of the inhomogeneous landscape is
compatible with a high electron mobility in the metallic state, this
intrinsically inhomogeneous character is highlighted by the peculiar behaviour
of the metal-to-superconductor transition. Modelling the system with
superconducting puddles embedded in a metallic matrix, we fit the peculiar
resistance vs. temperature curves of systems like TiSe2, MoS2, and ZrNCl. In
this framework also the low-temperature debated metallic state finds a natural
explanation in terms of the pristine metallic background embedding
non-percolating superconducting clusters. An intrinsically inhomogeneous
character naturally raises the question of the formation mechanism(s). We
propose a mechanism based on the interplay between electrons and the charges
of the gating ionic liquid.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symbolic Music Genre Transfer with CycleGAN | Deep generative models such as Variational Autoencoders (VAEs) and Generative
Adversarial Networks (GANs) have recently been applied to style and domain
transfer for images, and in the case of VAEs, music. GAN-based models employing
several generators and some form of cycle consistency loss have been among the
most successful for image domain transfer. In this paper we apply such a model
to symbolic music and show the feasibility of our approach for music genre
transfer. Evaluations using separate genre classifiers show that the style
transfer works well. In order to improve the fidelity of the transformed music,
we add additional discriminators that cause the generators to keep the
structure of the original music mostly intact, while still achieving strong
genre transfer. Visual and audible results further show the potential of our
approach. To the best of our knowledge, this paper represents the first
application of GANs to symbolic music domain transfer.
| 1 | 0 | 0 | 0 | 0 | 0 |
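A minimal sketch of the cycle-consistency term that keeps the original structure intact; the "generators" below are toy callables on piano-roll-like arrays, standing in for the trained networks.

```python
import numpy as np

def cycle_consistency_loss(G_ab, G_ba, x_a, x_b, lam=10.0):
    """L1 cycle-consistency term used alongside the adversarial losses:
    translating A->B->A (and B->A->B) should reproduce the input, which
    keeps the structure of the original music mostly intact."""
    loss_a = np.abs(G_ba(G_ab(x_a)) - x_a).mean()
    loss_b = np.abs(G_ab(G_ba(x_b)) - x_b).mean()
    return lam * (loss_a + loss_b)

# Stand-in "generators" acting on piano-roll style arrays (64 x 84).
G_ab = lambda x: np.clip(x + 0.1, 0, 1)
G_ba = lambda x: np.clip(x - 0.1, 0, 1)
x_a = np.random.rand(64, 84); x_b = np.random.rand(64, 84)
print(cycle_consistency_loss(G_ab, G_ba, x_a, x_b))
```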
Methodology for Multi-stage, Operations- and Uncertainty-Aware Placement and Sizing of FACTS Devices in a Large Power Transmission System | We develop new optimization methodology for planning installation of Flexible
Alternating Current Transmission System (FACTS) devices of the parallel and
shunt types into large power transmission systems, which makes it possible to
delay or avoid the installation of generally much more expensive power lines.
The methodology takes as input projected economic development, expressed
through a paced
growth of the system loads, as well as uncertainties, expressed through
multiple scenarios of the growth. We price new devices according to their
capacities. Installation cost contributes to the optimization objective in
combination with the cost of operations integrated over time and averaged over
the scenarios. The multi-stage (-time-frame) optimization aims to achieve a
gradual distribution of new resources in space and time. Constraints on the
investment budget, or equivalently on building capacity, are introduced at
each time frame. Our approach adjusts operationally not only
newly installed FACTS devices but also other already existing flexible degrees
of freedom. This complex optimization problem is stated using the most general
AC Power Flows. Non-linear, non-convex, multiple-scenario and multi-time-frame
optimization is resolved via efficient heuristics, consisting of a sequence of
alternating Linear Programmings or Quadratic Programmings (depending on the
generation cost) and AC-PF solution steps designed to maintain operational
feasibility for all scenarios. Computational scalability and application of the
newly developed approach is illustrated on the example of the 2736-nodes large
Polish system. One of the most important advantages of the framework is that
the optimal FACTS capacity is built up gradually at each time frame in a
limited number of locations, thus allowing the system to be better prepared
for possible congestion due to future economic and other uncertainties.
| 1 | 1 | 1 | 0 | 0 | 0 |
PC Proxy: A New Method of Dynamical Tracer Reconstruction | A detailed development of the principal component proxy method of dynamical
tracer reconstruction is presented, including error analysis. The method works
by correlating the largest principal components of a matrix representation of
the transport dynamics with a set of sparse measurements. The Lyapunov spectrum
was measured and used to quantify the lifetime of each principal component. The
method was tested on the 500 K isentropic surface with ozone measurements from
the Polar Aerosol and Ozone Measurement (POAM) III satellite instrument during
October and November 1998 and compared with the older proxy tracer method which
works by correlating measurements with a single other tracer or proxy. Using a
60 day integration time and five (5) principal components, cross validation of
globally reconstructed ozone and comparison with ozone sondes returned
root-mean-square errors of 0.16 ppmv and 0.36 ppmv, respectively. This compares
favourably with the classic proxy tracer method in which a passive tracer
equivalent latitude field was used for the proxy and which returned RMS errors
of 0.30 ppmv and 0.59 ppmv for cross-validation and sonde validation
respectively. The method seems especially effective for shorter-lived tracers
and was far more accurate than the classic method at predicting ozone
concentration in the Southern hemisphere at the end of winter.
| 0 | 1 | 0 | 0 | 0 | 0 |
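A hedged numerical sketch of the core reconstruction step: take the leading principal components of a matrix standing in for the transport dynamics, regress the sparse measurements onto them, and rebuild the global field. All sizes and data are illustrative.

```python
import numpy as np

def pc_proxy_reconstruct(ensemble, obs_idx, obs_val, n_pc=5):
    """Reconstruct a global tracer field from sparse measurements (sketch).

    ensemble : (n_fields, n_gridpoints) matrix whose leading right singular
               vectors stand in for the principal components of the dynamics
    obs_idx  : indices of the measured grid points
    obs_val  : measured tracer values at those points
    """
    mean = ensemble.mean(axis=0)
    _, _, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    pcs = Vt[:n_pc]                               # (n_pc, n_gridpoints)
    # Least-squares fit of the measurements to the PCs sampled at obs_idx.
    coef, *_ = np.linalg.lstsq(pcs[:, obs_idx].T,
                               obs_val - mean[obs_idx], rcond=None)
    return mean + coef @ pcs                      # global reconstruction

rng = np.random.default_rng(0)
fields = rng.normal(size=(40, 500))               # stand-in dynamics matrix
idx = rng.choice(500, size=30, replace=False)     # sparse measurement sites
truth = fields[0]
recon = pc_proxy_reconstruct(fields, idx, truth[idx])
```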
Magnetic Correlations in the Two-dimensional Repulsive Fermi Hubbard Model | The repulsive Fermi Hubbard model on the square lattice has a rich phase
diagram near half-filling (corresponding to the particle density per lattice
site $n=1$): for $n=1$ the ground state is an antiferromagnetic insulator, at
$0.6 < n \lesssim 0.8$, it is a $d_{x^2-y^2}$-wave superfluid (at least for
moderately strong interactions $U \lesssim 4t$ in terms of the hopping $t$),
and the region $1-n \ll 1$ is most likely subject to phase separation. Much of
this physics is preempted at finite temperatures and to an extent driven by
strong magnetic fluctuations, their quantitative characteristics and how they
change with the doping level being much less understood. Experiments on
ultra-cold atoms have recently gained access to this interesting fluctuation
regime, which is now under extensive investigation. In this work we employ a
self-consistent skeleton diagrammatic approach to quantify the characteristic
temperature scale $T_{M}(n)$ for the onset of magnetic fluctuations with a
large correlation length and identify their nature. Our results suggest that
the strongest fluctuations---and hence highest $T_{M}$ and easiest experimental
access to this regime---are observed at $U/t \approx 4-6$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bridge type classification: supervised learning on a modified NBI dataset | A key phase in the bridge design process is the selection of the structural
system. Due to budget and time constraints, engineers typically rely on
engineering judgment and prior experience when selecting a structural system,
often considering a limited range of design alternatives. The objective of this
study was to explore the suitability of supervised machine learning as a
preliminary design aid that provides guidance to engineers with regard to the
statistically optimal bridge type to choose, ultimately improving the
likelihood of optimized design, design standardization, and reduced maintenance
costs. In order to devise this supervised learning system, data for over
600,000 bridges from the National Bridge Inventory database were analyzed. Key
attributes for determining the bridge structure type were identified through
three feature selection techniques. Potentially useful attributes like seismic
intensity and historic data on the cost of materials (steel and concrete) were
then added from the US Geological Survey (USGS) database and Engineering News
Record. Decision tree, Bayes network and Support Vector Machines were used for
predicting the bridge design type. Due to state-to-state variations in material
availability, material costs, and design codes, supervised learning models
based on the complete data set did not yield favorable results. Supervised
learning models were then trained and tested using 10-fold cross validation on
data for each state. Inclusion of seismic data improved the model performance
noticeably. The data was then resampled to reduce the bias of the models
towards more common design types, and the supervised learning models thus
constructed showed further improvements in performance. The average recall and
precision for the state models was 88.6% and 88.0% using Decision Trees, 84.0%
and 83.7% using Bayesian Networks, and 80.8% and 75.6% using SVM.
| 0 | 0 | 0 | 1 | 0 | 0 |
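A minimal sketch of the evaluation protocol described above, per-state 10-fold cross-validation of a decision tree, using stand-in data in place of the NBI attributes:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Per-state 10-fold cross-validation of a decision tree (sketch).
# X holds bridge attributes (span, material costs, seismic intensity, ...);
# y holds the structure type; `state` holds the state code of each bridge.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                       # stand-in features
y = rng.integers(0, 5, size=1000)                    # stand-in design types
state = rng.integers(0, 3, size=1000)                # stand-in state codes

for s in np.unique(state):
    m = state == s
    scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                             X[m], y[m], cv=10, scoring="recall_macro")
    print(f"state {s}: mean recall {scores.mean():.3f}")
```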
Automatic Question-Answering Using A Deep Similarity Neural Network | Automatic question-answering is a classical problem in natural language
processing, which aims at designing systems that can automatically answer a
question, in the same way as a human does. In this work, we propose a deep
learning based model for automatic question-answering. First the questions and
answers are embedded using neural probabilistic modeling. Then a deep
similarity neural network is trained to find the similarity score of a pair of
answer and question. Then for each question, the best answer is found as the
one with the highest similarity score. We first train this model on a
large-scale public question-answering database, and then fine-tune it to
transfer to the customer-care chat data. We have also tested our framework on a
public question-answering database and achieved very good performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
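As a toy stand-in for the learned similarity network, the sketch below ranks candidate answers by cosine similarity between fixed embeddings; in the paper the score comes from the trained deep similarity model rather than raw cosine.

```python
import numpy as np

def best_answer(q_vec, answer_vecs):
    """Rank candidate answers by cosine similarity to the question
    embedding and return the index of the best one (sketch)."""
    q = q_vec / np.linalg.norm(q_vec)
    A = answer_vecs / np.linalg.norm(answer_vecs, axis=1, keepdims=True)
    return int(np.argmax(A @ q))

rng = np.random.default_rng(0)
q = rng.normal(size=128)                 # stand-in question embedding
answers = rng.normal(size=(50, 128))     # stand-in answer embeddings
print(best_answer(q, answers))
```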
Long ties accelerate noisy threshold-based contagions | Changes to network structure can substantially affect when and how widely new
ideas, products, and conventions are adopted. In models of biological
contagion, interventions that randomly rewire edges (making them "longer")
accelerate spread. However, there are other models relevant to social
contagion, such as those motivated by myopic best-response in games with
strategic complements, in which an individual's behavior is described by a
threshold number of adopting neighbors above which adoption occurs (i.e.,
complex contagions). Recent work has argued that highly clustered, rather than
random, networks facilitate spread of these complex contagions. Here we show
that minor modifications of prior analyses, which make them more realistic,
reverse this result. The modification is that we allow below-threshold
adoption to occur very rarely, i.e., occasionally adoption occurs where there
is only one adopting neighbor. To model the trade-off between long and short
edges we
consider networks that are the union of cycle-power-$k$ graphs and random
graphs on $n$ nodes. We study how the time to global spread changes as we
replace the cycle edges with (random) long ties. Allowing adoptions below
threshold to occur with order $1/\sqrt{n}$ probability is enough to ensure that
random rewiring accelerates spread. Simulations illustrate the robustness of
these results to other commonly-posited models for noisy best-response
behavior. We then examine empirical social networks, where we find that
hypothetical interventions that (a) randomly rewire existing edges or (b) add
random edges reduce time to spread compared with the original network or
addition of "short", triad-closing edges, respectively. This substantially
revises conclusions about how interventions change the spread of behavior,
suggesting that those wanting to increase spread should induce formation of
long ties, rather than triad-closing ties.
| 1 | 0 | 0 | 0 | 0 | 0 |
Detecting topological transitions in two dimensions by Hamiltonian evolution | We show that the evolution of two-component particles governed by a
two-dimensional spin-orbit lattice Hamiltonian can reveal transitions between
topological phases. A kink in the mean width of the particle distribution
signals the closing of the band gap, a prerequisite for a quantum phase
transition between topological phases. Furthermore, for realistic and
experimentally motivated Hamiltonians the density profile in topologically
non-trivial phases displays characteristic rings in the vicinity of the origin
that are absent in trivial phases. The results are expected to have immediate
application to systems of ultracold atoms and photonic lattices.
| 0 | 1 | 0 | 0 | 0 | 0 |
A search for optical bursts from the repeating fast radio burst FRB 121102 | We present a search for optical bursts from the repeating fast radio burst
FRB 121102 using simultaneous observations with the high-speed optical camera
ULTRASPEC on the 2.4-m Thai National Telescope and radio observations with the
100-m Effelsberg Radio Telescope. A total of 13 radio bursts were detected, but
we found no evidence for corresponding optical bursts in our 70.7-ms frames.
The 5-sigma upper limit to the optical flux density during our observations is
0.33 mJy at 767 nm. This gives an upper limit on the optical burst fluence of
0.046 Jy ms, which constrains the broadband spectral index of the burst
emission to alpha < -0.2. Two of the radio pulses are separated by just 34 ms,
which may represent an upper limit on a possible underlying periodicity (a
rotation period typical of pulsars), or these pulses may have come from a
single emission window that is a small fraction of a possible period.
| 0 | 1 | 0 | 0 | 0 | 0 |
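The spectral-index bound follows from comparing fluences across the radio-optical frequency ratio; a back-of-the-envelope check, in which the radio fluence and observing frequency are assumed values for illustration (the abstract does not quote them):

```python
import math

# Optical numbers are from the abstract; the radio fluence and
# frequency are illustrative assumptions, not quoted values.
nu_opt = 3e8 / 767e-9        # 767 nm converted to Hz, ~3.9e14
nu_radio = 1.4e9             # assumed L-band observing frequency
fluence_opt_limit = 0.046    # Jy ms, 5-sigma upper limit
fluence_radio = 0.6          # Jy ms, hypothetical burst fluence

# Fluence ~ nu^alpha, so the optical non-detection bounds alpha above.
alpha_max = (math.log10(fluence_opt_limit / fluence_radio)
             / math.log10(nu_opt / nu_radio))
print(f"alpha < {alpha_max:.2f}")   # ~ -0.2 for these inputs
```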
de Haas-van Alphen measurement of the antiferromagnet URhIn$_5$ | We report on the results of a de Haas-van Alphen (dHvA) measurement performed
on the recently discovered antiferromagnet URhIn$_5$ ($T_N$ = 98 K), a
5\textit{f}-analogue of the well-studied heavy fermion antiferromagnet
CeRhIn$_5$. The Fermi surface is found to consist of four surfaces: a roughly
spherical pocket $\beta$, with $F_\beta \simeq 0.3$ kT; a pillow-shaped closed
surface, $\alpha$, with $F_\alpha \simeq 1.1$ kT; and two higher frequencies
$\gamma_1$ with $F_{\gamma_1} \simeq 3.2$ kT and $\gamma_2$ with $F_{\gamma_2}
\simeq 3.5$ kT that are seen only near the \textit{c}-axis, and that may arise
on cylindrical Fermi surfaces. The measured cyclotron masses range from 1.9
$m_e$ to 4.3 $m_e$. A simple LDA+SO calculation performed for the paramagnetic
ground state shows a very different Fermi surface topology, demonstrating a
need for more advanced electronic structure calculations.
| 0 | 1 | 0 | 0 | 0 | 0 |
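The quoted frequencies map onto extremal Fermi-surface cross sections through the Onsager relation $A = 2\pi e F/\hbar$; a quick conversion (the circular-orbit $k_F$ is only meaningful for the roughly spherical pocket):

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

def extremal_area(F_tesla):
    """Onsager relation: extremal Fermi-surface cross section (m^-2)
    corresponding to a dHvA frequency F."""
    return 2 * math.pi * e * F_tesla / hbar

# Frequencies from the abstract, converted from kT to T.
for name, F in [("beta", 300), ("alpha", 1100),
                ("gamma_1", 3200), ("gamma_2", 3500)]:
    A = extremal_area(F)
    kF = math.sqrt(A / math.pi)   # radius if the orbit were circular
    print(f"{name:8s} F = {F:5d} T   A = {A:.2e} m^-2   "
          f"k_F = {kF / 1e10:.3f} A^-1")
```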
Indoor Frame Recovery from Refined Line Segments | An important yet challenging problem in understanding indoor scene is
recovering indoor frame structure from a monocular image. It is more difficult
when occlusions and illumination vary, and object boundaries are weak. To
overcome these difficulties, a new approach based on line segment refinement
with two constraints is proposed. First, the line segments are refined by four
consecutive operations, i.e., reclassifying, connecting, fitting, and voting.
Specifically, misclassified line segments are revised by the reclassifying
operation; short line segments are joined by the connecting operation;
undetected key line segments are recovered by the fitting operation with the
help of the vanishing points; and the line segments converging on the frame are
selected by the voting operation. Second, we construct four frame models
according to four classes of possible shooting angles of the monocular image;
the properties of each frame model are characterized by enforcing the
cross-ratio and depth constraints. The indoor frame is then constructed by
fitting the refined line segments to the related frame model under the two
constraints, which jointly improve the accuracy of the recovered frame.
Experimental results on a collection
of over 300 indoor images indicate that our algorithm has the capability of
recovering the frame from complex indoor scenes.
| 1 | 0 | 0 | 0 | 0 | 0 |
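Of the four refinement operations, the connecting step is the easiest to make concrete; a hypothetical sketch that merges two nearly collinear, nearby segments, with illustrative angle and gap tolerances (not the paper's exact criteria):

```python
import numpy as np

def maybe_connect(s1, s2, angle_tol=np.deg2rad(3), gap_tol=10.0):
    """Join two short segments into one when they are nearly collinear
    and their endpoints are close; a sketch of the 'connecting'
    operation only. Segments are ((x0, y0), (x1, y1)) endpoint pairs."""
    p = [np.asarray(pt, float) for pt in (*s1, *s2)]
    d1, d2 = p[1] - p[0], p[3] - p[2]
    # Angle between the two direction vectors (orientation-agnostic).
    cos = abs(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    ang = np.arccos(np.clip(cos, -1.0, 1.0))
    # Smallest endpoint-to-endpoint gap between the two segments.
    gap = min(np.linalg.norm(a - b) for a in p[:2] for b in p[2:])
    if ang > angle_tol or gap > gap_tol:
        return None
    # Keep the two mutually farthest endpoints as the merged segment.
    pairs = [(np.linalg.norm(a - b), (tuple(a), tuple(b)))
             for i, a in enumerate(p) for b in p[i + 1:]]
    return max(pairs)[1]

print(maybe_connect(((0, 0), (10, 0.2)), ((14, 0.3), (25, 0.5))))
```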
Fairly Allocating Contiguous Blocks of Indivisible Items | In this paper, we study the classic problem of fairly allocating indivisible
items with the extra feature that the items lie on a line. Our goal is to find
a fair allocation that is contiguous, meaning that the bundle of each agent
forms a contiguous block on the line. While allocations satisfying the
classical fairness notions of proportionality, envy-freeness, and equitability
are not guaranteed to exist even without the contiguity requirement, we show
the existence of contiguous allocations satisfying approximate versions of
these notions that do not degrade as the number of agents or items increases.
We also study the efficiency loss of contiguous allocations due to fairness
constraints.
| 1 | 0 | 0 | 0 | 0 | 0 |
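For intuition about the contiguity requirement, the dynamic program below computes the egalitarian-optimal contiguous split when all agents share one additive valuation; this illustrates contiguous allocation only, not the paper's approximation guarantees for heterogeneous agents:

```python
from functools import lru_cache

def max_min_contiguous(values, n_agents):
    """Split a line of items into n_agents contiguous blocks maximizing
    the minimum block value, assuming identical additive valuations."""
    m = len(values)
    prefix = [0]
    for v in values:
        prefix.append(prefix[-1] + v)   # prefix sums for O(1) block values

    @lru_cache(maxsize=None)
    def best(start, agents):
        if agents == 1:                 # last agent takes everything left
            return prefix[m] - prefix[start]
        # Give the next agent items [start, cut); recurse on the rest,
        # leaving at least one item per remaining agent.
        return max(min(prefix[cut] - prefix[start], best(cut, agents - 1))
                   for cut in range(start + 1, m - agents + 2))

    return best(0, n_agents)

print(max_min_contiguous([3, 1, 4, 1, 5, 9, 2, 6], 3))  # -> 8
```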