Columns: text (string, length 57 to 2.88k characters), labels (sequence of length 6)
Title: Birecurrent sets, Abstract: A set is called recurrent if its minimal automaton is strongly connected, and birecurrent if both the set and its reversal are recurrent. We prove a series of results concerning birecurrent sets. It is already known that any birecurrent set is completely reducible (that is, such that the minimal representation of its characteristic series is completely reducible). The main result of this paper characterizes completely reducible sets as linear combinations of birecurrent sets.
[ 1, 0, 1, 0, 0, 0 ]
Title: Lazarsfeld-Mukai Reflexive Sheaves and their Stability, Abstract: Consider an ample and globally generated line bundle $L$ on a smooth projective variety $X$ of dimension $N\geq 2$ over $\mathbb{C}$. Let $D$ be a smooth divisor in the complete linear system of $L$. We construct reflexive sheaves on $X$ by an elementary transformation of a trivial bundle on $X$ along certain globally generated torsion-free sheaves on $D$. The dual reflexive sheaves are called the Lazarsfeld-Mukai reflexive sheaves. We prove the $\mu_L$-(semi)stability of such reflexive sheaves under certain conditions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Efficient Estimation of Linear Functionals of Principal Components, Abstract: We study principal component analysis (PCA) for mean zero i.i.d. Gaussian observations $X_1,\dots, X_n$ in a separable Hilbert space $\mathbb{H}$ with unknown covariance operator $\Sigma.$ The complexity of the problem is characterized by its effective rank ${\bf r}(\Sigma):= \frac{{\rm tr}(\Sigma)}{\|\Sigma\|},$ where ${\rm tr}(\Sigma)$ denotes the trace of $\Sigma$ and $\|\Sigma\|$ denotes its operator norm. We develop a method of bias reduction in the problem of estimation of linear functionals of eigenvectors of $\Sigma.$ Under the assumption that ${\bf r}(\Sigma)=o(n),$ we establish the asymptotic normality and asymptotic properties of the risk of the resulting estimators and prove matching minimax lower bounds, showing their semi-parametric optimality.
[ 0, 0, 1, 1, 0, 0 ]
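As a concrete illustration of the effective rank ${\bf r}(\Sigma)={\rm tr}(\Sigma)/\|\Sigma\|$ defined in the abstract above, here is a minimal NumPy sketch that evaluates it on a toy covariance matrix; the spectrum, dimension and function name are illustrative choices, not part of the paper.

```python
import numpy as np

def effective_rank(sigma: np.ndarray) -> float:
    """Effective rank r(Sigma) = tr(Sigma) / ||Sigma||, with ||.|| the operator norm."""
    return np.trace(sigma) / np.linalg.norm(sigma, ord=2)

# Toy covariance with geometrically decaying spectrum 1, 1/2, 1/4, ...
rng = np.random.default_rng(0)
eigvals = 0.5 ** np.arange(20)
q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
sigma = (q * eigvals) @ q.T                     # Sigma = Q diag(eigvals) Q^T

print(effective_rank(sigma))                    # about sum(eigvals)/max(eigvals), i.e. close to 2
```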
Title: An Extension of Heron's Formula, Abstract: This paper introduces an extension of Heron's formula to approximate the area of cyclic n-gons, where the error never exceeds $\frac{\pi}{e}-1$.
[ 0, 0, 1, 0, 0, 0 ]
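For reference, the classical triangle case that the abstract above extends is Heron's formula (the n-gon generalization itself is not reproduced here): for a triangle with side lengths $a$, $b$, $c$,
\[
s = \frac{a+b+c}{2}, \qquad \text{Area} = \sqrt{s\,(s-a)(s-b)(s-c)}.
\]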
Title: Learning Less-Overlapping Representations, Abstract: In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
[ 1, 0, 0, 1, 0, 0 ]
Title: LDPC Code Design for Distributed Storage: Balancing Repair Bandwidth, Reliability and Storage Overhead, Abstract: Distributed storage systems suffer from significant repair traffic generated due to frequent storage node failures. This paper shows that properly designed low-density parity-check (LDPC) codes can substantially reduce the amount of required block downloads for repair thanks to the sparse nature of their factor graph representation. In particular, with a careful construction of the factor graph, both low repair-bandwidth and high reliability can be achieved for a given code rate. First, a formula for the average repair bandwidth of LDPC codes is developed. This formula is then used to establish that the minimum repair bandwidth can be achieved by forcing a regular check node degree in the factor graph. Moreover, it is shown that given a fixed code rate, the variable node degree should also be regular to yield minimum repair bandwidth, under some reasonable minimum variable node degree constraint. It is also shown that for a given repair-bandwidth requirement, LDPC codes can yield substantially higher reliability than currently utilized Reed-Solomon (RS) codes. Our reliability analysis is based on a formulation of the general equation for the mean-time-to-data-loss (MTTDL) associated with LDPC codes. The formulation reveals that the stopping number is closely related to the MTTDL. It is further shown that LDPC codes can be designed such that a small loss of repair-bandwidth optimality may be traded for a large improvement in erasure-correction capability and thus the MTTDL.
[ 1, 0, 0, 0, 0, 0 ]
Title: Structural, magnetic, and electronic properties of GdTiO3 Mott insulator thin films grown by pulsed laser deposition, Abstract: We report on the optimization process to synthesize epitaxial thin films of GdTiO3 on SrLaGaO4 substrates by pulsed laser deposition. Optimized films are free of impurity phases and are fully strained. They possess a magnetic Curie temperature $T_C$ = 31.8 K with a saturation magnetization of 4.2 $\mu_B$ per formula unit at 10 K. Transport measurements reveal an insulating response, as expected. Optical spectroscopy indicates a band gap of 0.7 eV, comparable to the bulk value. Our work adds ferrimagnetic orthotitanates to the palette of perovskite materials for the design of emergent strongly correlated states at oxide interfaces using a versatile growth technique such as pulsed laser deposition.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fluid-Structure Interaction with the Entropic Lattice Boltzmann Method, Abstract: We propose a novel fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging set-ups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with the FEM solver. The robustness and viability of the entropic multi-relaxation time model for complex FSI applications are shown by simulations of droplet impact on elastic superhydrophobic surfaces.
[ 0, 1, 0, 0, 0, 0 ]
Title: Feature-based visual odometry prior for real-time semi-dense stereo SLAM, Abstract: Robust and fast motion estimation and mapping is a key prerequisite for autonomous operation of mobile robots. The goal of performing this task solely on a stereo pair of video cameras is highly demanding and bears conflicting objectives: on one hand, the motion has to be tracked fast and reliably, on the other hand, high-level functions like navigation and obstacle avoidance depend crucially on a complete and accurate environment representation. In this work, we propose a two-layer approach for visual odometry and SLAM with stereo cameras that runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust keypoint-based method. Experiments on public benchmark and proprietary datasets show that our approach is faster than state-of-the-art methods without losing accuracy and yields comparable map building capabilities. Moreover, our approach is shown to handle large inter-frame motion and illumination changes much more robustly than its direct counterparts.
[ 1, 0, 0, 0, 0, 0 ]
Title: Compressive Sensing Approaches for Autonomous Object Detection in Video Sequences, Abstract: Video analytics requires operating with large amounts of data. Compressive sensing makes it possible to reduce the number of measurements required to represent the video using the prior knowledge of sparsity of the original signal, but it imposes certain conditions on the design matrix. The Bayesian compressive sensing approach relaxes the limitations of the conventional approach using probabilistic reasoning and makes it possible to include different prior knowledge about the signal structure. This paper presents two Bayesian compressive sensing methods for autonomous object detection in a video sequence from a static camera. Their performance is compared on real datasets with the non-Bayesian greedy algorithm. It is shown that the Bayesian methods can provide the same accuracy as the greedy algorithm but much faster; or, if the computational time is not critical, they can provide more accurate results.
[ 1, 0, 0, 1, 0, 0 ]
Title: A Generalization of Quasi-twisted Codes: Multi-twisted codes, Abstract: Cyclic codes and their various generalizations, such as quasi-twisted (QT) codes, have a special place in algebraic coding theory. Among other things, many of the best-known or optimal codes have been obtained from these classes. In this work we introduce a new generalization of QT codes that we call multi-twisted (MT) codes and study some of their basic properties. Presenting several methods of constructing codes in this class and obtaining bounds on the minimum distances, we show that there exist codes with good parameters in this class that cannot be obtained as QT or constacyclic codes. This suggests that considering this larger class in computer searches is promising for constructing codes with better parameters than currently best-known linear codes. Working with this new class of codes motivated us to consider a problem about binomials over finite fields and to discover a result that is interesting in its own right.
[ 1, 0, 1, 0, 0, 0 ]
Title: On the Successive Cancellation Decoding of Polar Codes with Arbitrary Linear Binary Kernels, Abstract: A method for efficient successive cancellation (SC) decoding of polar codes with high-dimensional linear binary kernels (HDLBK) is presented and analyzed. We devise an $l$-expressions method which obtains simplified recursive formulas of the SC decoder in likelihood ratio form for arbitrary linear binary kernels, reducing the complexity of the corresponding SC decoder. By considering the bit-channel transition probabilities $W_{G}^{(\cdot)}(\cdot|0)$ and $W_{G}^{(\cdot)}(\cdot|1)$ separately, a $W$-expressions method is proposed to further reduce the complexity of the HDLBK-based SC decoder. For an $m\times m$ binary kernel, the complexity of the straightforward SC decoder is $O(2^{m}N\log N)$. With $W$-expressions, we reduce the complexity of the straightforward SC decoder to $O(m^{2}N\log N)$ when $m\leq 16$. Simulation results show that $16\times16$ kernel polar codes offer significant advantages in terms of error performance compared with $2\times2$ kernel polar codes under SC and list SC decoders.
[ 1, 0, 1, 0, 0, 0 ]
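The complexity reduction claimed in the abstract above amounts, for the largest kernel considered ($m=16$), to a constant-factor saving in the kernel-dependent term:
\[
\frac{2^{m}\,N\log N}{m^{2}\,N\log N}\bigg|_{m=16} \;=\; \frac{2^{16}}{16^{2}} \;=\; \frac{65536}{256} \;=\; 256 .
\]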
Title: OpenML: An R Package to Connect to the Machine Learning Platform OpenML, Abstract: OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its usage in combination with the machine learning R package mlr. We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks. Furthermore, we also show how to upload results of experiments, share them with others and download results from other users. Beyond ensuring reproducibility of results, the OpenML platform automates much of the drudge work, speeds up research, facilitates collaboration and increases the users' visibility online.
[ 1, 0, 0, 1, 0, 0 ]
Title: Direct mapping of the temperature and velocity gradients in discs. Imaging the vertical CO snow line around IM Lupi, Abstract: Accurate measurements of the physical structure of protoplanetary discs are critical inputs for planet formation models. These constraints are traditionally established via complex modelling of continuum and line observations. Instead, we present an empirical framework to locate the CO isotopologue emitting surfaces from high spectral and spatial resolution ALMA observations. We apply this framework to the disc surrounding IM Lupi, where we report the first direct, i.e. model independent, measurements of the radial and vertical gradients of temperature and velocity in a protoplanetary disc. The measured disc structure is consistent with an irradiated self-similar disc structure, where the temperature increases and the velocity decreases towards the disc surface. We also directly map the vertical CO snow line, which is located at about one gas scale height at radii between 150 and 300 au, with a CO freeze-out temperature of $21\pm2$ K. In the outer disc ($> 300$ au), where the gas surface density transitions from a power law to an exponential taper, the velocity rotation field becomes significantly sub-Keplerian, in agreement with the expected steeper pressure gradient. The sub-Keplerian velocities should result in a very efficient inward migration of large dust grains, explaining the lack of millimetre continuum emission outside of 300 au. The sub-Keplerian motions may also be the signature of the base of an externally irradiated photo-evaporative wind. In the same outer region, the measured CO temperature above the snow line decreases to $\approx$ 15 K because of the reduced gas density, which can result in a lower CO freeze-out temperature, photo-desorption, or deviations from local thermodynamic equilibrium.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Machine Learning Alternative to P-values, Abstract: This paper presents an alternative approach to p-values in regression settings. This approach, whose origins can be traced to machine learning, is based on the leave-one-out bootstrap for prediction error. In machine learning this is called the out-of-bag (OOB) error. To obtain the OOB error for a model, one draws a bootstrap sample and fits the model to the in-sample data. The out-of-sample prediction error for the model is obtained by calculating the prediction error for the model using the out-of-sample data. Repeating and averaging yields the OOB error, which represents a robust cross-validated estimate of the accuracy of the underlying model. By a simple modification to the bootstrap data involving "noising up" a variable, the OOB method yields a variable importance (VIMP) index, which directly measures how much a specific variable contributes to the prediction precision of a model. VIMP provides a scientifically interpretable measure of the effect size of a variable, which we call the "predictive effect size", that holds whether the researcher's model is correct or not, unlike the p-value whose calculation is based on the assumed correctness of the model. We also discuss a marginal VIMP index, also easily calculated, which measures the marginal effect of a variable, or what we call "the discovery effect". The OOB procedure can be applied to both parametric and nonparametric regression models and requires only that the researcher can repeatedly fit their model to bootstrap and modified bootstrap data. We illustrate this approach on a survival data set involving patients with systolic heart failure and on a simulated survival data set where the model is incorrectly specified to illustrate its robustness to model misspecification.
[ 1, 0, 0, 1, 0, 0 ]
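The OOB error and the "noising up" VIMP described in the abstract above can be sketched in a few lines. The sketch below uses permutation of a column as the noising step and a plain linear model; these are convenient illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def oob_error(model, X, y, n_boot=100, seed=0):
    """Leave-one-out bootstrap (OOB) estimate of prediction error (mean squared error)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # bootstrap sample, drawn with replacement
        oob = np.setdiff1d(np.arange(n), idx)    # observations left out of this bootstrap draw
        if oob.size == 0:
            continue
        model.fit(X[idx], y[idx])
        errors.append(np.mean((y[oob] - model.predict(X[oob])) ** 2))
    return float(np.mean(errors))

def vimp(model, X, y, var, n_boot=100, seed=0):
    """VIMP for column `var`: OOB error after 'noising up' that variable minus the baseline
    OOB error. Permuting the column is used here as the noising step (an illustrative choice)."""
    X_noised = X.copy()
    X_noised[:, var] = np.random.default_rng(seed).permutation(X_noised[:, var])
    return oob_error(model, X_noised, y, n_boot, seed) - oob_error(model, X, y, n_boot, seed)

# Toy data: the response depends on the first column only.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
print(round(oob_error(LinearRegression(), X, y), 4))
print([round(vimp(LinearRegression(), X, y, j), 3) for j in range(3)])   # large only for column 0
```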
Title: Image Registration Techniques: A Survey, Abstract: Image Registration is the process of aligning two or more images of the same scene with reference to a particular image. The images are captured from various sensors at different times and at multiple view-points. Thus, to get a better picture of any change of a scene or object over a considerable period of time, image registration is important. Image registration finds application in medical sciences, remote sensing and in computer vision. This paper presents a detailed review of several approaches which are classified accordingly along with their contributions and drawbacks. The main steps of an image registration procedure are also discussed. Different performance measures are presented that determine the registration quality and accuracy. The scope for future research is presented as well.
[ 1, 0, 0, 0, 0, 0 ]
Title: Hybrid quantum-classical modeling of quantum dot devices, Abstract: The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semi-classical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: It enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way that the resulting hybrid system obeys the fundamental axioms of (non-)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.
[ 0, 1, 0, 0, 0, 0 ]
Title: Efficient exploration with Double Uncertain Value Networks, Abstract: This paper studies directed exploration for reinforcement learning agents by tracking uncertainty about the value of each available action. We identify two sources of uncertainty that are relevant for exploration. The first originates from limited data (parametric uncertainty), while the second originates from the distribution of the returns (return uncertainty). We identify methods to learn these distributions with deep neural networks, where we estimate parametric uncertainty with Bayesian drop-out, while return uncertainty is propagated through the Bellman equation as a Gaussian distribution. Then, we identify that both can be jointly estimated in one network, which we call the Double Uncertain Value Network. The policy is directly derived from the learned distributions based on Thompson sampling. Experimental results show that both types of uncertainty may vastly improve learning in domains with a strong exploration challenge.
[ 1, 0, 0, 1, 0, 0 ]
Title: Auxiliary Variables for Multi-Dirichlet Priors, Abstract: Bayesian models that mix multiple Dirichlet prior parameters, called Multi-Dirichlet priors (MD) in this paper, are gaining popularity. Inferring mixing weights and parameters of mixed prior distributions seems tricky, as sums over Dirichlet parameters complicate the joint distribution of model parameters. This paper shows a novel auxiliary variable scheme which helps to simplify the inference for models involving hierarchical MDs and MDPs. Using this scheme, it is easy to derive fully collapsed inference schemes which allow for an efficient inference.
[ 0, 0, 0, 1, 0, 0 ]
Title: Numerical non-LTE 3D radiative transfer using a multigrid method, Abstract: 3D non-LTE radiative transfer problems are computationally demanding, and this sets limits on the size of the problems that can be solved. So far, Multilevel Accelerated Lambda Iteration (MALI) has been the method of choice to perform high-resolution computations in multidimensional problems. The disadvantage of MALI is that its computing time scales as $\mathcal{O}(n^2)$, with $n$ the number of grid points. When the grid gets finer, the computational cost increases quadratically. We aim to develop a 3D non-LTE radiative transfer code that is more efficient than MALI. We implement a non-linear multigrid, fast approximation storage scheme, into the existing Multi3D radiative transfer code. We verify our multigrid implementation by comparing with MALI computations. We show that multigrid can be employed in realistic problems with snapshots from 3D radiative-MHD simulations as input atmospheres. With multigrid, we obtain a factor 3.3-4.5 speedup compared to MALI. With full-multigrid the speed-up increases to a factor 6. The speedup is expected to increase for input atmospheres with more grid points and finer grid spacing. Solving 3D non-LTE radiative transfer problems using non-linear multigrid methods can be applied to realistic atmospheres with a substantial speed-up.
[ 0, 1, 0, 0, 0, 0 ]
Title: Test Case Prioritization Techniques for Model-Based Testing: A Replicated Study, Abstract: Recently, several Test Case Prioritization (TCP) techniques have been proposed to order test cases for achieving a goal during test execution, particularly, revealing faults sooner. In the Model-Based Testing (MBT) context, such techniques are usually based on heuristics related to structural elements of the model and derived test cases. In this sense, techniques' performance may vary due to a number of factors. While empirical studies comparing the performance of TCP techniques have already been presented in literature, there is still little knowledge, particularly in the MBT context, about which factors may influence the outcomes suggested by a TCP technique. In a previous family of empirical studies focusing on labeled transition systems, we identified that the model layout, i.e. amount of branches, joins, and loops in the model, alone may have little influence on the performance of TCP techniques investigated, whereas characteristics of test cases that actually fail definitely influences their performance. However, we considered only synthetic artifacts in the study, which reduced the ability of representing properly the reality. In this paper, we present a replication of one of these studies, now with a larger and more representative selection of techniques and considering test suites from industrial applications as experimental objects. Our objective is to find out whether the results remain while increasing the validity in comparison to the original study. Results reinforce that there is no best performer among the investigated techniques and characteristics of test cases that fail represent an important factor, although adaptive random based techniques are less affected by it.
[ 1, 0, 0, 0, 0, 0 ]
Title: Deep Domain Adaptation Based Video Smoke Detection using Synthetic Smoke Images, Abstract: In this paper, a deep domain adaptation based method for video smoke detection is proposed to extract a powerful feature representation of smoke. Because the smoke image samples available for deep CNN training are limited in scale and diversity, we systematically produced adequate synthetic smoke images with a wide variation in the smoke shape, background and lighting conditions. Considering that the appearance gap (dataset bias) between synthetic and real smoke images significantly degrades the performance of the trained model on the test set composed fully of real images, we build deep architectures based on domain adaptation to confuse the distributions of features extracted from synthetic and real smoke images. This approach expands the domain-invariant feature space for smoke image samples. With their approximate feature distribution of non-smoke images, the recognition rate of the trained model is improved significantly compared to the model trained directly on the mixed dataset of synthetic and real images. Experimentally, several deep architectures with different design choices are applied to the smoke detector. The ultimate framework can get a satisfactory result on the test set. We believe that our approach is a start in the direction of utilizing deep neural networks enhanced with synthetic smoke images for video smoke detection.
[ 1, 0, 0, 0, 0, 0 ]
Title: NaCl crystal from salt solution with far below saturated concentration under ambient condition, Abstract: Under ambient conditions, we directly observed NaCl crystals experimentally in the rGO membranes soaked in salt solutions with concentrations below and far below the saturated concentration. Moreover, the NaCl crystals most probably show stoichiometric behavior. We attribute this unexpected crystallization to the cation-{\pi} interactions between the ions and the aromatic rings of the rGO.
[ 0, 1, 0, 0, 0, 0 ]
Title: The localization transition in SU(3) gauge theory, Abstract: We study the Anderson-like localization transition in the spectrum of the Dirac operator of quenched QCD. Above the deconfining transition we determine the temperature dependence of the mobility edge separating localized and delocalized eigenmodes in the spectrum. We show that the temperature where the mobility edge vanishes and localized modes disappear from the spectrum coincides with the critical temperature of the deconfining transition. We also identify close-to-zero modes related to the topological charge in the Dirac spectrum and show that they account for only a small fraction of localized modes, a fraction that falls rapidly as the temperature increases.
[ 0, 1, 0, 0, 0, 0 ]
Title: Conformation Clustering of Long MD Protein Dynamics with an Adversarial Autoencoder, Abstract: Recent developments in specialized computer hardware have greatly accelerated atomic level Molecular Dynamics (MD) simulations. A single GPU-attached cluster is capable of producing microsecond-length trajectories in reasonable amounts of time. Multiple protein states and a large number of microstates associated with folding and with the function of the protein can be observed as conformations sampled in the trajectories. Clustering those conformations, however, is needed for identifying protein states, evaluating transition rates and understanding protein behavior. In this paper, we propose a novel data-driven generative conformation clustering method based on the adversarial autoencoder (AAE) and provide the associated software implementation Cong. The method was tested using a 208-microsecond MD simulation of the fast-folding peptide Trp-Cage (20 residues) obtained from the D.E. Shaw Research Group. The proposed clustering algorithm identifies many of the salient features of the folding process by grouping a large number of conformations that share common features not easily identifiable in the trajectory.
[ 0, 0, 0, 0, 1, 0 ]
Title: Correction to the article: Floer homology and splicing knot complements, Abstract: This note corrects the mistakes in the splicing formulas of the paper "Floer homology and splicing knot complements". The mistakes are the result of the incorrect assumption that for a knot $K$ inside a homology sphere $Y$, the involution on the knot Floer homology of $K$ which corresponds to moving the basepoints by one full twist around $K$ is trivial. The correction implicitly involves considering the contribution from this (possibly non-trivial) involution in a number of places.
[ 0, 0, 1, 0, 0, 0 ]
Title: Tree tribes and lower bounds for switching lemmas, Abstract: We show tight upper and lower bounds for switching lemmas obtained by the action of random $p$-restrictions on boolean functions that can be expressed as decision trees in which every vertex is at a distance of at most $t$ from some leaf, also called $t$-clipped decision trees. More specifically, we show the following: $\bullet$ If a boolean function $f$ can be expressed as a $t$-clipped decision tree, then under the action of a random $p$-restriction $\rho$, the probability that the smallest depth decision tree for $f|_{\rho}$ has depth greater than $d$ is upper bounded by $(4p2^{t})^{d}$. $\bullet$ For every $t$, there exists a function $g_{t}$ that can be expressed as a $t$-clipped decision tree, such that under the action of a random $p$-restriction $\rho$, the probability that the smallest depth decision tree for $g_{t}|_{\rho}$ has depth greater than $d$ is lower bounded by $(c_{0}p2^{t})^{d}$, for $0\leq p\leq c_{p}2^{-t}$ and $0\leq d\leq c_{d}\frac{\log n}{2^{t}\log t}$, where $c_{0},c_{p},c_{d}$ are universal constants.
[ 1, 0, 0, 0, 0, 0 ]
Title: NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks, Abstract: "How much energy is consumed for an inference made by a convolutional neural network (CNN)?" With the increased popularity of CNNs deployed on the wide-spectrum of platforms (from mobile devices to workstations), the answer to this question has drawn significant attention. From lengthening battery life of mobile devices to reducing the energy bill of a datacenter, it is important to understand the energy efficiency of CNNs during serving for making an inference, before actually training the model. In this work, we propose NeuralPower: a layer-wise predictive framework based on sparse polynomial regression, for predicting the serving energy consumption of a CNN deployed on any GPU platform. Given the architecture of a CNN, NeuralPower provides an accurate prediction and breakdown for power and runtime across all layers in the whole network, helping machine learners quickly identify the power, runtime, or energy bottlenecks. We also propose the "energy-precision ratio" (EPR) metric to guide machine learners in selecting an energy-efficient CNN architecture that better trades off the energy consumption and prediction accuracy. The experimental results show that the prediction accuracy of the proposed NeuralPower outperforms the best published model to date, yielding an improvement in accuracy of up to 68.5%. We also assess the accuracy of predictions at the network level, by predicting the runtime, power, and energy of state-of-the-art CNN architectures, achieving an average accuracy of 88.24% in runtime, 88.34% in power, and 97.21% in energy. We comprehensively corroborate the effectiveness of NeuralPower as a powerful framework for machine learners by testing it on different GPU platforms and Deep Learning software tools.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical Chords through Lyrics, Abstract: We investigate the association between musical chords and lyrics by analyzing a large dataset of user-contributed guitar tablatures. Motivated by the idea that the emotional content of chords is reflected in the words used in corresponding lyrics, we analyze associations between lyrics and chord categories. We also examine the usage patterns of chords and lyrics in different musical genres, historical eras, and geographical regions. Our overall results confirm a previously known association between Major chords and positive valence. We also report a wide variation in this association across regions, genres, and eras. Our results suggest the possible existence of different emotional associations for other types of chords.
[ 1, 0, 0, 0, 0, 0 ]
Title: Subextensions for co-induced modules, Abstract: Using cohomological methods, we prove a criterion for the embedding of a group extension with abelian kernel into the split extension of a co-induced module. This generalises some earlier similar results. We also prove an assertion about the conjugacy of complements in split extensions of co-induced modules. Both results follow from a relation between homomorphisms of certain cohomology groups.
[ 0, 0, 1, 0, 0, 0 ]
Title: Existence theorems for the Cauchy problem of 2D nonhomogeneous incompressible non-resistive MHD equations with vacuum, Abstract: In this paper, we investigate the Cauchy problem of the nonhomogeneous incompressible non-resistive MHD equations on $\mathbb{R}^2$ with vacuum as far field density and prove that the 2D Cauchy problem has a unique local strong solution provided that the initial density and magnetic field decay not too slowly at infinity. Furthermore, if the initial data satisfy some additional regularity and compatibility conditions, the strong solution becomes a classical one.
[ 0, 0, 1, 0, 0, 0 ]
Title: Density-equalizing maps for simply-connected open surfaces, Abstract: In this paper, we are concerned with the problem of creating flattening maps of simply-connected open surfaces in $\mathbb{R}^3$. Using a natural principle of density diffusion in physics, we propose an effective algorithm for computing density-equalizing flattening maps with any prescribed density distribution. By varying the initial density distribution, a large variety of mappings with different properties can be achieved. For instance, area-preserving parameterizations of simply-connected open surfaces can be easily computed. Experimental results are presented to demonstrate the effectiveness of our proposed method. Applications to data visualization and surface remeshing are explored.
[ 1, 0, 1, 0, 0, 0 ]
Title: Measuring the radius and mass of Planet Nine, Abstract: Batygin and Brown (2016) have suggested the existence of a new Solar System planet supposed to be responsible for the perturbation of eccentric orbits of small outer bodies. The main challenge is now to detect and characterize this putative body. Here we investigate the principles of the determination of its physical parameters, mainly its mass and radius. For that purpose we concentrate on two methods, stellar occultations and gravitational microlensing effects (amplification, deflection and time delay). We estimate the main characteristics of a possible occultation or gravitational effects: flux variation of a background star, duration and probability of occurrence. We also investigate additional benefits of direct imaging and of an occultation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Small presentations of model categories and Vopěnka's principle, Abstract: We prove existence results for small presentations of model categories generalizing a theorem of D. Dugger from combinatorial model categories to more general model categories. Some of these results are shown under the assumption of Vopěnka's principle. Our main theorem applies in particular to cofibrantly generated model categories where the domains of the generating cofibrations satisfy a slightly stronger smallness condition. As a consequence, assuming Vopěnka's principle, such a cofibrantly generated model category is Quillen equivalent to a combinatorial model category. Moreover, if there are generating sets which consist of presentable objects, then the same conclusion holds without the assumption of Vopěnka's principle. We also correct a mistake from previous work that made similar claims.
[ 0, 0, 1, 0, 0, 0 ]
Title: Towards the LISA Backlink: Experiment design for comparing optical phase reference distribution systems, Abstract: LISA is a proposed space-based laser interferometer detecting gravitational waves by measuring distances between free-floating test masses housed in three satellites in a triangular constellation with laser links in-between. Each satellite contains two optical benches that are articulated by moving optical subassemblies for compensating the breathing angle in the constellation. The phase reference distribution system, also known as backlink, forms an optical bi-directional path between the intra-satellite benches. In this work we discuss phase reference implementations with a target non-reciprocity of at most $2\pi\,\mathrm{\mu rad/\sqrt{Hz}}$, equivalent to $1\,\mathrm{pm/\sqrt{Hz}}$ for a wavelength of $1064\,\mathrm{nm}$ in the frequency band from $0.1\,\mathrm{mHz}$ to $1\,\mathrm{Hz}$. One phase reference uses a steered free beam connection, the other one a fiber together with additional laser frequencies. The noise characteristics of these implementations will be compared in a single interferometric set-up with a previously successfully tested direct fiber connection. We show the design of this interferometer created by optical simulations including ghost beam analysis, component alignment and noise estimation. First experimental results of a free beam laser link between two optical set-ups that are co-rotating by $\pm 1^\circ$ are presented. This experiment demonstrates sufficient thermal stability during rotation of less than $10^{-4}\,\mathrm{K/\sqrt{Hz}}$ at $1\,\mathrm{mHz}$ and operation of the free beam steering mirror control over more than 1 week.
[ 0, 1, 0, 0, 0, 0 ]
Title: Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs, Abstract: Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data.
[ 1, 0, 0, 1, 0, 0 ]
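A minimal PyTorch sketch of the recurrent generator/discriminator pairing described in the abstract above; the LSTM cells, layer sizes, and per-time-step discriminator output are assumptions made for illustration and are not claimed to match the authors' exact RGAN/RCGAN architecture or training loop.

```python
import torch
import torch.nn as nn

class RecurrentGenerator(nn.Module):
    """Maps a noise sequence (and an optional condition sequence) to a real-valued time series."""
    def __init__(self, noise_dim=5, cond_dim=0, hidden_dim=64, out_dim=1):
        super().__init__()
        self.rnn = nn.LSTM(noise_dim + cond_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, z, cond=None):
        if cond is not None:                       # RCGAN-style conditioning: concatenate per step
            z = torch.cat([z, cond], dim=-1)
        h, _ = self.rnn(z)
        return self.out(h)                         # (batch, seq_len, out_dim)

class RecurrentDiscriminator(nn.Module):
    """Scores each time step of a (possibly conditioned) sequence as real or generated."""
    def __init__(self, in_dim=1, cond_dim=0, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(in_dim + cond_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x, cond=None):
        if cond is not None:
            x = torch.cat([x, cond], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                         # per-step logits, (batch, seq_len, 1)

# Shape check on random inputs.
G, D = RecurrentGenerator(), RecurrentDiscriminator()
z = torch.randn(8, 30, 5)                          # batch of 8 noise sequences of length 30
fake = G(z)
print(fake.shape, D(fake).shape)                   # torch.Size([8, 30, 1]) torch.Size([8, 30, 1])
```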
Title: Multichannel Robot Speech Recognition Database: MChRSR, Abstract: In real human robot interaction (HRI) scenarios, speech recognition represents a major challenge due to robot noise, background noise and time-varying acoustic channel. This document describes the procedure used to obtain the Multichannel Robot Speech Recognition Database (MChRSR). It is composed of 12 hours of multichannel evaluation data recorded in a real mobile HRI scenario. This database was recorded with a PR2 robot performing different translational and azimuthal movements. Accordingly, 16 evaluation sets were obtained by re-recording the clean set of the Aurora 4 database in different movement conditions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Two simple observations on representations of metaplectic groups, Abstract: M. Hanzer and I. Matic have proved that the genuine unitary principal series representations of the metaplectic groups are irreducible. A simple consequence of that paper is a criterion for the irreducibility of the non-unitary principal series representations of the metaplectic groups that we give in this paper.
[ 0, 0, 1, 0, 0, 0 ]
Title: Near-sphere lattices with constant nonlocal mean curvature, Abstract: We are concerned with unbounded sets of $\mathbb{R}^N$ whose boundary has constant nonlocal (or fractional) mean curvature, which we call CNMC sets. This is the equation associated to critical points of the fractional perimeter functional under a volume constraint. We construct CNMC sets which are the countable union of a certain bounded domain and all its translations through a periodic integer lattice of dimension $M\leq N$. Our CNMC sets form a $C^2$ branch emanating from the unit ball alone and where the parameter in the branch is essentially the distance to the closest lattice point. Thus, the new translated near-balls (or near-spheres) appear from infinity. We find their exact asymptotic shape as the parameter tends to infinity.
[ 0, 0, 1, 0, 0, 0 ]
Title: Stability, convergence, and limit cycles in some human physiological processes, Abstract: Mathematical models for physiological processes aid qualitative understanding of the impact of various parameters on the underlying process. We analyse two such models for human physiological processes: the Mackey-Glass and the Lasota equations, which model the change in the concentration of blood cells in the human body. We first study the local stability of these models, and derive bounds on various model parameters and the feedback delay for the concentration to equilibrate. We then deduce conditions for non-oscillatory convergence of the solutions, which could ensure that the blood cell concentration does not oscillate. Further, we define the convergence characteristics of the solutions which govern the rate at which the concentration equilibrates when the system is stable. Owing to the possibility that physiological parameters can seldom be estimated precisely, we also derive bounds for robust stability\textemdash which enable one to ensure that the blood cell concentration equilibrates despite parametric uncertainty. We also highlight that when the necessary and sufficient condition for local stability is violated, the system transits into instability via a Hopf bifurcation, leading to limit cycles in the blood cell concentration. We then outline a framework to characterise the type of the Hopf bifurcation and determine the asymptotic orbital stability of limit cycles. The analysis is complemented with numerical examples, stability charts and bifurcation diagrams. The insights into the dynamical properties of the mathematical models may serve to guide the study of dynamical diseases.
[ 1, 0, 0, 0, 0, 0 ]
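For concreteness, the Mackey-Glass model mentioned in the abstract above is commonly written as the delay differential equation below (with the usual parameter names, which need not match the paper's notation); the Lasota equation uses a different production nonlinearity.
\[
\frac{dP(t)}{dt} \;=\; \beta\,\frac{\theta^{n}\,P(t-\tau)}{\theta^{n}+P^{n}(t-\tau)} \;-\; \gamma\,P(t),
\]
where $P(t)$ is the blood cell concentration, $\beta$ the maximal production rate, $\gamma$ the decay rate, $\tau$ the feedback delay, and $\theta$, $n$ the shape parameters of the production term.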
Title: Poisson multi-Bernoulli mixture filter: direct derivation and implementation, Abstract: We provide a derivation of the Poisson multi-Bernoulli mixture (PMBM) filter for multi-target tracking with the standard point target measurements without using probability generating functionals or functional derivatives. We also establish the connection with the $\delta$-generalised labelled multi-Bernoulli ($\delta$-GLMB) filter, showing that a $\delta$-GLMB density represents a multi-Bernoulli mixture with labelled targets, so it can be seen as a special case of the PMBM. In addition, we propose an implementation for linear/Gaussian dynamic and measurement models and show how to efficiently obtain typical estimators in the literature from the PMBM. The PMBM filter is shown to outperform other filters in the literature in a challenging scenario.
[ 1, 0, 0, 1, 0, 0 ]
Title: Universality of density waves in p-doped La2CuO4 and n-doped Nd2CuO4+y, Abstract: The contribution of $O^{2-}$ ions to antiferromagnetism in $La_{2-x}Ae_xCuO_4$ ($Ae = Sr, Ba)$ is highly sensitive to doped holes. In contrast, the contribution of $Cu^{2+}$ ions to antiferromagnetism in $Nd_{2-x}Ce_xCuO_{4+y}$ is much less sensitive to doped electrons. The difference causes the precarious and, respectively, robust antiferromagnetic phase of these cuprates. The same sensitivities affect the doping dependence of the incommensurability of density waves, $\delta (x)$. In the hole-doped compounds this gives rise to a doping offset for magnetic and charge density waves, $\delta_{m,c}^p(x) \propto \sqrt{x-x_{0p}^N}$. Here $x_{0p}^N$ is the doping concentration where the Néel temperature vanishes, $T_N(x_{0p}^N) = 0$. No such doping offset occurs for density waves in the electron-doped compound. Instead, excess oxygen (necessary for stability in crystal growth) of concentration $y$ causes a different doping offset in the latter case, $\delta_{m,c}^n(x) \propto \sqrt{x- 2y}$. The square-root formulas result from the assumption of superlattice formation through partitioning of the $CuO_2$ plane by pairs of itinerant charge carriers. Agreement of observed incommensurability $\delta(x)$ with the formulas is very good for the hole-doped compounds and reasonable for the electron-doped compound. The deviation in the latter case may be caused by residual excess oxygen.
[ 0, 1, 0, 0, 0, 0 ]
Title: The perceived assortativity of social networks: Methodological problems and solutions, Abstract: Networks describe a range of social, biological and technical phenomena. An important property of a network is its degree correlation or assortativity, describing how nodes in the network associate based on their number of connections. Social networks are typically thought to be distinct from other networks in being assortative (possessing positive degree correlations); well-connected individuals associate with other well-connected individuals, and poorly-connected individuals associate with each other. We review the evidence for this in the literature and find that, while social networks are more assortative than non-social networks, only when they are built using group-based methods do they tend to be positively assortative. Non-social networks tend to be disassortative. We go on to show that connecting individuals due to shared membership of a group, a commonly used method, biases towards assortativity unless a large enough number of censuses of the network are taken. We present a number of solutions to overcoming this bias by drawing on advances in sociological and biological fields. Adoption of these methods across all fields can greatly enhance our understanding of social networks and networks in general.
[ 1, 0, 0, 1, 0, 0 ]
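The degree-correlation measure discussed in the abstract above is easy to compute with networkx; the short sketch below contrasts an Erdős–Rényi graph with a one-mode projection of a random person–group (bipartite) graph, purely to illustrate where the assortativity coefficient comes from (graph sizes and densities are arbitrary choices, not the paper's data).

```python
import networkx as nx
from networkx.algorithms import bipartite

# Degree assortativity: Pearson correlation of the degrees at the two ends of each edge.
G_random = nx.gnp_random_graph(200, 0.05, seed=0)
print(nx.degree_assortativity_coefficient(G_random))   # close to 0 for an Erdos-Renyi graph

# Group-based construction: connect individuals who share membership of a group,
# i.e. project a random bipartite person-group graph onto the persons.
B = bipartite.random_graph(100, 20, 0.1, seed=0)        # 100 people, 20 groups
people = [n for n, d in B.nodes(data=True) if d["bipartite"] == 0]
G_groups = bipartite.projected_graph(B, people)
print(nx.degree_assortativity_coefficient(G_groups))    # tends to come out positive (assortative)
```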
Title: The Taipan Galaxy Survey: Scientific Goals and Observing Strategy, Abstract: Taipan is a multi-object spectroscopic galaxy survey starting in 2017 that will cover 2pi steradians over the southern sky, and obtain optical spectra for about two million galaxies out to z<0.4. Taipan will use the newly-refurbished 1.2m UK Schmidt Telescope at Siding Spring Observatory with the new TAIPAN instrument, which includes an innovative 'Starbugs' positioning system capable of rapidly and simultaneously deploying up to 150 spectroscopic fibres (and up to 300 with a proposed upgrade) over the 6-deg diameter focal plane, and a purpose-built spectrograph operating from 370 to 870nm with resolving power R>2000. The main scientific goals of Taipan are: (i) to measure the distance scale of the Universe (primarily governed by the local expansion rate, H_0) to 1% precision, and the growth rate of structure to 5%; (ii) to make the most extensive map yet constructed of the mass distribution and motions in the local Universe, using peculiar velocities based on improved Fundamental Plane distances, which will enable sensitive tests of gravitational physics; and (iii) to deliver a legacy sample of low-redshift galaxies as a unique laboratory for studying galaxy evolution as a function of mass and environment. The final survey, which will be completed within 5 years, will consist of a complete magnitude-limited sample (i<17) of about 1.2x10^6 galaxies, supplemented by an extension to higher redshifts and fainter magnitudes (i<18.1) of a luminous red galaxy sample of about 0.8x10^6 galaxies. Observations and data processing will be carried out remotely and in a fully-automated way, using a purpose-built automated 'virtual observer' software and an automated data reduction pipeline. The Taipan survey is deliberately designed to maximise its legacy value, by complementing and enhancing current and planned surveys of the southern sky at wavelengths from the optical to the radio.
[ 0, 1, 0, 0, 0, 0 ]
Title: The Externalities of Exploration and How Data Diversity Helps Exploitation, Abstract: Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users for information that will lead to better decisions in the future. Recently, concerns have been raised about whether the process of exploration could be viewed as unfair, placing too much burden on certain individuals or groups. Motivated by these concerns, we initiate the study of the externalities of exploration - the undesirable side effects that the presence of one party may impose on another - under the linear contextual bandits model. We introduce the notion of a group externality, measuring the extent to which the presence of one population of users impacts the rewards of another. We show that this impact can in some cases be negative, and that, in a certain sense, no algorithm can avoid it. We then study externalities at the individual level, interpreting the act of exploration as an externality imposed on the current user of a system by future users. This drives us to ask under what conditions inherent diversity in the data makes explicit exploration unnecessary. We build on a recent line of work on the smoothed analysis of the greedy algorithm that always chooses the action that currently looks optimal, improving on prior results to show that a greedy approach almost matches the best possible Bayesian regret rate of any other algorithm on the same problem instance whenever the diversity conditions hold, and that this regret is at most $\tilde{O}(T^{1/3})$. Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm. Together, our results uncover a sharp contrast between the high externalities that exist in the worst case, and the ability to remove all externalities if the data is sufficiently diverse.
[ 0, 0, 0, 1, 0, 0 ]
Title: Invariant submanifolds of $(LCS)_n$-manifolds with respect to quarter symmetric metric connection, Abstract: The object of the present paper is to study invariant submanifolds of $(LCS)_n$-manifolds with respect to the quarter symmetric metric connection. It is shown that the mean curvatures of an invariant submanifold of an $(LCS)_n$-manifold with respect to the quarter symmetric metric connection and the Levi-Civita connection are equal. An example is constructed to illustrate the results of the paper. We also obtain some equivalent conditions for such a notion.
[ 0, 0, 1, 0, 0, 0 ]
Title: Iterated filtering methods for Markov process epidemic models, Abstract: Dynamic epidemic models have proven valuable for public health decision makers as they provide useful insights into the understanding and prevention of infectious diseases. However, inference for these types of models can be difficult because the disease spread is typically only partially observed, e.g., in the form of reported incidences in given time periods. This chapter discusses how to perform likelihood-based inference for partially observed Markov epidemic models when it is relatively easy to generate samples from the Markov transmission model while the likelihood function is intractable. The first part of the chapter reviews the theoretical background of inference for partially observed Markov processes (POMP) via iterated filtering. In the second part of the chapter the performance of the method and associated practical difficulties are illustrated on two examples. In the first example a simulated outbreak data set consisting of the number of newly reported cases aggregated by week is fitted to a POMP where the underlying disease transmission model is assumed to be a simple Markovian SIR model. The second example illustrates possible model extensions such as seasonal forcing and over-dispersion in both the transmission and the observation model, which can be used, e.g., when analysing routinely collected rotavirus surveillance data. Both examples are implemented using the R-package pomp (King et al., 2016) and the code is made available online.
[ 0, 0, 1, 1, 0, 0 ]
Title: An Algebraic-Combinatorial Proof Technique for the GM-MDS Conjecture, Abstract: This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given $m\times n$ binary matrix $M$, the GM-MDS conjecture, due to Dau et al., states that if $M$ satisfies the so-called MDS condition, then for any field $\mathbb{F}$ of size $q\geq n+m-1$, there exists an $[n,m]_q$ MDS code whose generator matrix $G$, with entries in $\mathbb{F}$, fits $M$ (i.e., $M$ is the support matrix of $G$). Despite all the attempts by the coding theory community, this conjecture remains still open in general. It was shown, independently by Yan et al. and Dau et al., that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if $M$ satisfies the MDS condition, then the determinant of a transformation matrix $T$, such that $TV$ fits $M$, is not identically zero, where $V$ is a Vandermonde matrix with distinct parameters. In this work, we generalize the TM-MDS conjecture, and present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ($m$) of $M$ is upper bounded by $5$. For this class of special cases of $M$ where the only additional constraint is on $m$, only cases with $m\leq 4$ were previously proven theoretically, and the previously used proof techniques are not applicable to cases with $m > 4$.
[ 1, 0, 0, 0, 0, 0 ]
Title: The sequence of open and closed prefixes of a Sturmian word, Abstract: A finite word is closed if it contains a factor that occurs both as a prefix and as a suffix but does not have internal occurrences, otherwise it is open. We are interested in the {\it oc-sequence} of a word, which is the binary sequence whose $n$-th element is $0$ if the prefix of length $n$ of the word is open, or $1$ if it is closed. We exhibit results showing that this sequence is deeply related to the combinatorial and periodic structure of a word. In the case of Sturmian words, we show that these are uniquely determined (up to renaming letters) by their oc-sequence. Moreover, we prove that the class of finite Sturmian words is a maximal element with this property in the class of binary factorial languages. We then discuss several aspects of Sturmian words that can be expressed through this sequence. Finally, we provide a linear-time algorithm that computes the oc-sequence of a finite word, and a linear-time algorithm that reconstructs a finite Sturmian word from its oc-sequence.
[ 1, 0, 1, 0, 0, 0 ]
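The oc-sequence defined in the abstract above can be computed directly from the definition; the sketch below is a straightforward cubic-time check (not the paper's linear-time algorithm), using the usual convention that the empty word and single letters are closed.

```python
def is_closed(w: str) -> bool:
    """Closed: some proper border (factor that is both a prefix and a suffix) of w has no
    internal occurrence. By convention, words of length <= 1 are closed."""
    n = len(w)
    if n <= 1:
        return True
    for k in range(1, n):                                   # candidate border lengths
        if w[:k] == w[n - k:] and w.find(w[:k], 1, n - 1) == -1:
            return True                                     # border occurs only as prefix and suffix
    return False

def oc_sequence(w: str) -> list:
    """Binary oc-sequence: the n-th element is 1 if the prefix of length n of w is closed, else 0."""
    return [1 if is_closed(w[:i]) else 0 for i in range(1, len(w) + 1)]

# Example on a prefix of the Fibonacci word, a standard Sturmian example.
print(oc_sequence("abaababaabaab"))
```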
Title: Capital Regulation under Price Impacts and Dynamic Financial Contagion, Abstract: We construct a continuous time model for price-mediated contagion precipitated by a common exogenous stress to the trading book of all firms in the financial system. In this setting, firms are constrained so as to satisfy a risk-weight based capital ratio requirement. We use this model to find analytical bounds on the risk-weights for an asset as a function of the market liquidity. Under these appropriate risk-weights, we find existence and uniqueness for the joint system of firm behavior and the asset price. We further consider an analytical bound on the firm liquidations, which allows us to construct exact formulas for stress testing the financial system with deterministic or random stresses. Numerical case studies are provided to demonstrate various implications of this model and analytical bounds.
[ 0, 0, 0, 0, 0, 1 ]
Title: On seaweed subalgebras and meander graphs in type D, Abstract: In 2000, Dergachev and Kirillov introduced subalgebras of "seaweed type" in $\mathfrak{gl}_n$ and computed their index using certain graphs, which we call type-${\sf A}$ meander graphs. Then the subalgebras of seaweed type, or just "seaweeds", have been defined by Panyushev (2001) for arbitrary reductive Lie algebras. Recently, a meander graph approach to computing the index in types ${\sf B}$ and ${\sf C}$ has been developed by the authors. In this article, we consider the most difficult and interesting case of type ${\sf D}$. Some new phenomena occurring here are related to the fact that the Dynkin diagram has a branching node.
[ 0, 0, 1, 0, 0, 0 ]
Title: Rank Two Non-Commutative Laurent Phenomenon and Pseudo-Positivity, Abstract: We study polynomial generalizations of the Kontsevich automorphisms acting on the skew-field of formal rational expressions in two non-commuting variables. Our main result is the Laurentness and pseudo-positivity of iterations of these automorphisms. The resulting expressions are described combinatorially using a generalization of the combinatorics of compatible pairs in a maximal Dyck path developed by Lee, Li, and Zelevinsky. By specializing to quasi-commuting variables we obtain pseudo-positive expressions for rank 2 quantum generalized cluster variables. In the case that all internal exchange coefficients are zero, this quantum specialization provides a combinatorial construction of counting polynomials for Grassmannians of submodules in exceptional representations of valued quivers with two vertices.
[ 0, 0, 1, 0, 0, 0 ]
Title: Faster Coordinate Descent via Adaptive Importance Sampling, Abstract: Coordinate descent methods employ random partial updates of decision variables in order to solve huge-scale convex optimization problems. In this work, we introduce new adaptive rules for the random selection of their updates. By adaptive, we mean that our selection rules are based on the dual residual or the primal-dual gap estimates and can change at each iteration. We theoretically characterize the performance of our selection rules and demonstrate improvements over the state-of-the-art, and extend our theory and algorithms to general convex objectives. Numerical evidence with hinge-loss support vector machines and Lasso confirms that the practice follows the theory.
[ 1, 0, 1, 1, 0, 0 ]
Title: Convergence of Stochastic Approximation Monte Carlo and modified Wang-Landau algorithms: Tests for the Ising model, Abstract: We investigate the behavior of the deviation of the estimator for the density of states (DOS) with respect to the exact solution in the course of Wang-Landau and Stochastic Approximation Monte Carlo (SAMC) simulations of the two-dimensional Ising model. We find that the deviation saturates in the Wang-Landau case. This can be cured by adjusting the refinement scheme. To this end, the 1/t-modification of the Wang-Landau algorithm has been suggested. A similar choice of refinement scheme is employed in the SAMC algorithm. The convergence behavior of all three algorithms is examined. It turns out that the convergence of the SAMC algorithm is very sensitive to the onset of the refinement. Finally, the internal energy and specific heat of the Ising model are calculated from the SAMC DOS and compared to exact values.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the approximation by convolution type double singular integral operators, Abstract: In this paper, we prove the pointwise convergence and the rate of pointwise convergence for a family of singular integral operators in a two-dimensional setting in the following form: \begin{equation*} L_{\lambda }\left( f;x,y\right) =\underset{D}{\iint }f\left( t,s\right) K_{\lambda }\left( t-x,s-y\right) dsdt,\text{ }\left( x,y\right) \in D, \end{equation*} where $D=\left \langle a,b\right \rangle \times \left \langle c,d\right \rangle $ is an arbitrary closed, semi-closed or open rectangle in $\mathbb{R}^{2}$ and $\lambda \in \Lambda$, where $\Lambda$ is a set of non-negative indices with accumulation point $\lambda_{0}$. Also, we provide an example to support these theoretical results. In contrast to previous works, the kernel function $K_{\lambda }\left( t,s\right)$ does not have to be even, positive or $2\pi$-periodic.
[ 0, 0, 1, 0, 0, 0 ]
Title: The Moon Illusion explained by the Projective Consciousness Model, Abstract: The Moon often appears larger near the perceptual horizon and smaller high in the sky though the visual angle subtended is invariant. We show how this illusion results from the optimization of a projective geometrical frame for conscious perception through free energy minimization, as articulated in the Projective Consciousness Model. The model accounts for all documented modulations of the illusion without anomalies (e.g., the size-distance paradox), surpasses other theories in explanatory power, makes sense of inter- and intra-subjective variability vis-a-vis the illusion, and yields new quantitative and qualitative predictions.
[ 0, 0, 0, 0, 1, 0 ]
Title: Approaching the UCT problem via crossed products of the Razak-Jacelon algebra, Abstract: We show that the UCT problem for separable, nuclear $\mathrm C^*$-algebras relies only on whether the UCT holds for crossed products of certain finite cyclic group actions on the Razak-Jacelon algebra. This observation is analogous to and in fact recovers a characterization of the UCT problem in terms of finite group actions on the Cuntz algebra $\mathcal O_2$ established in previous work by the authors. Although based on a similar approach, the new conceptual ingredients in the finite context are the recent advances in the classification of stably projectionless $\mathrm C^*$-algebras, as well as a known characterization of the UCT problem in terms of certain tracially AF $\mathrm C^*$-algebras due to Dadarlat.
[ 0, 0, 1, 0, 0, 0 ]
Title: Some Remarks about the Complexity of Epidemics Management, Abstract: Recent outbreaks of Ebola, H1N1 and other infectious diseases have shown that the assumptions underlying the established theory of epidemics management are too idealistic. For an improvement of procedures and organizations involved in fighting epidemics, extended models of epidemics management are required. The necessary extensions consist in a representation of the management loop and the potential frictions influencing the loop. The effects of the non-deterministic frictions can be taken into account by including the measures of robustness and risk in the assessment of management options. Thus, besides the increased structural complexity resulting from the model extensions, the computational complexity of the task of epidemics management - interpreted as an optimization problem - is increased as well. This is a serious obstacle for analyzing the model and may require additional pre-processing enabling a simplification of the analysis process. The paper closes with an outlook discussing some forthcoming problems.
[ 0, 1, 0, 0, 0, 0 ]
Title: Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions, Abstract: We present a nonlocal electrostatic formulation of nonuniform ions and water molecules with interstitial voids that uses a Fermi-like distribution to account for steric and correlation effects in electrolyte solutions. The formulation is based on the volume exclusion of hard spheres leading to a steric potential and Maxwell's displacement field with Yukawa-type interactions resulting in a nonlocal electric potential. The classical Poisson-Boltzmann model fails to describe steric and correlation effects important in a variety of chemical and biological systems, especially in high field or large concentration conditions found in and near binding sites, ion channels, and electrodes. Steric effects and correlations are apparent when we compare nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in the electric double layer and to experimental measurements on the selectivity of potassium channels for K+ over Na+. The present theory links atomic scale descriptions of the crystallized KcsA channel with macroscopic bulk conditions. Atomic structures and macroscopic conditions determine complex functions of great importance in biology, nanotechnology, and electrochemistry.
[ 0, 1, 0, 0, 0, 0 ]
Title: Learning to Transfer, Abstract: Transfer learning borrows knowledge from a source domain to facilitate learning in a target domain. Two primary issues to be addressed in transfer learning are what and how to transfer. For a pair of domains, adopting different transfer learning algorithms results in different knowledge transferred between them. To discover the optimal transfer learning algorithm that maximally improves the learning performance in the target domain, researchers have to exhaustively explore all existing transfer learning algorithms, which is computationally intractable. As a trade-off, a sub-optimal algorithm is selected, which requires considerable expertise in an ad-hoc way. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how best to transfer by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we first learn a reflection function encoding transfer learning skills from experiences; and 2) we infer what and how to transfer for a newly arrived pair of domains by optimizing the reflection function. Extensive experiments demonstrate the L2T's superiority over several state-of-the-art transfer learning algorithms and its effectiveness on discovering more transferable knowledge.
[ 1, 0, 0, 1, 0, 0 ]
Title: Experimental study of mini-magnetosphere, Abstract: A magnetosphere at ion kinetic scales, or mini-magnetosphere, possesses unusual features as predicted by numerical simulations. However, there are practically no data on the subject from space observations, and the data which are available are far too incomplete. In the present work we describe results of a laboratory experiment on the interaction of a plasma flow with a magnetic dipole, with parameters such that the ion inertia length is smaller than the size of the observed magnetosphere. A detailed structure of the non-coplanar, or out-of-plane, component of the magnetic field has been obtained in the meridian plane. The independence of this component from dipole moment reversal, as was reported in previous work, has been verified. In the tail, distinct lobes and a central current sheet have been observed. It was found that the lobe regions adjacent to the boundary layer are dominated by the non-coplanar component of the magnetic field. The tailward-oriented electric current in the plasma associated with that component appears to be equal to the ion current in the frontal part of the magnetosphere and in the tail current sheet, implying that electrons are stationary in those regions while ions flow by. The obtained data strongly support the proposed model of the mini-magnetosphere based on two-fluid effects as described by the Hall term.
[ 0, 1, 0, 0, 0, 0 ]
Title: Poster Abstract: LPWA-MAC - a Low Power Wide Area network MAC protocol for cyber-physical systems, Abstract: Low-Power Wide-Area Networks (LPWANs) are being successfully used for the monitoring of large-scale systems that are delay-tolerant and which have low-bandwidth requirements. The next step would be instrumenting these for the control of Cyber-Physical Systems (CPSs) distributed over large areas which require more bandwidth, bounded delays and higher reliability or at least more rigorous guarantees therein. This paper presents LPWA-MAC, a novel Low Power Wide-Area network MAC protocol, that ensures bounded end-to-end delays, high channel utility and supports many of the different traffic patterns and data-rates typical of CPS.
[ 1, 0, 0, 0, 0, 0 ]
Title: The STAR MAPS-based PiXeL detector, Abstract: The PiXeL detector (PXL) for the Heavy Flavor Tracker (HFT) of the STAR experiment at RHIC is the first application of the state-of-the-art thin Monolithic Active Pixel Sensors (MAPS) technology in a collider environment. Custom built pixel sensors, their readout electronics and the detector mechanical structure are described in detail. Selected detector design aspects and production steps are presented. The detector operations during the three years of data taking (2014-2016) and the overall performance exceeding the design specifications are discussed in the conclusive sections of this paper.
[ 0, 1, 0, 0, 0, 0 ]
Title: Complete intersection monomial curves and the Cohen-Macaulayness of their tangent cones, Abstract: Let $C({\bf n})$ be a complete intersection monomial curve in the 4-dimensional affine space. In this paper we study the complete intersection property of the monomial curve $C({\bf n}+w{\bf v})$, where $w>0$ is an integer and ${\bf v} \in \mathbb{N}^{4}$. Also we investigate the Cohen-Macaulayness of the tangent cone of $C({\bf n}+w{\bf v})$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Design of Configurable Sequential Circuits in Quantum-dot Cellular Automata, Abstract: Quantum-dot cellular automata (QCA) is a likely candidate for future low power nano-scale electronic devices. Sequential circuits in QCA attract more attention due to their numerous applications in the digital industry. On the other hand, configurable devices provide low device cost and efficient utilization of device area. Since the fundamental building block of any sequential logic circuit is the flip-flop, constructing configurable, multi-purpose QCA flip-flops is of prime importance in current research. This work proposes a design of a configurable flip-flop (CFF), which is the first of its kind in the QCA domain. The proposed flip-flop can be configured to D, T and JK flip-flop by configuring its control inputs. In addition, to make the configurable flip-flop more efficient, a clock pulse generator (CPG) is designed which can trigger on all types of clock edges (falling, rising and dual). The same CFF design is used to realize an edge configurable (dual/rising/falling) flip-flop with the help of the CPG. The biggest advantage of using the edge configurable (dual/rising/falling) flip-flop is that it can be used in 9 different ways using the same single circuit. All the proposed designs are verified using the QCADesigner simulator.
[ 1, 0, 0, 0, 0, 0 ]
Title: Biderivations of the twisted Heisenberg-Virasoro algebra and their applications, Abstract: In this paper, the biderivations without the skew-symmetric condition of the twisted Heisenberg-Virasoro algebra are presented. We find some non-inner and non-skew-symmetric biderivations. As applications, the characterizations of the forms of linear commuting maps and the commutative post-Lie algebra structures on the twisted Heisenberg-Virasoro algebra are given. It is also proved that every biderivation of the graded twisted Heisenberg-Virasoro left-symmetric algebra is trivial.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Data Science Approach to Understanding Residential Water Contamination in Flint, Abstract: When the residents of Flint learned that lead had contaminated their water system, the local government made water-testing kits available to them free of charge. The city government published the results of these tests, creating a valuable dataset that is key to understanding the causes and extent of the lead contamination event in Flint. This is the nation's largest dataset on lead in a municipal water system. In this paper, we predict the lead contamination for each household's water supply, and we study several related aspects of Flint's water troubles, many of which generalize well beyond this one city. For example, we show that elevated lead risks can be (weakly) predicted from observable home attributes. Then we explore the factors associated with elevated lead. These risk assessments were developed in part via a crowd sourced prediction challenge at the University of Michigan. To inform Flint residents of these assessments, they have been incorporated into a web and mobile application funded by \texttt{Google.org}. We also explore questions of self-selection in the residential testing program, examining which factors are linked to when and how frequently residents voluntarily sample their water.
[ 1, 0, 0, 1, 0, 0 ]
Title: Sum-Product Graphical Models, Abstract: This paper introduces a new probabilistic architecture called Sum-Product Graphical Model (SPGM). SPGMs combine traits from Sum-Product Networks (SPNs) and Graphical Models (GMs): Like SPNs, SPGMs always enable tractable inference using a class of models that incorporate context specific independence. Like GMs, SPGMs provide a high-level model interpretation in terms of conditional independence assumptions and corresponding factorizations. Thus, the new architecture represents a class of probability distributions that combines, for the first time, the semantics of graphical models with the evaluation efficiency of SPNs. We also propose a novel algorithm for learning both the structure and the parameters of SPGMs. A comparative empirical evaluation demonstrates competitive performances of our approach in density estimation.
[ 1, 0, 0, 1, 0, 0 ]
Title: Dependence of the Martian radiation environment on atmospheric depth: Modeling and measurement, Abstract: The energetic particle environment on the Martian surface is influenced by solar and heliospheric modulation and changes in the local atmospheric pressure (or column depth). The Radiation Assessment Detector (RAD) on board the Mars Science Laboratory rover Curiosity on the surface of Mars has been measuring this effect for over four Earth years (about two Martian years). The anticorrelation between the recorded surface Galactic Cosmic Ray-induced dose rates and pressure changes has been investigated by Rafkin et al. (2014) and the long-term solar modulation has also been empirically analyzed and modeled by Guo et al. (2015). This paper employs the newly updated HZETRN2015 code to model the Martian atmospheric shielding effect on the accumulated dose rates and the change of this effect under different solar modulation and atmospheric conditions. The modeled results are compared with the most up-to-date (from 14 August 2012 to 29 June 2016) observations of the RAD instrument on the surface of Mars. Both model and measurements agree reasonably well and show the atmospheric shielding effect under weak solar modulation conditions and the decline of this effect as solar modulation becomes stronger. This result is important for better risk estimations of future human explorations to Mars under different heliospheric and Martian atmospheric conditions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign, Abstract: Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress' investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by U.S. Congress investigation. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36x more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users respectively were bots. Text analysis on the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian Trolls in the period leading up to the 2016 U.S. Presidential election, it was mainly conservatives who helped amplify their message.
[ 1, 0, 0, 0, 0, 0 ]
Title: Computing metric hulls in graphs, Abstract: We prove that, given a closure function, the smallest preimage of a closed set can be calculated in polynomial time in the number of closed sets. This confirms a conjecture of Albenque and Knauer and implies that there is a polynomial time algorithm to compute the convex hull-number of a graph, when all its convex subgraphs are given as input. We then show that deciding whether the smallest preimage of a closed set is logarithmic in the size of the ground set is LOGSNP-complete if only the ground set is given. A special instance of this problem is computing the dimension of a poset given its linear extension graph, which was conjectured to be in P. The attempt to show that the latter problem is LOGSNP-complete leads to several interesting questions and to the definition of the isometric hull, i.e., a smallest isometric subgraph containing a given set of vertices $S$. While for $|S|=2$ an isometric hull is just a shortest path, we show that computing the isometric hull of a set of vertices is NP-complete even if $|S|=3$. Finally, we consider the problem of computing the isometric hull-number of a graph and show that computing it is $\Sigma^P_2$-complete.
[ 1, 0, 0, 0, 0, 0 ]
Title: Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning, Abstract: The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval. The source code of the experiments is publicly available.
[ 0, 0, 0, 1, 0, 0 ]
Title: AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks, Abstract: ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its basic operations include group, channel-wise convolution and channel shuffling. However, channel shuffling is manually designed empirically. Mathematically, shuffling is a multiplication by a permutation matrix. In this paper, we propose to automate channel shuffling by learning permutation matrices in network training. We introduce an exact Lipschitz continuous non-convex penalty so that it can be incorporated in the stochastic gradient descent to approximate permutation at high precision. Exact permutations are obtained by simple rounding at the end of training and are used in inference. The resulting network, referred to as AutoShuffleNet, achieved improved classification accuracies on CIFAR-10 and ImageNet data sets. In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance. We prove theoretically the exactness (error bounds) in recovering permutation matrices when our penalty function is zero (very small). We present examples of permutation optimization through graph matching and two-layer neural network models where the loss functions are calculated in closed analytical form. In the examples, convex relaxation failed to capture permutations whereas our penalty succeeded.
[ 1, 0, 0, 1, 0, 0 ]
Title: ACDC: Altering Control Dependence Chains for Automated Patch Generation, Abstract: Once a failure is observed, the primary concern of the developer is to identify what caused it in order to repair the code that induced the incorrect behavior. Until a permanent repair is afforded, code repair patches are invaluable. The aim of this work is to devise an automated patch generation technique that proceeds as follows: Step 1) It identifies a set of failure-causing control dependence chains that are minimal in terms of number and length. Step 2) It identifies a set of predicates within the chains along with associated execution instances, such that negating the predicates at the given instances would exhibit correct behavior. Step 3) For each candidate predicate, it creates a classifier that dictates when the predicate should be negated to yield correct program behavior. Step 4) Prior to each candidate predicate, the faulty program is injected with a call to its corresponding classifier passing it the program state and getting a return value predictively indicating whether to negate the predicate or not. The role of the classifiers is to ensure that: 1) the predicates are not negated during passing runs; and 2) the predicates are negated at the appropriate instances within failing runs. We implemented our patch generation approach for the Java platform and evaluated our toolset using 148 defects from the Introclass and Siemens benchmarks. The toolset identified 56 full patches and another 46 partial patches, and the classification accuracy averaged 84%.
[ 1, 0, 0, 0, 0, 0 ]
Title: Microlensing of Extremely Magnified Stars near Caustics of Galaxy Clusters, Abstract: Recent observations of lensed galaxies at cosmological distances have detected individual stars that are extremely magnified when crossing the caustics of lensing clusters. In idealized cluster lenses with smooth mass distributions, two images of a star of radius $R$ approaching a caustic brighten as $t^{-1/2}$ and reach a peak magnification $\sim 10^{6}\, (10\, R_{\odot}/R)^{1/2}$ before merging on the critical curve. We show that a mass fraction ($\kappa_\star \gtrsim \, 10^{-4.5}$) in microlenses inevitably disrupts the smooth caustic into a network of corrugated microcaustics, and produces light curves with numerous peaks. Using analytical calculations and numerical simulations, we derive the characteristic width of the network, caustic-crossing frequencies, and peak magnifications. For the lens parameters of a recent detection and a population of intracluster stars with $\kappa_\star \sim 0.01$, we find a source-plane width of $\sim 20 \, {\rm pc}$ for the caustic network, which spans $0.2 \, {\rm arcsec}$ on the image plane. A source star takes $\sim 2\times 10^4$ years to cross this width, with a total of $\sim 6 \times 10^4$ crossings, each one lasting for $\sim 5\,{\rm hr}\,(R/10\,R_\odot)$ with typical peak magnifications of $\sim 10^{4} \left( R/ 10\,R_\odot \right)^{-1/2}$. The exquisite sensitivity of caustic-crossing events to the granularity of the lens-mass distribution makes them ideal probes of dark matter components, such as compact halo objects and ultralight axion dark matter.
[ 0, 1, 0, 0, 0, 0 ]
Title: Convergence rate of a simulated annealing algorithm with noisy observations, Abstract: In this paper we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem. More precisely, we address the problem of finding a global minimizer of a function with noisy evaluations. We provide a rate of convergence and its optimized parametrization to ensure a minimal number of evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments that assess the practical performance both on benchmark test cases and on real-world examples.
[ 0, 0, 1, 1, 0, 0 ]
Title: Trans-allelic model for prediction of peptide:MHC-II interactions, Abstract: Major histocompatibility complex class two (MHC-II) molecules are trans-membrane proteins and key components of the cellular immune system. Upon recognition of foreign peptides expressed on the MHC-II binding groove, helper T cells mount an immune response against invading pathogens. Therefore, mechanistic identification and knowledge of physico-chemical features that govern interactions between peptides and MHC-II molecules is useful for the design of effective epitope-based vaccines, as well as for understanding of immune responses. In this paper, we present a comprehensive trans-allelic prediction model, a generalized version of our previous biophysical model, that can predict peptide interactions for all three human MHC-II loci (HLA-DR, HLA-DP and HLA-DQ), using both peptide sequence data and structural information of MHC-II molecules. The advantage of this approach over other machine learning models is that it offers a simple and plausible physical explanation for peptide-MHC-II interactions. We train the model using a benchmark experimental dataset, and measure its predictive performance using novel data. Despite its relative simplicity, we find that the model has comparable performance to the state-of-the-art method. Focusing on the physical bases of peptide-MHC binding, we find support for previous theoretical predictions about the contributions of certain binding pockets to the binding energy. Additionally, we find that binding pockets P4 and P5 of HLA-DP, which were not previously considered as primary anchors, do make strong contributions to the binding energy. Together, the results indicate that our model can serve as a useful complement to alternative approaches to predicting peptide-MHC interactions.
[ 0, 0, 0, 1, 0, 0 ]
Title: Improved Speech Reconstruction from Silent Video, Abstract: Speechreading is the task of inferring phonetic information from visually observed articulatory facial movements, and is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible and natural-sounding acoustic speech signal from silent video frames of a speaking person. We train our model on speakers from the GRID and TCD-TIMIT datasets, and evaluate the quality and intelligibility of reconstructed speech using common objective measurements. We show that speech predictions from the proposed model attain scores which indicate significantly improved quality over existing models. In addition, we show promising results towards reconstructing speech from an unconstrained dictionary.
[ 1, 0, 0, 0, 0, 0 ]
Title: Tangent measures of elliptic harmonic measure and applications, Abstract: Tangent measure and blow-up methods are powerful tools for understanding the relationship between the infinitesimal structure of the boundary of a domain and the behavior of its harmonic measure. We introduce a method for studying tangent measures of elliptic measures in arbitrary domains associated with (possibly non-symmetric) elliptic operators in divergence form whose coefficients have vanishing mean oscillation at the boundary. In this setting, we show the following for domains $ \Omega \subset \mathbb{R}^{n+1}$: 1. We extend the results of Kenig, Preiss, and Toro [KPT09] by showing mutual absolute continuity of interior and exterior elliptic measures for {\it any} domains implies the tangent measures are a.e. flat and the elliptic measures have dimension $n$. 2. We generalize the work of Kenig and Toro [KT06] and show that VMO equivalence of doubling interior and exterior elliptic measures for general domains implies the tangent measures are always elliptic polynomials. 3. In a uniform domain that satisfies the capacity density condition and whose boundary is locally finite and has a.e. positive lower $n$-Hausdorff density, we show that if the elliptic measure is absolutely continuous with respect to $n$-Hausdorff measure then the boundary is rectifiable. This generalizes the work of Akman, Badger, Hofmann, and Martell [ABHM17]. Finally, we generalize one of the main results of [Bad11] by showing that if $\omega$ is a Radon measure for which all tangent measures at a point are harmonic polynomials vanishing at the origin, then they are all homogeneous harmonic polynomials.
[ 0, 0, 1, 0, 0, 0 ]
Title: Parametric gain and wavelength conversion via third order nonlinear optics in a CMOS compatible waveguide, Abstract: We demonstrate sub-picosecond wavelength conversion in the C-band via four wave mixing in a 45 cm long high index doped silica spiral waveguide. We achieve an on/off conversion efficiency (signal to idler) of +16.5 dB as well as a parametric gain of +15 dB for a peak pump power of 38 W over a wavelength range of 100 nm. Furthermore, we demonstrated a minimum gain of +5 dB over a wavelength range as large as 200 nm.
[ 0, 1, 0, 0, 0, 0 ]
Title: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization, Abstract: In this paper, we develop a new accelerated stochastic gradient method for efficiently solving the convex regularized empirical risk minimization problem in mini-batch settings. The use of mini-batches is becoming a gold standard in the machine learning community, because mini-batch settings stabilize the gradient estimate and can easily make good use of parallel computing. The core of our proposed method is the incorporation of our new "double acceleration" technique and variance reduction technique. We theoretically analyze our proposed method and show that our method substantially improves the mini-batch efficiencies of previous accelerated stochastic methods, and essentially only needs mini-batches of size $\sqrt{n}$ for achieving the optimal iteration complexities for both non-strongly and strongly convex objectives, where $n$ is the training set size. Further, we show that even in non-mini-batch settings, our method achieves the best known convergence rate for both non-strongly and strongly convex objectives.
[ 1, 0, 1, 1, 0, 0 ]
Title: Automatically Leveraging MapReduce Frameworks for Data-Intensive Applications, Abstract: MapReduce is a popular programming paradigm for developing large-scale, data-intensive computation. Many frameworks that implement this paradigm have recently been developed. To leverage these frameworks, however, developers must become familiar with their APIs and rewrite existing code. Casper is a new tool that automatically translates sequential Java programs into the MapReduce paradigm. Casper identifies potential code fragments to rewrite and translates them in two steps: (1) Casper uses program synthesis to search for a program summary (i.e., a functional specification) of each code fragment. The summary is expressed using a high-level intermediate language resembling the MapReduce paradigm and verified to be semantically equivalent to the original using a theorem prover. (2) Casper generates executable code from the summary, using either the Hadoop, Spark, or Flink API. We evaluated Casper by automatically converting real-world, sequential Java benchmarks to MapReduce. The resulting benchmarks perform up to 48.2x faster compared to the original.
[ 1, 0, 0, 0, 0, 0 ]
Title: Deep SNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data, Abstract: Diagnosis and risk stratification of cancer and many other diseases require the detection of genomic breakpoints as a prerequisite of calling copy number alterations (CNA). This, however, is still challenging and requires time-consuming manual curation. As deep-learning methods outperformed classical state-of-the-art algorithms in various domains and have also been successfully applied to life science problems including medicine and biology, we here propose Deep SNP, a novel Deep Neural Network to learn from genomic data. Specifically, we used a manually curated dataset from 12 genomic single nucleotide polymorphism array (SNPa) profiles as truth-set and aimed at predicting the presence or absence of genomic breakpoints, an indicator of structural chromosomal variations, in windows of 40,000 probes. We compare our results with well-known neural network models as well as Rawcopy, though this tool is designed to predict breakpoints and in addition genomic segments with high sensitivity. We show that Deep SNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models. Qualitative examples suggest that integration of a localization unit may enable breakpoint detection and prediction of genomic segments, even if the breakpoint coordinates were not provided for network training. These results warrant further evaluation of DeepSNP for breakpoint localization and subsequent calling of genomic segments.
[ 0, 0, 0, 0, 1, 0 ]
Title: $(L,M)$-fuzzy convex structures, Abstract: In this paper, the notion of $(L,M)$-fuzzy convex structures is introduced. It is a generalization of $L$-convex structures and $M$-fuzzifying convex structures. In our definition of $(L,M)$-fuzzy convex structures, each $L$-fuzzy subset can be regarded as an $L$-convex set to some degree. The notion of convexity preserving functions is also generalized to lattice-valued case. Moreover, under the framework of $(L,M)$-fuzzy convex structures, the concepts of quotient structures, substructures and products are presented and their fundamental properties are discussed. Finally, we create a functor $\omega$ from $\mathbf{MYCS}$ to $\mathbf{LMCS}$ and show that there exists an adjunction between $\mathbf{MYCS}$ and $\mathbf{LMCS}$, where $\mathbf{MYCS}$ and $\mathbf{LMCS}$ denote the category of $M$-fuzzifying convex structures, and the category of $(L,M)$-fuzzy convex structures, respectively.
[ 0, 0, 1, 0, 0, 0 ]
Title: Counting intersecting and pairs of cross-intersecting families, Abstract: A family of subsets of $\{1,\ldots,n\}$ is called {\it intersecting} if any two of its sets intersect. A classical result in extremal combinatorics due to Erdős, Ko, and Rado determines the maximum size of an intersecting family of $k$-subsets of $\{1,\ldots, n\}$. In this paper we study the following problem: how many intersecting families of $k$-subsets of $\{1,\ldots, n\}$ are there? Improving a result of Balogh, Das, Delcourt, Liu, and Sharifzadeh, we determine this quantity asymptotically for $n\ge 2k+2+2\sqrt{k\log k}$ and $k\to \infty$. Moreover, under the same assumptions we also determine asymptotically the number of {\it non-trivial} intersecting families, that is, intersecting families for which the intersection of all sets is empty. We obtain analogous results for pairs of cross-intersecting families.
[ 1, 0, 1, 0, 0, 0 ]
Title: Zero-field Skyrmions with a High Topological Number in Itinerant Magnets, Abstract: Magnetic skyrmions are swirling spin textures with topologically protected noncoplanarity. Recently, skyrmions with the topological number of unity have been extensively studied in both experiment and theory. We here show that a skyrmion crystal with an unusually high topological number of two is stabilized in itinerant magnets at zero magnetic field. The results are obtained for a minimal Kondo lattice model on a triangular lattice by an unrestricted large-scale numerical simulation and variational calculations. We find that the topological number can be switched by a magnetic field as $2\leftrightarrow 1\leftrightarrow 0$. The skyrmion crystals are formed by the superpositions of three spin density waves induced by the Fermi surface effect, and hence, the size of skyrmions can be controlled by the band structure and electron filling. We also discuss the charge and spin textures of itinerant electrons in the skyrmion crystals which are directly obtained in our numerical simulations.
[ 0, 1, 0, 0, 0, 0 ]
Title: DoShiCo Challenge: Domain Shift in Control Prediction, Abstract: Training deep neural network policies end-to-end for real-world applications so far requires big demonstration datasets in the real world or big sets consisting of a large variety of realistic and closely related 3D CAD models. These real or virtual data should, moreover, have very similar characteristics to the conditions expected at test time. These stringent requirements and the time-consuming data collection processes that they entail are currently the most important impediment that keeps deep reinforcement learning from being deployed in real-world applications. Therefore, in this work we advocate an alternative approach, where instead of avoiding any domain shift by carefully selecting the training data, the goal is to learn a policy that can cope with it. To this end, we propose the DoShiCo challenge: to train a model in very basic synthetic environments, far from realistic, in a way that it can be applied in more realistic environments as well as take the control decisions on real-world data. In particular, we focus on the task of collision avoidance for drones. We created a set of simulated environments that can be used as benchmark and implemented a baseline method, exploiting depth prediction as an auxiliary task to help overcome the domain shift. Even though the policy is trained in very basic environments, it can learn to fly without collisions in a very different realistic simulated environment. Of course several benchmarks for reinforcement learning already exist - but they never include a large domain shift. On the other hand, several benchmarks in computer vision focus on the domain shift, but they take the form of static datasets instead of simulated environments. In this work we claim that it is crucial to take the two challenges together in one benchmark.
[ 1, 0, 0, 0, 0, 0 ]
Title: Optimal Stopping for Interval Estimation in Bernoulli Trials, Abstract: We propose an optimal sequential methodology for obtaining confidence intervals for a binomial proportion $\theta$. Assuming that an i.i.d. random sequence of Bernoulli($\theta$) trials is observed sequentially, we are interested in designing a) a stopping time $T$ that will decide when is the best time to stop sampling the process, and b) an optimum estimator $\hat{\theta}_{T}$ that will provide the optimum center of the interval estimate of $\theta$. We follow a semi-Bayesian approach, where we assume that there exists a prior distribution for $\theta$, and our goal is to minimize the average number of samples while we guarantee a minimal coverage probability level. The solution is obtained by applying standard optimal stopping theory and computing the optimum pair $(T,\hat{\theta}_{T})$ numerically. Regarding the optimum stopping time component $T$, we demonstrate that it enjoys certain very uncommon characteristics not encountered in solutions of other classical optimal stopping problems. Finally, we compare our method with the optimum fixed-sample-size procedure but also with existing alternative sequential schemes.
[ 0, 0, 0, 1, 0, 0 ]
Title: Derivation of a Non-autonomous Linear Boltzmann Equation from a Heterogeneous Rayleigh Gas, Abstract: A linear Boltzmann equation with nonautonomous collision operator is rigorously derived in the Boltzmann-Grad limit for the deterministic dynamics of a Rayleigh gas where a tagged particle is undergoing hard-sphere collisions with heterogeneously distributed background particles, which do not interact among each other. The validity of the linear Boltzmann equation holds for arbitrary long times under moderate assumptions on spatial continuity and higher moments of the initial distributions of the tagged particle and the heterogeneous, non-equilibrium distribution of the background. The empiric particle dynamics are compared to the Boltzmann dynamics using evolution semigroups for Kolmogorov equations of associated probability measures on collision histories.
[ 0, 0, 1, 0, 0, 0 ]
Title: Forward Collision Vehicular Radar with IEEE 802.11: Feasibility Demonstration through Measurements, Abstract: Increasing safety and automation in transportation systems has led to the proliferation of radar and IEEE 802.11 dedicated short range communication (DSRC) in vehicles. Current implementations of vehicular radar devices, however, are expensive, use a substantial amount of bandwidth, and are susceptible to multiple security risks. We consider the feasibility of using an IEEE 802.11 orthogonal frequency division multiplexing (OFDM) communications waveform to perform radar functions. In this paper, we present an approach that determines the mean-normalized channel energy from frequency domain channel estimates and models it as a direct sinusoidal function of target range, enabling closest target range estimation. In addition, we propose an alternative to vehicular forward collision detection by extending IEEE 802.11 dedicated short-range communications (DSRC) and WiFi technology to radar, providing a foundation for a joint communications and radar framework. Furthermore, we perform an experimental demonstration using existing IEEE 802.11 devices with minimal modification through algorithm processing on frequency-domain channel estimates. The results of this paper show that our solution delivers similar accuracy and reliability to mmWave radar devices with as little as 20 MHz of spectrum (doubling DSRC's 10 MHz allocation), indicating significant potential for industrial devices with joint vehicular communications and radar capabilities.
[ 1, 0, 0, 0, 0, 0 ]
Title: Holistic Interstitial Lung Disease Detection using Deep Convolutional Neural Networks: Multi-label Learning and Unordered Pooling, Abstract: Accurately predicting and detecting interstitial lung disease (ILD) patterns given any computed tomography (CT) slice without any pre-processing prerequisites, such as manually delineated regions of interest (ROIs), is a clinically desirable, yet challenging goal. The majority of existing work relies on manually-provided ILD ROIs to extract sampled 2D image patches from CT slices and, from there, performs patch-based ILD categorization. Acquiring manual ROIs is labor intensive and serves as a bottleneck towards fully-automated CT imaging ILD screening over large-scale populations. Furthermore, despite the considerably high frequency of more than one ILD pattern on a single CT slice, previous works are only designed to detect one ILD pattern per slice or patch. To tackle these two critical challenges, we present multi-label deep convolutional neural networks (CNNs) for detecting ILDs from holistic CT slices (instead of ROIs or sub-images). Conventional single-labeled CNN models can be augmented to cope with the possible presence of multiple ILD pattern labels, via 1) continuous-valued deep regression based robust norm loss functions or 2) a categorical objective as the sum of element-wise binary logistic losses. Our methods are evaluated and validated using a publicly available database of 658 patient CT scans under five-fold cross-validation, achieving promising performance on detecting four major ILD patterns: Ground Glass, Reticular, Honeycomb, and Emphysema. We also investigate the effectiveness of a CNN activation-based deep-feature encoding scheme using Fisher vector encoding, which treats ILD detection as spatially-unordered deep texture classification.
[ 1, 0, 0, 0, 0, 0 ]
Title: The norm residue symbol for higher local fields, Abstract: Since the development of higher local class field theory, several explicit reciprocity laws have been constructed. In particular, there are formulas describing the higher-dimensional Hilbert symbol given, among others, by M. Kurihara, A. Zinoviev and S. Vostokov. K. Kato also has explicit formulas for the higher-dimensional Kummer pairing associated to certain (one-dimensional) $p$-divisible groups. In this paper we construct an explicit reciprocity law describing the Kummer pairing associated to any (one-dimensional) formal group. The formulas are a generalization to higher-dimensional local fields of Kolyvagin's reciprocity laws. The formulas obtained describe the values of the pairing in terms of multidimensional $p$-adic differentiation, the logarithm of the formal group, the generalized trace and the norm on Milnor K-groups. In the second part of this paper, we will apply the results obtained here to give explicit formulas for the generalized Hilbert symbol and the Kummer pairing associated to a Lubin-Tate formal group. The results obtained in the second paper constitute a generalization to higher local fields, of the formulas of Artin-Hasse, K. Iwasawa and A. Wiles.
[ 0, 0, 1, 0, 0, 0 ]
Title: Interpretable Structure-Evolving LSTM, Abstract: This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchical graph structures. Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. We then produce the new graph structure with a Metropolis-Hastings algorithm, which alleviates the risk of getting stuck in local optima by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.
[ 1, 0, 0, 0, 0, 0 ]
Title: On Optimization over Tail Distributions, Abstract: We investigate the use of optimization to compute bounds for extremal performance measures. This approach takes a non-parametric viewpoint that aims to alleviate the issue of model misspecification possibly encountered by conventional methods in extreme event analysis. We make two contributions towards solving these formulations, paying especial attention to the arising tail issues. First, we provide a technique in parallel to Choquet's theory, via a combination of integration by parts and change of measures, to transform shape constrained problems (e.g., monotonicity of derivatives) into families of moment problems. Second, we show how a moment problem cast over infinite support can be reformulated into a problem over compact support with an additional slack variable. In the context of optimization over tail distributions, the latter helps resolve the issue of non-convergence of solutions when using algorithms such as generalized linear programming. We further demonstrate the applicability of this result to problems with infinite-value constraints, which can arise in modeling heavy tails.
[ 0, 0, 0, 1, 0, 0 ]
Title: Isotropic covariance functions on graphs and their edges, Abstract: We develop parametric classes of covariance functions on linear networks and their extension to graphs with Euclidean edges, i.e., graphs with edges viewed as line segments or more general sets with a coordinate system allowing us to consider points on the graph which are vertices or points on an edge. Our covariance functions are defined on the vertices and edge points of these graphs and are isotropic in the sense that they depend only on the geodesic distance or on a new metric called the resistance metric (which extends the classical resistance metric developed in electrical network theory on the vertices of a graph to the continuum of edge points). We discuss the advantages of using the resistance metric in comparison with the geodesic metric as well as the restrictions these metrics impose on the investigated covariance functions. In particular, many of the commonly used isotropic covariance functions in the spatial statistics literature (the power exponential, Matérn, generalized Cauchy, and Dagum classes) are shown to be valid with respect to the resistance metric for any graph with Euclidean edges, whilst they are only valid with respect to the geodesic metric in more special cases.
[ 0, 0, 1, 1, 0, 0 ]
Title: Representations of language in a model of visually grounded speech signal, Abstract: We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
[ 1, 0, 0, 0, 0, 0 ]
Title: Evolutionary dynamics of cooperation in neutral populations, Abstract: Cooperation is a difficult proposition in the face of Darwinian selection. Those that defect have an evolutionary advantage over cooperators, who should therefore die out. However, spatial structure enables cooperators to survive through the formation of homogeneous clusters, which is the hallmark of network reciprocity. Here we go beyond this traditional setup and study the spatiotemporal dynamics of cooperation in a population of populations. We use the prisoner's dilemma game as the mathematical model and show that considering several populations simultaneously gives rise to fascinating spatiotemporal dynamics and pattern formation. Even the simplest assumption that strategies between different populations are payoff-neutral with one another results in the spontaneous emergence of cyclic dominance, where defectors of one population become prey of cooperators in the other population, and vice versa. Moreover, if social interactions within different populations are characterized by significantly different temptations to defect, we observe that defectors in the population with the largest temptation counterintuitively vanish the fastest, while cooperators that hang on eventually take over the whole available space. Our results reveal that considering the simultaneous presence of different populations significantly expands the complexity of evolutionary dynamics in structured populations, and allows us to understand the stability of cooperation under adverse conditions that could never be bridged by network reciprocity alone.
[ 1, 0, 0, 0, 0, 0 ]
Title: Large global-in-time solutions to a nonlocal model of chemotaxis, Abstract: We consider the parabolic-elliptic model for the chemotaxis with fractional (anomalous) diffusion. Global-in-time solutions are constructed under (nearly) optimal assumptions on the size of radial initial data. Moreover, criteria for blowup of radial solutions in terms of suitable Morrey spaces norms are derived.
[ 0, 0, 1, 0, 0, 0 ]
Title: The strictly-correlated electron functional for spherically symmetric systems revisited, Abstract: The strong-interaction limit of the Hohenberg-Kohn functional defines a multimarginal optimal transport problem with Coulomb cost. From physical arguments, the solution of this limit is expected to yield strictly-correlated particle positions, related to each other by co-motion functions (or optimal maps), but the existence of such a deterministic solution in the general three-dimensional case is still an open question. A conjecture for the co-motion functions for radially symmetric densities was presented in Phys.~Rev.~A {\bf 75}, 042511 (2007), and later used to build approximate exchange-correlation functionals for electrons confined in low-density quantum dots. Colombo and Stra [Math.~Models Methods Appl.~Sci., {\bf 26} 1025 (2016)] have recently shown that these conjectured maps are not always optimal. Here we revisit the whole issue both from the formal and numerical point of view, finding that even if the conjectured maps are not always optimal, they still yield an interaction energy (cost) that is numerically very close to the true minimum. We also prove that the functional built from the conjectured maps has the expected functional derivative also when they are not optimal.
[ 0, 1, 0, 0, 0, 0 ]
Title: Accelerated Sparse Subspace Clustering, Abstract: State-of-the-art algorithms for sparse subspace clustering perform spectral clustering on a similarity matrix typically obtained by representing each data point as a sparse combination of other points using either basis pursuit (BP) or orthogonal matching pursuit (OMP). BP-based methods are often prohibitive in practice while the performance of OMP-based schemes is unsatisfactory, especially in settings where data points are highly similar. In this paper, we propose a novel algorithm that exploits an accelerated variant of orthogonal least-squares to efficiently find the underlying subspaces. We show that under certain conditions the proposed algorithm returns a subspace-preserving solution. Simulation results illustrate that the proposed method compares favorably with BP-based methods in terms of running time while being significantly more accurate than OMP-based schemes.
[ 1, 0, 0, 1, 0, 0 ]