Columns:
  title                  string  (lengths 7 to 239)
  abstract               string  (lengths 7 to 2.76k)
  cs                     int64   (0 or 1)
  phy                    int64   (0 or 1)
  math                   int64   (0 or 1)
  stat                   int64   (0 or 1)
  quantitative biology   int64   (0 or 1)
  quantitative finance   int64   (0 or 1)
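For orientation, here is a minimal Python sketch of how records with this schema could be loaded and inspected. The file name "arxiv_abstracts.csv", the CSV format, and the use of pandas are illustrative assumptions, not part of the dataset description.

# Minimal sketch (assumption: the records live in a CSV file named
# "arxiv_abstracts.csv" with exactly the columns listed above).
import pandas as pd

LABEL_COLUMNS = [
    "cs", "phy", "math", "stat",
    "quantitative biology", "quantitative finance",
]

df = pd.read_csv("arxiv_abstracts.csv")

# Each record is a title, an abstract, and six binary topic indicators.
for _, row in df.head(3).iterrows():
    topics = [c for c in LABEL_COLUMNS if row[c] == 1]
    print(row["title"])
    print("  labels:", ", ".join(topics) if topics else "none")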
ZFIRE: The Evolution of the Stellar Mass Tully-Fisher Relation to Redshift 2.0 < Z < 2.5 with MOSFIRE
Using observations made with MOSFIRE on Keck I as part of the ZFIRE survey, we present the stellar mass Tully-Fisher relation at 2.0 < z < 2.5. The sample was drawn from a stellar mass limited, Ks-band selected catalog from ZFOURGE over the CANDELS area in the COSMOS field. We model the shear of the Halpha emission line to derive rotational velocities at 2.2X the scale radius of an exponential disk (V2.2). We correct for the blurring effect of a two-dimensional PSF and the fact that the MOSFIRE PSF is better approximated by a Moffat than a Gaussian, which is more typically assumed for natural seeing. We find for the Tully-Fisher relation at 2.0 < z < 2.5 that logV2.2 =(2.18 +/- 0.051)+(0.193 +/- 0.108)(logM/Msun - 10) and infer an evolution of the zeropoint of Delta M/Msun = -0.25 +/- 0.16 dex or Delta M/Msun = -0.39 +/- 0.21 dex compared to z = 0 when adopting a fixed slope of 0.29 or 1/4.5, respectively. We also derive the alternative kinematic estimator S0.5, with a best-fit relation logS0.5 =(2.06 +/- 0.032)+(0.211 +/- 0.086)(logM/Msun - 10), and infer an evolution of Delta M/Msun= -0.45 +/- 0.13 dex compared to z < 1.2 if we adopt a fixed slope. We investigate and review various systematics, ranging from PSF effects, projection effects, systematics related to stellar mass derivation, selection biases and slope. We find that discrepancies between the various literature values are reduced when taking these into account. Our observations correspond well with the gradual evolution predicted by semi-analytic models.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
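For readability, the two best-fit relations quoted in the ZFIRE abstract above can be set as displayed equations (values copied verbatim from the abstract; the notation M/M_sun follows its "logM/Msun"):

\[
\log V_{2.2} = (2.18 \pm 0.051) + (0.193 \pm 0.108)\left(\log \frac{M}{M_\odot} - 10\right),
\]
\[
\log S_{0.5} = (2.06 \pm 0.032) + (0.211 \pm 0.086)\left(\log \frac{M}{M_\odot} - 10\right).
\]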
A new computational method for a model of C. elegans biomechanics: Insights into elasticity and locomotion performance
The ability to move freely is a fundamental behaviour in the animal kingdom. Understanding animal locomotion requires a characterisation of the material properties of the animal, as well as of its biomechanics and physiology. We present a biomechanical model of C. elegans locomotion together with a novel finite element method. We formulate our model as a nonlinear initial-boundary value problem, which allows the study of the dynamics of arbitrary body shapes, undulation gaits and the link between the animal's material properties and its performance across a range of environments. Our model replicates behaviours across a wide range of environments. It makes strong predictions about the viable range of the worm's Young's modulus and suggests that animals can control speed via the known mechanism of gait modulation that is observed across different media.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Word forms - not just their lengths - are optimized for efficient communication
The inverse relationship between the length of a word and the frequency of its use, first identified by G.K. Zipf in 1935, is a classic empirical law that holds across a wide range of human languages. We demonstrate that length is one aspect of a much more general property of words: how distinctive they are with respect to other words in a language. Distinctiveness plays a critical role in recognizing words in fluent speech, in that it reflects the strength of potential competitors when selecting the best candidate for an ambiguous signal. Phonological information content, a measure of a word's string probability under a statistical model of a language's sound or character sequences, concisely captures distinctiveness. Examining large-scale corpora from 13 languages, we find that distinctiveness significantly outperforms word length as a predictor of frequency. This finding provides evidence that listeners' processing constraints shape fine-grained aspects of word forms across languages.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
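The abstract above measures a word's distinctiveness as its string probability under a statistical model of the language's sound or character sequences. Below is a minimal sketch of that kind of measure using a character bigram model; the tiny corpus, add-one smoothing, and "#" boundary marker are illustrative assumptions, not the authors' exact model.

# Phonological information content as the negative log probability of a
# word's character string under a character bigram model (illustrative only).
import math
from collections import Counter

corpus = ["the", "cat", "sat", "on", "the", "mat", "that", "cat"]

bigrams = Counter()
unigrams = Counter()
for word in corpus:
    chars = "#" + word + "#"           # word-boundary markers
    for a, b in zip(chars, chars[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

alphabet = set("".join(corpus)) | {"#"}

def information_content(word):
    """Negative log2 probability of the word's character sequence."""
    chars = "#" + word + "#"
    logp = 0.0
    for a, b in zip(chars, chars[1:]):
        # add-one smoothed conditional probability P(b | a)
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + len(alphabet))
        logp += math.log2(p)
    return -logp

print(information_content("cat"), information_content("that"))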
Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization
In recent years, a number of artificial intelligence services have been developed, such as defect detection systems or diagnosis systems for customer services. Unfortunately, the core of these services is a black box whose underlying decision-making logic humans cannot understand, even though inspecting that logic is crucial before launching a commercial service. Our goal in this paper is to propose an analytic method of model explanation that is applicable to general classification models. To this end, we introduce the concept of a contribution matrix and an explanation embedding in a constraint space by using matrix factorization. We extract a rule-like model explanation from the contribution matrix with the help of nonnegative matrix factorization. To validate our method, we provide experimental results on open datasets as well as an industrial dataset of LTE network diagnosis, and the results show that our method extracts reasonable explanations.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
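The abstract above extracts rule-like explanations from a contribution matrix via nonnegative matrix factorization. The sketch below shows only that factorization step on a toy nonnegative matrix; the paper's construction of the contribution matrix and its constraint space are not reproduced, and the toy data and choice of 2 components are assumptions.

# NMF on a stand-in "contribution matrix" C (rows = samples, columns = features);
# the components can be read as candidate explanation patterns.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
C = rng.random((100, 6))               # illustrative nonnegative matrix

model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(C)             # per-sample weights over components
H = model.components_                  # per-component feature patterns

for k, pattern in enumerate(H):
    top = np.argsort(pattern)[::-1][:3]
    print(f"component {k}: top features {top.tolist()}")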
Generalization Bounds for Unsupervised Cross-Domain Mapping with WGANs
The recent empirical success of cross-domain mapping algorithms, between two domains that share common characteristics, is not well-supported by theoretical justifications. This lacuna is especially troubling, given the clear ambiguity in such mappings. We work with the adversarial training method called the Wasserstein GAN. We derive a novel generalization bound, which limits the risk between the learned mapping $h$ and the target mapping $y$, by a sum of two terms: (i) the risk between $h$ and the most distant alternative mapping that was learned by the same cross-domain mapping algorithm, and (ii) the minimal Wasserstein GAN divergence between the target domain and the domain obtained by applying a hypothesis $h^*$ on the samples of the source domain, where $h^*$ is a hypothesis selected by the same algorithm. The bound is directly related to Occam's razor and it encourages the selection of the minimal architecture that supports a small Wasserstein GAN divergence. From the bound, we derive algorithms for hyperparameter selection and early stopping in cross-domain mapping GANs. We also demonstrate a novel capability of estimating confidence in the mapping of every specific sample. Lastly, we show how non-minimal architectures can be effectively trained by an inverted knowledge distillation in which a minimal architecture is used to train a larger one, leading to higher quality outputs.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantum chaos in an electron-phonon bad metal
We calculate the scrambling rate $\lambda_L$ and the butterfly velocity $v_B$ associated with the growth of quantum chaos for a solvable large-$N$ electron-phonon system. We study a temperature regime in which the electrical resistivity of this system exceeds the Mott-Ioffe-Regel limit and increases linearly with temperature - a sign that there are no long-lived charged quasiparticles - although the phonons remain well-defined quasiparticles. The long-lived phonons determine $\lambda_L$, rendering it parametrically smaller than the theoretical upper-bound $\lambda_L \ll \lambda_{max}=2\pi T/\hbar$. Significantly, the chaos properties seem to be intrinsic - $\lambda_L$ and $v_B$ are the same for electronic and phononic operators. We consider two models - one in which the phonons are dispersive, and one in which they are dispersionless. In either case, we find that $\lambda_L$ is proportional to the inverse phonon lifetime, and $v_B$ is proportional to the effective phonon velocity. The thermal and chaos diffusion constants, $D_E$ and $D_L\equiv v_B^2/\lambda_L$, are always comparable, $D_E \sim D_L$. In the dispersive phonon case, the charge diffusion constant $D_C$ satisfies $D_L\gg D_C$, while in the dispersionless case $D_L \ll D_C$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Data-driven Optimal Transport Cost Selection for Distributionally Robust Optimization
Recently, (Blanchet, Kang, and Murthy 2016) showed that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems. The distributional uncertainty is defined as a neighborhood centered at the empirical distribution. We propose a methodology which learns such a neighborhood in a natural, data-driven way. We show rigorously that our framework encompasses adaptive regularization as a particular case. Moreover, we demonstrate empirically that our proposed methodology is able to improve upon a wide range of popular machine learning estimators.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Formally Secure Compilation of Unsafe Low-Level Components (Extended Abstract)
We propose a new formal criterion for secure compilation, providing strong security guarantees for components written in unsafe, low-level languages with C-style undefined behavior. Our criterion goes beyond recent proposals, which protect the trace properties of a single component against an adversarial context, to model dynamic compromise in a system of mutually distrustful components. Each component is protected from all the others until it receives an input that triggers an undefined behavior, causing it to become compromised and attack the remaining uncompromised components. To illustrate this model, we demonstrate a secure compilation chain for an unsafe language with buffers, procedures, and components, compiled to a simple RISC abstract machine with built-in compartmentalization. The protection guarantees offered by this abstract machine can be achieved at the machine-code level using either software fault isolation or tag-based reference monitoring. We are working on machine-checked proofs showing that this compiler satisfies our secure compilation criterion.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Detecting Potential Local Adversarial Examples for Human-Interpretable Defense
Machine learning models are increasingly used in industry to make decisions such as credit insurance approval. Some people may be tempted to manipulate specific variables, such as age or salary, to improve their chances of approval. In this ongoing work, we discuss the issue of detecting potential local adversarial examples in classical tabular data and make a first proposal: provide a human expert with the locally critical features for the classifier's decision, so that the provided information can be checked and fraud avoided.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The formation of magnetic depletions and flux annihilation due to reconnection in the heliosheath
The misalignment of the solar rotation axis and the magnetic axis of the Sun produces a periodic reversal of the Parker spiral magnetic field and the sectored solar wind. The compression of the sectors is expected to lead to reconnection in the heliosheath (HS). We present particle-in-cell simulations of the sectored HS that reflect the plasma environment along the Voyager 1 and 2 trajectories, specifically including unequal positive and negative azimuthal magnetic flux as seen in the Voyager data \citep{Burlaga03}. Reconnection proceeds on individual current sheets until islands on adjacent current layers merge. At late time bands of the dominant flux survive, separated by bands of deep magnetic field depletion. The ambient plasma pressure supports the strong magnetic pressure variation so that pressure is anti-correlated with magnetic field strength. There is little variation in the magnetic field direction across the boundaries of the magnetic depressions. At irregular intervals within the magnetic depressions are long-lived pairs of magnetic islands where the magnetic field direction reverses so that spacecraft data would reveal sharp magnetic field depressions with only occasional crossings with jumps in magnetic field direction. This is typical of the magnetic field data from the Voyager spacecraft \citep{Burlaga11,Burlaga16}. Voyager 2 data reveals that fluctuations in the density and magnetic field strength are anti-correlated in the sector zone as expected from reconnection but not in unipolar regions. The consequence of the annihilation of subdominant flux is a sharp reduction in the "number of sectors" and a loss in magnetic flux as documented from the Voyager 1 magnetic field and flow data \citep{Richardson13}.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimization Based Methods for Partially Observed Chaotic Systems
In this paper we consider filtering and smoothing of partially observed chaotic dynamical systems that are discretely observed, with an additive Gaussian noise in the observation. These models are found in a wide variety of real applications and include the Lorenz '96 model. In the context of a fixed observation interval $T$, observation time step $h$ and Gaussian observation variance $\sigma_Z^2$, we show under assumptions that the filter and smoother are well approximated by a Gaussian with high probability when $h$ and $\sigma^2_Z h$ are sufficiently small. Based on this result we show that the Maximum-a-posteriori (MAP) estimators are asymptotically optimal in mean square error as $\sigma^2_Z h$ tends to $0$. Given these results, we provide a batch algorithm for the smoother and filter, based on Newton's method, to obtain the MAP. In particular, we show that if the initial point is close enough to the MAP, then Newton's method converges to it at a fast rate. We also provide a method for computing such an initial point. These results contribute to the theoretical understanding of the widely used 4D-Var data assimilation method. Our approach is illustrated numerically on the Lorenz '96 model with a state vector of up to 1 million dimensions, with code running on the order of minutes. To our knowledge the results in this paper are the first of their type for this class of models.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
LongHCPulse: Long Pulse Heat Capacity on a Quantum Design PPMS
This paper presents LongHCPulse: software which enables heat capacity to be collected on a Quantum Design PPMS using a long-pulse method. This method, wherein heat capacity is computed from the time derivative of sample temperature over long (30 min) measurement times, is necessary for probing first order transitions and shortens the measurement time by a factor of five. LongHCPulse also includes plotting utilities based on the Matplotlib library. I illustrate the use of LongHCPulse with the example of data taken on ${\rm Yb_{2}Ti_{2}O_{7}}$, and compare the results to the standard semi-adiabatic method.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Calculation of hyperfine structure constants of small molecules using Z-vector method in the relativistic coupled-cluster framework
The Z-vector method in the relativistic coupled-cluster framework is employed to calculate the parallel and perpendicular components of the magnetic hyperfine structure constant of a few small alkaline earth hydrides (BeH, MgH, and CaH) and fluorides (MgF and CaF). We have compared our Z-vector results with the values calculated by the extended coupled-cluster (ECC) method reported in Phys. Rev. A 91, 022512 (2015). All these results are compared with the available experimental values. The Z-vector results are found to be in better agreement with the experimental values than the ECC values.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Autoregressive Point-Processes as Latent State-Space Models: a Moment-Closure Approach to Fluctuations and Autocorrelations
Modeling and interpreting spike train data is a task of central importance in computational neuroscience, with significant translational implications. Two popular classes of data-driven models for this task are autoregressive Point Process Generalized Linear models (PPGLM) and latent State-Space models (SSM) with point-process observations. In this letter, we derive a mathematical connection between these two classes of models. By introducing an auxiliary history process, we represent exactly a PPGLM in terms of a latent, infinite dimensional dynamical system, which can then be mapped onto an SSM by basis function projections and moment closure. This representation provides a new perspective on widely used methods for modeling spike data, and also suggests novel algorithmic approaches to fitting such models. We illustrate our results on a phasic bursting neuron model, showing that our proposed approach provides an accurate and efficient way to capture neural dynamics.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Phase transitions of the dimerized Kane-Mele model with/without the strong interaction
The dimerized Kane-Mele model with/without the strong interaction is studied using analytical methods. The boundary of the topological phase transition of the model without strong interaction is obtained. Our results show that the occurrence of the transition depends only on the dimerization parameter. From the one-particle spectrum, we obtain the complete phase diagram, including the quantum spin Hall (QSH) state and the topologically trivial insulator. Then, using different mean-field methods, we investigate the Mott transition and the magnetic transition of the strongly correlated dimerized Kane-Mele model. In the region between the two transitions, the topological Mott insulator (TMI), with characters of both Mott insulators and topological phases, may be the most interesting phase. In this work, the effects of the hopping anisotropy and the Hubbard interaction U on the boundaries of the two transitions are studied in detail. The complete phase diagram of the dimerized Kane-Mele-Hubbard model is also obtained. Quantum fluctuations can have extremely important influences on a quantum system; however, the investigations in this work are carried out within the mean-field framework, and the effects of fluctuations in this model will be discussed in future work.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Unified Approach to Interpreting Model Predictions
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
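The SHAP abstract above builds on additive feature importance measures with Shapley-style uniqueness properties. The sketch below illustrates that idea with a brute-force, exponential-time Shapley computation on a toy model; replacing "missing" features by a fixed baseline is a simplifying assumption here and is not how the SHAP library's optimized estimators work.

# Exact Shapley values for a toy model: each feature's attribution is the
# weighted average of its marginal contribution over all feature subsets.
from itertools import combinations
from math import factorial

def model(x):                  # toy model: any function of a feature vector
    return 3.0 * x[0] + 2.0 * x[1] * x[2]

def shapley_values(model, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# attributions sum to the gap between the prediction and the baseline prediction
print(phi, sum(phi), model(x) - model(baseline))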
Approximate Value Iteration for Risk-aware Markov Decision Processes
We consider large-scale Markov decision processes (MDPs) with a risk measure of variability in cost, under the risk-aware MDPs paradigm. Previous studies showed that risk-aware MDPs, based on a minimax approach to handling risk, can be solved using dynamic programming for small to medium sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this paper, we employ an approximate dynamic programming approach, and develop a family of simulation-based algorithms to approximately solve large-scale risk-aware MDPs. In parallel, we develop a unified convergence analysis technique to derive sample complexity bounds for this new family of algorithms.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Dataflow Matrix Machines as a Model of Computations with Linear Streams
We give an overview of dataflow matrix machines as a Turing-complete generalization of recurrent neural networks and as a programming platform. We describe a vector space of finite prefix trees with numerical leaves, which allows us to combine the expressive power of dataflow matrix machines with the simplicity of traditional recurrent neural networks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Attention-based Vocabulary Selection for NMT Decoding
Neural Machine Translation (NMT) models usually use large target vocabulary sizes to capture most of the words in the target language. The vocabulary size is a big factor when decoding new sentences, as the final softmax layer normalizes over all possible target words. To address this problem, it is common to restrict the target vocabulary with candidate lists based on the source sentence. Usually, the candidate lists combine the output of an external word-to-word aligner, phrase table entries, and the most frequent words. In this work, we propose a simple and yet novel approach to learn candidate lists directly from the attention layer during NMT training. The candidate lists are highly optimized for the current NMT model and do not need any external computation of the candidate pool. We show a significant decoding speedup compared with using the entire vocabulary, without losing any translation quality, for two language pairs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
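To make the candidate-list idea in the abstract above concrete, here is a minimal sketch: during training, record which target words receive the most attention from each source word; at decoding time, restrict the softmax to the union of the top candidates of the source words present in the sentence. The toy example, the argmax-attention rule, and the data structures are illustrative assumptions, not the paper's system.

# Accumulate attention-derived word-to-word candidates, then build a
# restricted target vocabulary for a given source sentence.
from collections import defaultdict, Counter

candidates = defaultdict(Counter)   # source word -> counts of attended target words

def accumulate(source_tokens, target_tokens, attention):
    # attention[t][s] = weight on source position s when producing target position t
    for t, tgt in enumerate(target_tokens):
        s = max(range(len(source_tokens)), key=lambda j: attention[t][j])
        candidates[source_tokens[s]][tgt] += 1

def candidate_list(source_tokens, top_k=3):
    allowed = set()
    for src in source_tokens:
        allowed.update(w for w, _ in candidates[src].most_common(top_k))
    return allowed

# toy "training" example with a hand-written attention matrix
accumulate(["das", "haus"], ["the", "house"],
           attention=[[0.9, 0.1],    # "the"   attends to "das"
                      [0.2, 0.8]])   # "house" attends to "haus"

print(candidate_list(["das", "haus"]))   # decoding softmax restricted to this set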
Reconstruction by Calibration over Tensors for Multi-Coil Multi-Acquisition Balanced SSFP Imaging
Purpose: To develop a rapid imaging framework for balanced steady-state free precession (bSSFP) that jointly reconstructs undersampled data (by a factor of R) across multiple coils (D) and multiple acquisitions (N). To devise a multi-acquisition coil compression technique for improved computational efficiency. Methods: The bSSFP image for a given coil and acquisition is modeled to be modulated by a coil sensitivity and a bSSFP profile. The proposed reconstruction by calibration over tensors (ReCat) recovers missing data by tensor interpolation over the coil and acquisition dimensions. Coil compression is achieved using a new method based on multilinear singular value decomposition (MLCC). ReCat is compared with iterative self-consistent parallel imaging (SPIRiT) and profile encoding (PE-SSFP) reconstructions. Results: Compared to parallel imaging or profile-encoding methods, ReCat attains sensitive depiction of high-spatial-frequency information even at higher R. In the brain, ReCat improves peak SNR (PSNR) by 1.1$\pm$1.0 dB over SPIRiT and by 0.9$\pm$0.3 dB over PE-SSFP (mean$\pm$std across subjects; average for N=2-8, R=8-16). Furthermore, reconstructions based on MLCC achieve 0.8$\pm$0.6 dB higher PSNR compared to those based on geometric coil compression (GCC) (average for N=2-8, R=4-16). Conclusion: ReCat is a promising acceleration framework for banding-artifact-free bSSFP imaging with high image quality; and MLCC offers improved computational efficiency for tensor-based reconstructions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spectral Sparsification of Simplicial Complexes for Clustering and Label Propagation
As a generalization of the use of graphs to describe pairwise interactions, simplicial complexes can be used to model higher-order interactions between three or more objects in complex systems. There has been a recent surge in activity for the development of data analysis methods applicable to simplicial complexes, including techniques based on computational topology, higher-order random processes, generalized Cheeger inequalities, isoperimetric inequalities, and spectral methods. In particular, spectral learning methods (e.g. label propagation and clustering) that directly operate on simplicial complexes represent a new direction for analyzing such complex datasets. To apply spectral learning methods to massive datasets modeled as simplicial complexes, we develop a method for sparsifying simplicial complexes that preserves the spectrum of the associated Laplacian matrices. We show that the theory of Spielman and Srivastava for the sparsification of graphs extends to simplicial complexes via the up Laplacian. In particular, we introduce a generalized effective resistance for simplices, provide an algorithm for sparsifying simplicial complexes at a fixed dimension, and give a specific version of the generalized Cheeger inequality for weighted simplicial complexes. Finally, we introduce higher-order generalizations of spectral clustering and label propagation for simplicial complexes and demonstrate via experiments the utility of the proposed spectral sparsification method for these applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
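The abstract above extends Spielman-Srivastava sparsification to simplicial complexes via the up Laplacian. The sketch below shows only the ordinary graph case that is being generalized: sample edges with probability proportional to weight times effective resistance and reweight them. The toy graph and sample size are illustrative assumptions.

# Effective-resistance sampling for a small weighted graph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph on 4 vertices
weights = np.ones(len(edges))
n = 4

# weighted graph Laplacian
L = np.zeros((n, n))
for (u, v), w in zip(edges, weights):
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

Lpinv = np.linalg.pinv(L)

def eff_res(u, v):
    # effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v)
    d = np.zeros(n); d[u], d[v] = 1.0, -1.0
    return d @ Lpinv @ d

probs = np.array([w * eff_res(u, v) for (u, v), w in zip(edges, weights)])
probs /= probs.sum()

rng = np.random.default_rng(0)
samples = rng.choice(len(edges), size=8, p=probs)       # sample with replacement
sparse_w = np.zeros(len(edges))
for s in samples:
    sparse_w[s] += weights[s] / (len(samples) * probs[s])  # unbiased reweighting

print(list(zip(edges, sparse_w.round(2))))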
BHK mirror symmetry for K3 surfaces with non-symplectic automorphism
In this paper we consider the class of K3 surfaces defined as hypersurfaces in weighted projective space, and admitting a non-symplectic automorphism of non-prime order, excluding the orders 4, 8, and 12. We show that on these surfaces the Berglund-Hübsch-Krawitz mirror construction and mirror symmetry for lattice polarized K3 surfaces constructed by Dolgachev agree; that is, both versions of mirror symmetry define the same mirror K3 surface.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Automatic Rule Extraction from Long Short Term Memory Networks
Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight into the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state-of-the-art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On the validity of parametric block correlation matrices with constant within and between group correlations
We consider the set $B_p$ of parametric block correlation matrices with p blocks of various (and possibly different) sizes, whose diagonal blocks are compound symmetry (CS) correlation matrices and whose off-diagonal blocks are constant matrices. Such matrices appear in probabilistic models on categorical data, when the levels are partitioned into p groups, assuming a constant correlation within a group and a constant correlation for each pair of groups. We obtain two necessary and sufficient conditions for positive definiteness of elements of $B_p$. Firstly, we consider the block average map $\phi$, which consists of replacing each block by its mean value. We prove that for any $A \in B_p$, $A$ is positive definite if and only if $\phi(A)$ is positive definite. Hence it is equivalent to check the validity of the covariance matrix of group means, which only depends on the number of groups and not on their sizes. This theorem can be extended to a wider set of block matrices. Secondly, we consider the subset of $B_p$ for which the between-group correlation is the same for all pairs of groups. Positive definiteness then comes down to finding the positive definite interval of a matrix pencil on $S_p$. We obtain a simple characterization by localizing the roots of the determinant in terms of the within-group correlation values.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
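To make the object studied in the abstract above concrete, here is a minimal sketch that builds a two-group parametric block correlation matrix (compound-symmetry diagonal blocks, constant off-diagonal block) and checks positive definiteness numerically. The group sizes and correlation values are illustrative assumptions; the sketch does not reproduce the paper's characterization.

# Build a block correlation matrix and test positive definiteness.
import numpy as np

def block_corr(sizes, within, between):
    """Correlation matrix with CS diagonal blocks and constant off-diagonal blocks."""
    n = sum(sizes)
    A = np.full((n, n), between)
    start = 0
    for size, rho in zip(sizes, within):
        block = np.full((size, size), rho)
        np.fill_diagonal(block, 1.0)
        A[start:start + size, start:start + size] = block
        start += size
    return A

A = block_corr(sizes=[3, 4], within=[0.6, 0.4], between=0.2)
print("positive definite:", bool(np.all(np.linalg.eigvalsh(A) > 0)))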
Two-dimensional off-lattice Boltzmann model for van der Waals fluids with variable temperature
We develop a two-dimensional Lattice Boltzmann model for liquid-vapour systems with variable temperature. Our model is based on a single particle distribution function expanded with respect to the full-range Hermite polynomials. In order to ensure the recovery of the hydrodynamic equations for thermal flows, we use a fourth order expansion together with a set of momentum vectors with 25 elements whose Cartesian projections are the roots of the Hermite polynomial of order Q = 5. Since these vectors are off-lattice, a fifth-order projection scheme is used to evolve the corresponding set of distribution functions. A fourth order scheme employing a 49 point stencil is used to compute the gradient operators in the force term that ensures the liquid-vapour phase separation, and diffuse reflection boundary conditions are used on the walls. We demonstrate at least fourth order convergence with respect to the lattice spacing in the contexts of shear and longitudinal wave propagation through the van der Waals fluid. For the planar interface, fourth order convergence can be seen at small enough lattice spacings, while the effect of the spurious velocity on the temperature profile is found to be smaller than 1.0%, even when $T_w \simeq 0.7\,T_c$. We further validate our scheme by considering the Laplace pressure test. Galilean invariance is shown to be preserved up to second order with respect to the background velocity. We further investigate the liquid-vapour phase separation between two parallel walls kept at a constant temperature $T_w$ smaller than the critical temperature $T_c$ and discuss the main features of this process.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Minimal coloring number on minimal diagrams for $\mathbb{Z}$-colorable links
It was shown that any $\mathbb{Z}$-colorable link has a diagram which admits a non-trivial $\mathbb{Z}$-coloring with at most four colors. In this paper, we consider minimal numbers of colors for non-trivial $\mathbb{Z}$-colorings on minimal diagrams of $\mathbb{Z}$-colorable links. We show, for any positive integer $N$, there exists a minimal diagram of a $\mathbb{Z}$-colorable link such that any $\mathbb{Z}$-coloring on the diagram has at least $N$ colors. On the other hand, it is shown that certain $\mathbb{Z}$-colorable torus links have minimal diagrams admitting $\mathbb{Z}$-colorings with only four colors.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Automatic Gradient Boosting
Automatic machine learning performs predictive modeling with high-performing machine learning tools without human interference. This is achieved by making machine learning applications parameter-free, i.e. only a dataset is provided while the complete model selection and model building process is handled internally through (often meta) optimization. Projects like Auto-WEKA and auto-sklearn aim to solve the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem, resulting in huge configuration spaces. However, for most real-world applications, optimizing over only a few key learning algorithms can be not only sufficient, but also potentially beneficial. The latter becomes apparent when one considers that models have to be validated, explained, deployed and maintained. Here, less complex models are often preferred for validation or efficiency reasons, or are even a strict requirement. Automatic gradient boosting takes this idea one step further, using only gradient boosting as a single learning algorithm in combination with model-based hyperparameter tuning, threshold optimization and encoding of categorical features. We introduce this general framework as well as a concrete implementation called autoxgboost. It is compared to current AutoML projects on 16 datasets and, despite its simplicity, achieves comparable results on about half of the datasets and performs best on two.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Speeding up Memory-based Collaborative Filtering with Landmarks
Recommender systems play an important role in many scenarios where users are overwhelmed with too many choices to make. In this context, Collaborative Filtering (CF) arises by providing a simple and widely used approach for personalized recommendation. Memory-based CF algorithms mostly rely on similarities between pairs of users or items, which are subsequently employed in classifiers like k-Nearest Neighbor (kNN) to generalize for unknown ratings. A major issue with this approach is building the similarity matrix. Depending on the dimensionality of the rating matrix, the similarity computations may become computationally intractable. To overcome this issue, we propose to represent users by their distances to preselected users, namely landmarks. This procedure allows us to drastically reduce the computational cost associated with the similarity matrix. We evaluated our proposal on two distinct databases, and the results showed that our method consistently and considerably outperformed eight CF algorithms (including both memory-based and model-based) in terms of computational performance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
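The abstract above replaces the full n-by-n user similarity matrix by distances to a few landmark users. A minimal sketch of that idea follows; the random rating matrix, cosine similarity, and random landmark choice are illustrative assumptions, not the paper's exact procedure.

# Represent each user by similarity to m landmark users, then find
# neighbours in the reduced n-by-m space instead of the n-by-n one.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(1000, 50)).astype(float)   # users x items

m = 20
landmark_idx = rng.choice(ratings.shape[0], size=m, replace=False)
landmarks = ratings[landmark_idx]

def cosine(a, B):
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

# n x m representation: each user described by similarity to the landmarks
rep = np.vstack([cosine(u, landmarks) for u in ratings])

# nearest users to user 0 in the reduced space (as used later by a kNN predictor)
dists = np.linalg.norm(rep - rep[0], axis=1)
print("nearest users to user 0:", np.argsort(dists)[1:6])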
Spatio-temporal variations in the urban rhythm: the travelling waves of crime
In the last decades, the notion that cities are in a state of equilibrium with a centralised organisation has given place to the viewpoint of cities in disequilibrium, organised from the bottom up. In this perspective, cities are evolving systems that exhibit emergent phenomena built from local decisions. While urban evolution promotes the emergence of positive social phenomena such as the formation of innovation hubs and the increase in cultural diversity, it also yields negative phenomena such as increases in criminal activity. Yet, we are still far from understanding the driving mechanisms of these phenomena. In particular, approaches to analyse urban phenomena are limited in scope by neglecting both temporal non-stationarity and spatial heterogeneity. In the case of criminal activity, we have known for more than a century that crime peaks during specific times of the year, but the literature still fails to characterise the mobility of crime. Here we develop an approach to describe the spatial, temporal, and periodic variations in urban quantities. With crime data from 12 cities, we characterise how the periodicity of crime varies spatially across the city over time. We confirm one-year criminal cycles and show that this periodicity occurs unevenly across the city. These `waves of crime' keep travelling across the city: while cities have a stable number of regions with a circannual period, the regions exhibit non-stationary series. Our findings support the concept of cities in constant change, influencing urban phenomena---in agreement with the notion of cities not in equilibrium.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Smith-Purcell Radiation
The simplest model of a magnetized, infinitely thin electron beam is considered. The basic equations that describe the periodic solutions of a self-consistent system of coupled Maxwell equations and equations for the medium are obtained.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dynamics of Bose-Einstein condensate with account of pair correlations
The system of dynamic equations for Bose-Einstein condensate at zero temperature with account of pair correlations is obtained. The spectrum of small oscillations of the condensate in a spatially homogeneous state is explored. It is shown that this spectrum has two branches: the sound wave branch and branch with an energy gap.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the generalization of Erdős-Vincze's theorem about the approximation of closed convex plane curves by polyellipses
A polyellipse is a curve in the Euclidean plane all of whose points have the same sum of distances from finitely many given points (focuses). The classical version of Erdős-Vincze's theorem states that regular triangles cannot be represented as the Hausdorff limit of polyellipses even if the number of focuses is allowed to be arbitrarily large. In other words, the topological closure of the set of polyellipses with respect to the Hausdorff distance does not contain any regular triangle, and we have a negative answer to the problem posed by E. Vázsonyi (Weissfeld) about the approximation of closed convex plane curves by polyellipses. It is the additive version of the approximation of simple closed plane curves by polynomial lemniscates all of whose points have the same product of distances from finitely many given points (focuses). Here we generalize the classical version of Erdős-Vincze's theorem to regular polygons in the plane. We conclude that the error of the approximation tends to zero as the number of vertices of the regular polygon tends to infinity. The decreasing tendency of the approximation error gives the idea to construct curves in the topological closure of the set of polyellipses. If we use integration to compute the average distance of a point from a given (focal) set in the plane, then the curves all of whose points have the same average distance from the focal set can be given as the Hausdorff limit of polyellipses corresponding to partial sums.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
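The defining property used in the abstract above (equal sum of distances to finitely many focuses) is easy to probe numerically. The sketch below locates a few points of one polyellipse on a coarse grid; the focuses, the level value, and the grid-search tolerance are illustrative assumptions.

# Points p with sum_i |p - f_i| = level lie on a polyellipse with focuses f_i.
import numpy as np

focuses = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
level = 3.0

def distance_sum(p):
    return np.sum(np.linalg.norm(focuses - p, axis=1))

# crude scan of a grid for points whose distance sum is close to the level
xs = np.linspace(-1.5, 2.5, 200)
ys = np.linspace(-1.5, 2.5, 200)
points_on_curve = [(x, y) for x in xs for y in ys
                   if abs(distance_sum(np.array([x, y])) - level) < 1e-2]
print(len(points_on_curve), "grid points close to the polyellipse")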
Improved self-energy correction method for accurate and efficient band structure calculation
The LDA-1/2 method for self-energy correction is a powerful tool for calculating accurate band structures of semiconductors, while keeping the computational load as low as standard LDA. Nevertheless, controversies remain regarding the arbitrariness of choice between (1/2)e and (1/4)e charge stripping from the atoms in group IV semiconductors, the incorrect direct band gap predicted for Ge, and inaccurate band structures for III-V semiconductors. Here we propose an improved method named shell-LDA-1/2 (shLDA-1/2 for short), which is based on a shell-like trimming function for the self-energy potential. With the new approach, we obtained accurate band structures for group IV, and for III-V and II-VI compound semiconductors. In particular, we reproduced the complete band structure of Ge in good agreement with experimental data. Moreover, we have defined clear rules for choosing when (1/2)e or (1/4)e charge ought to be stripped in covalent semiconductors, and for identifying materials for which shLDA-1/2 is expected to fail.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Magnetic domains in thin ferromagnetic films with strong perpendicular anisotropy
We investigate the scaling of the ground state energy and optimal domain patterns in thin ferromagnetic films with strong uniaxial anisotropy and the easy axis perpendicular to the film plane. Starting from the full three-dimensional micromagnetic model, we identify the critical scaling where the transition from single domain to multidomain ground states such as bubble or maze patterns occurs. Furthermore, we analyze the asymptotic behavior of the energy in two regimes separated by a transition. In the single domain regime, the energy $\Gamma$-converges towards a much simpler two-dimensional and local model. In the second regime, we derive the scaling of the minimal energy and deduce a scaling law for the typical domain size.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Learning a Robust Society of Tracking Parts
Object tracking is an essential task in computer vision that has been studied since the early days of the field. Being able to follow objects that undergo different transformations in the video sequence, including changes in scale, illumination, shape and occlusions, makes the problem extremely difficult. One of the real challenges is to keep track of the changes in object appearance and not drift towards the background clutter. Different from previous approaches, we obtain robustness against background with a tracker model that is composed of many different parts. They are classifiers that respond at different scales and locations. The tracker system functions as a society of parts, each having its own role and level of credibility. Reliable classifiers decide the tracker's next move, while newcomers are first monitored before gaining the necessary level of reliability to participate in the decision process. Some parts that lose their consistency are rejected, while others that show consistency for a sufficiently long time are promoted to permanent roles. The tracker system, as a whole, could also go through different phases, from the usual, normal functioning to states of weak agreement and even crisis. The tracker system has different governing rules in each state. What truly distinguishes our work from others is not necessarily the strength of individual tracking parts, but the way in which they work together and build a strong and robust organization. We also propose an efficient way to learn simultaneously many tracking parts, with a single closed-form formulation. We obtain a fast and robust tracker with state-of-the-art performance on the challenging OTB50 dataset.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A direct proof of dimerization in a family of SU(n)-invariant quantum spin chains
We study the family of spin-S quantum spin chains with a nearest neighbor interaction given by the negative of the singlet projection operator. Using a random loop representation of the partition function in the limit of zero temperature and standard techniques of classical statistical mechanics, we prove dimerization for all sufficiently large values of S.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
An accelerated splitting algorithm for radio-interferometric imaging: when natural and uniform weighting meet
Next generation radio-interferometers, like the Square Kilometre Array, will acquire tremendous amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of large-scale data sets is of great importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach can incorporate into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Using the sampling density information greatly accelerates the convergence speed, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimises the sensitivity of the reconstruction. In connection to CLEAN, our approach can be seen as including in the same algorithmic structure both natural and uniform weighting, thereby simultaneously optimising both the resolution and the sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term, that generalises the projection onto the $\ell_2$ ball where the noise lives for naturally weighted data, to the projection onto a generalised ellipsoid incorporating sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy to solve the convex imaging problem with data fidelity dictated only by noise statistics. We showcase through simulations with realistic sampling patterns the acceleration obtained using the preconditioning. We also investigate the algorithm performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multi-scale CLEAN, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
BaFe2(As1-xPx)2 (x = 0.22-0.42) thin films grown on practical metal-tape substrates and their critical current densities
We optimized the substrate temperature (Ts) and phosphorus concentration (x) of BaFe2(As1-xPx)2 films grown by pulsed laser deposition on practical metal-tape substrates from the viewpoints of crystallinity, superconducting critical temperature (Tc), and critical current density (Jc). It was found that the optimum Ts and x values are 1050 degree C and x = 0.28, respectively. The optimized film exhibits Tc_onset = 26.6 K and Tc_zero = 22.4 K along with a high self-field Jc at 4 K (~1 MA/cm2) and relatively isotropic Jc under magnetic fields up to 9 T. Unexpectedly, we found that lower crystallinity samples, which were grown at a higher Ts of 1250 degree C than the optimized Ts = 1050 degree C, exhibit higher Jc along the ab plane under high magnetic fields than the optimized samples. The presence of horizontal defects that act as strong vortex pinning centers, such as stacking faults, is a possible origin of the high Jc values in the poor crystallinity samples.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
How does the accuracy of interatomic force constants affect the prediction of lattice thermal conductivity
Solving the Peierls-Boltzmann transport equation with interatomic force constants (IFCs) from first-principles calculations has been a widely used method for predicting the lattice thermal conductivity of three-dimensional materials. With the increasing research interest in two-dimensional materials, this method is directly applied to them, but different works show quite different results. In this work, a classical potential was used to investigate the effect of the accuracy of the IFCs on the predicted thermal conductivity. Inaccuracies were introduced into the third-order IFCs by generating errors in the input forces. When the force error is at the typical level of first-principles calculations, the calculated thermal conductivity can be quite different from the benchmark value. It is found that imposing translational invariance conditions cannot always guarantee a better thermal conductivity result. It is also shown that Grüneisen parameters cannot be used as a necessary and sufficient criterion for the accuracy of third-order IFCs with respect to predicting thermal conductivity.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Mesh-to-raster based non-rigid registration of multi-modal images
Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh, while the patient data are provided by CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application, and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem where the objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Vision-based Obstacle Removal System for Autonomous Ground Vehicles Using a Robotic Arm
Over the past few years, the use of camera-equipped robotic platforms for data collection and visual monitoring applications has grown exponentially. Cluttered construction sites with many objects (e.g., bricks, pipes, etc.) on the ground are challenging environments for a mobile unmanned ground vehicle (UGV) to navigate. To address this issue, this study presents a mobile UGV equipped with a stereo camera and a robotic arm that can remove obstacles along the UGV's path. To achieve this objective, the surrounding environment is captured by the stereo camera and obstacles are detected. The obstacle's location relative to the UGV is sent to the robotic arm module through the Robot Operating System (ROS). Then, the robotic arm picks up and removes the obstacle. The proposed method will greatly enhance the degree of automation and the frequency of data collection for construction monitoring. The proposed system is validated through two case studies. The results successfully demonstrate the detection and removal of obstacles, serving as one of the enabling factors for developing an autonomous UGV with various construction operating applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Polyatomic trilobite Rydberg molecules in a dense random gas
Trilobites are exotic giant dimers with enormous dipole moments. They consist of a Rydberg atom and a distant ground-state atom bound together by short-range electron-neutral attraction. We show that highly polar, polyatomic trilobite states unexpectedly persist and thrive in a dense ultracold gas of randomly positioned atoms. This is caused by perturbation-induced quantum scarring and the localization of electron density on randomly occurring atom clusters. At certain densities these states also mix with an s-state, overcoming selection rules that hinder the photoassociation of ordinary trilobites.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Does warm debris dust stem from asteroid belts?
Many debris discs reveal a two-component structure, with a cold outer and a warm inner component. While the former are likely massive analogues of the Kuiper belt, the origin of the latter is still a matter of debate. In this work we investigate whether the warm dust may be a signature of asteroid belt analogues. In the scenario tested here the current two-belt architecture stems from an originally extended protoplanetary disc, in which planets have opened a gap separating it into the outer and inner discs which, after the gas dispersal, experience a steady-state collisional decay. This idea is explored with an analytic collisional evolution model for a sample of 225 debris discs from a Spitzer/IRS catalogue that are likely to possess a two-component structure. We find that the vast majority of systems (220 out of 225, or 98%) are compatible with this scenario. For their progenitors, original protoplanetary discs, we find an average surface density slope of $-0.93\pm0.06$ and an average initial mass of $\left(3.3^{+0.4}_{-0.3}\right)\times 10^{-3}$ solar masses, both of which are in agreement with the values inferred from submillimetre surveys. However, dust production by short-period comets and - more rarely - inward transport from the outer belts may be viable, and not mutually excluding, alternatives to the asteroid belt scenario. The remaining five discs (2% of the sample: HIP 11486, HIP 23497, HIP 57971, HIP 85790, HIP 89770) harbour inner components that appear inconsistent with dust production in an "asteroid belt." Warm dust in these systems must either be replenished from cometary sources or represent an aftermath of a recent rare event, such as a major collision or planetary system instability.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Multiple Roots Phenomenon in Maximum Likelihood Estimation for Factor Analysis
Multiple root estimation problems in statistical inference arise in many contexts in the literature. In the context of maximum likelihood estimation, the existence of multiple roots causes uncertainty in the computation of maximum likelihood estimators using hill-climbing algorithms, and consequent difficulties in the resulting statistical inference. In this paper, we study the multiple roots phenomenon in maximum likelihood estimation for factor analysis. We prove that the corresponding likelihood equations have uncountably many feasible solutions even in the simplest cases. For the case in which the observed data are two-dimensional and the unobserved factor scores are one-dimensional, we prove that the solutions to the likelihood equations form a one-dimensional real curve.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Group-like projections for locally compact quantum groups
Let $\mathbb{G}$ be a locally compact quantum group. We give a 1-1 correspondence between group-like projections in $L^\infty(\mathbb{G})$ preserved by the scaling group and idempotent states on the dual quantum group $\widehat{\mathbb{G}}$. As a byproduct we give a simple proof that normal integrable coideals in $L^\infty(\mathbb{G})$ which are preserved by the scaling group are in 1-1 correspondence with compact quantum subgroups of $\mathbb{G}$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Exciton-phonon interaction in the strong coupling regime in hexagonal boron nitride
The temperature-dependent optical response of excitons in semiconductors is controlled by the exciton-phonon interaction. When the exciton-lattice coupling is weak, the excitonic line has a Lorentzian profile resulting from motional narrowing, with a width increasing linearly with the lattice temperature $T$. In contrast, when the exciton-lattice coupling is strong, the lineshape is Gaussian with a width increasing sublinearly with the lattice temperature, proportional to $\sqrt{T}$. While the former case is commonly reported in the literature, here the latter is reported for the first time, for hexagonal boron nitride. Thus the theoretical predictions of Toyozawa [Progr. Theor. Phys. 20, 53 (1958)] are supported by demonstrating that the exciton-phonon interaction is in the strong coupling regime in this Van der Waals crystal.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Exact lowest-Landau-level solutions for vortex precession in Bose-Einstein condensates
The Lowest Landau Level (LLL) equation emerges as an accurate approximation for a class of dynamical regimes of Bose-Einstein Condensates (BEC) in two-dimensional isotropic harmonic traps in the limit of weak interactions. Building on recent developments in the field of spatially confined extended Hamiltonian systems, we find a fully nonlinear solution of this equation representing periodically modulated precession of a single vortex. Motions of this type have been previously seen in numerical simulations and experiments at moderately weak coupling. Our work provides the first controlled analytic prediction for trajectories of a single vortex, suggests new targets for experiments, and opens up the prospect of finding analytic multi-vortex solutions.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation
In this work we present a novel framework that uses deep learning to predict object feature points that are out-of-view in the input image. This system was developed with the application of model-based tracking in mind, particularly in the case of autonomous inspection robots, where only partial views of the object are available. Out-of-view prediction is enabled by applying scaling to the feature point labels during network training. This is combined with a recurrent neural network architecture designed to provide the final prediction layers with rich feature information from across the spatial extent of the input image. To show the versatility of these out-of-view predictions, we describe how to integrate them in both a particle filter tracker and an optimisation based tracker. To evaluate our work we compared our framework with one that predicts only points inside the image. We show that as the amount of the object in view decreases, being able to predict outside the image bounds adds robustness to the final pose estimation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Simple PTAS for the Dual Bin Packing Problem and Advice Complexity of Its Online Version
Recently, Renault (2016) studied the dual bin packing problem in the per-request advice model of online algorithms. He showed that $O(1/\epsilon)$ advice bits for each input item allow approximating the dual bin packing problem online to within a factor of $1+\epsilon$. Renault asked about the advice complexity of dual bin packing in the tape-advice model of online algorithms. We make progress on this question. Let $s$ be the maximum bit size of an input item weight. We present a conceptually simple online algorithm that with total advice $O\left(\frac{s + \log n}{\epsilon^2}\right)$ approximates the dual bin packing to within a $1+\epsilon$ factor. To this end, we describe and analyze a simple offline PTAS for the dual bin packing problem. Although a PTAS for a more general problem was known prior to our work (Kellerer 1999, Chekuri and Khanna 2006), our PTAS is arguably simpler to state and analyze. As a result, we could easily adapt our PTAS to obtain the advice-complexity result. We also consider whether the dependence on $s$ is necessary in our algorithm. We show that if $s$ is unrestricted then for small enough $\epsilon > 0$ obtaining a $1+\epsilon$ approximation to the dual bin packing requires $\Omega_\epsilon(n)$ bits of advice. To establish this lower bound we analyze an online reduction that preserves the advice complexity and approximation ratio from the binary separation problem due to Boyar et al. (2016). We define two natural advice complexity classes that capture a distinction similar to the distinction in the Turing machine world between pseudo-polynomial time algorithms and polynomial time algorithms. Our results on the dual bin packing problem imply the separation of the two classes in the advice complexity world.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Constructive Néron Desingularization of algebras with big smooth locus
An algorithmic proof of the General Néron Desingularization theorem and its uniform version is given for morphisms with big smooth locus. This generalizes the results for the one-dimensional case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
We study the problem of list-decodable Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. {\bf List-Decodable Mean Estimation.} Fix any $d \in \mathbb{Z}_+$ and $0< \alpha <1/2$. We design an algorithm with runtime $O (\mathrm{poly}(n/\alpha)^{d})$ that outputs a list of $O(1/\alpha)$ many candidate vectors such that with high probability one of the candidates is within $\ell_2$-distance $O(\alpha^{-1/(2d)})$ from the true mean. The only previous algorithm for this problem achieved error $\tilde O(\alpha^{-1/2})$ under second moment conditions. For $d = O(1/\epsilon)$, our algorithm runs in polynomial time and achieves error $O(\alpha^{\epsilon})$. We also give a Statistical Query lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible. {\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform mixture of $k$ identity covariance Gaussians we obtain: For any $\epsilon>0$, if the pairwise separation between the means is at least $\Omega(k^{\epsilon}+\sqrt{\log(1/\delta)})$, our algorithm learns the unknown parameters within accuracy $\delta$ with sample complexity and running time $\mathrm{poly} (n, 1/\delta, (k/\epsilon)^{1/\epsilon})$. The previously best known polynomial time algorithm required separation at least $k^{1/4} \mathrm{polylog}(k/\delta)$. Our main technical contribution is a new technique, using degree-$d$ multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted.
1
0
1
1
0
0
Complexity Dichotomies for the Minimum F-Overlay Problem
For a (possibly infinite) fixed family of graphs F, we say that a graph G overlays F on a hypergraph H if V(H) is equal to V(G) and the subgraph of G induced by every hyperedge of H contains some member of F as a spanning subgraph. While it is easy to see that the complete graph on |V(H)| overlays F on a hypergraph H whenever the problem admits a solution, the Minimum F-Overlay problem asks for such a graph with the minimum number of edges. This problem allows us to generalize some natural problems which may arise in practice. For instance, if the family F contains all connected graphs, then Minimum F-Overlay corresponds to the Minimum Connectivity Inference problem (also known as the Subset Interconnection Design problem), introduced for the low-resolution reconstruction of macro-molecular assembly in structural biology and for the design of networks. Our main contribution is a strong dichotomy result regarding the polynomial vs. NP-hard status with respect to the considered family F. Roughly speaking, we show that the easy cases one can think of (e.g. when edgeless graphs of the right sizes are in F, or if F contains only cliques) are the only families giving rise to a polynomial problem: all others are NP-complete. We then investigate the parameterized complexity of the problem and give similar sufficient conditions on F that give rise to W[1]-hard, W[2]-hard or FPT problems when the parameter is the size of the solution. This yields an FPT/W[1]-hard dichotomy for a relaxed problem, where every hyperedge of H must contain some member of F as a (not necessarily spanning) subgraph.
1
0
0
0
0
0
Asymptotic behavior of 3-D stochastic primitive equations of large-scale moist atmosphere with additive noise
Using a new and general method, we prove the existence of a random attractor for the three-dimensional stochastic primitive equations defined on a manifold $\mathbb{D}\subset\mathbb{R}^3$, improving on the existence of a weak attractor for the deterministic model. Furthermore, we show the existence of the invariant measure.
0
0
1
0
0
0
Analytical Approach for Calculating Chemotaxis Sensitivity Function
We consider the chemotaxis problem for a one-dimensional system. To analyze the interaction of bacteria and attractant we use a modified Keller-Segel model which accounts for attractant absorption. To describe the system we use the chemotaxis sensitivity function, which characterizes the nonuniformity of the bacteria distribution. In particular, we investigate how the chemotaxis sensitivity function depends on the concentration of attractant at the boundary of the system. It is known that in the system without absorption the chemotaxis sensitivity function has a bell-shaped maximum. Here we show that attractant absorption and special boundary conditions for bacteria can cause the appearance of an additional maximum in the chemotaxis sensitivity function. The value of this maximum is determined by the intensity of absorption.
0
1
0
0
0
0
Copy the dynamics using a learning machine
Is it possible, in general, to construct a dynamical system that simulates a black-box system without recovering the equations of motion of the latter? Here we show that this goal can be approached by a learning machine. Trained on a set of input-output responses or a segment of time series of a black-box system, a learning machine can serve as a copy system that mimics the dynamics of various black-box systems. It can not only behave as the black-box system at the parameter set at which the training data were generated, but can also reproduce the evolution history of the black-box system. As a result, the learning machine provides an effective way to make predictions, and enables one to probe the global dynamics of a black-box system. These findings have significance for practical systems whose equations of motion cannot be approached accurately. Examples of copying the dynamics of an artificial neural network, the Lorenz system, and a variable star are given. Our idea paves a possible way towards copying a living brain.
0
1
1
1
0
0
Algebraic Bethe ansatz for the XXZ Heisenberg spin chain with triangular boundaries and the corresponding Gaudin model
The implementation of the algebraic Bethe ansatz for the XXZ Heisenberg spin chain of arbitrary spin $s$, in the case when both reflection matrices have the upper-triangular form, is analyzed. The general form of the Bethe vectors is studied. In a particular form, the Bethe vectors admit a recurrent procedure, with an appropriate modification, used previously in the case of the XXX Heisenberg chain. As expected, these Bethe vectors yield a strikingly simple expression for the off-shell action of the transfer matrix of the chain, as well as the spectrum of the transfer matrix and the corresponding Bethe equations. As in the XXX case, the so-called quasi-classical limit gives the off-shell action of the generating function of the corresponding trigonometric Gaudin Hamiltonians with boundary terms.
0
1
0
0
0
0
Using Programmable Graphene Channels as Weights in Spin-Diffusive Neuromorphic Computing
A graphene-based spin-diffusive (GrSD) neural network is presented in this work that takes advantage of the locally tunable spin transport of graphene and the non-volatility of nanomagnets. By using electrostatically gated graphene as spintronic synapses, a weighted summation operation can be performed in the spin domain while the weights can be programmed using circuits in the charge domain. Four-component spin/charge circuit simulations coupled to magnetic dynamics are used to show the feasibility of the neuron-synapse functionality and to quantify the analog weighting capability of the graphene under different spin relaxation mechanisms. By realizing transistor-free weight implementation, the graphene spin-diffusive neural network reduces the energy consumption to 0.08-0.32 fJ per cell-synapse and achieves significantly better scalability compared to its digital counterparts, particularly as the number and bit accuracy of the synapses increase.
0
1
0
0
0
0
Construction and Encoding of QC-LDPC Codes Using Group Rings
Quasi-cyclic (QC) low-density parity-check (LDPC) codes, which are known as QC-LDPC codes, have many applications due to their simple encoding implementation by means of cyclic shift registers. In this paper, we construct QC-LDPC codes from group rings. A group ring is a free module (and at the same time a ring) constructed in a natural way from any given ring and any given group. We present a structure based on the elements of a group ring for constructing QC-LDPC codes. Some of the previously addressed methods for constructing QC-LDPC codes based on finite fields are special cases of the proposed construction method. The constructed QC-LDPC codes perform very well over the additive white Gaussian noise (AWGN) channel with iterative decoding in terms of bit-error probability and block-error probability. Simulation results demonstrate that the proposed codes have competitive performance in comparison with similar existing LDPC codes. Finally, we propose a new encoding method for the proposed group ring based QC-LDPC codes that can be implemented faster than the current encoding methods. The encoding complexity of the proposed method is analyzed mathematically and indicates a significant reduction in the required number of operations, even when compared to the available efficient encoding methods that have linear time and space complexities.
1
0
1
0
0
0
Imaginary time, shredded propagator method for large-scale GW calculations
The GW method is a many-body approach capable of providing quasiparticle bands for realistic systems spanning physics, chemistry, and materials science. Despite its power, GW is not routinely applied to large complex materials due to its computational expense. We perform an exact recasting of the GW polarizability and the self-energy as Laplace integrals over imaginary time propagators. We then "shred" the propagators (via energy windowing) and approximate them in a controlled manner by using Gauss-Laguerre quadrature and discrete variable methods to treat the imaginary time propagators in real space. The resulting cubic scaling GW method has a sufficiently small prefactor to outperform standard quartic scaling methods on small systems (>=10 atoms) and also represents a substantial improvement over several other cubic methods tested. This approach is useful for evaluating quantum mechanical response functions involving large sums containing energy (difference) denominators.
0
1
0
0
0
0
A time change strategy to model reporting delay dynamics in claims reserving
This paper considers the problem of predicting the number of claims that have already been incurred in past exposure years, but which have not yet been reported to the insurer. This is an important building block in the risk management strategy of an insurer since the company should be able to fulfill its liabilities with respect to such claims. Our approach puts emphasis on modeling the time between the occurrence and reporting of claims, the so-called reporting delay. Using data at a daily level, we propose a micro-level model for the heterogeneity in reporting delay caused by calendar day effects in the reporting process, such as the weekday pattern and holidays. A simulation study identifies the strengths and weaknesses of our approach in several scenarios compared to traditional methods for predicting the number of incurred but not reported claims from aggregated data (i.e. the chain ladder method). We also illustrate our model on a European general liability insurance data set and conclude that the granular approach is more robust with respect to volatility in the occurrence process than the chain ladder method. Our framework can be extended to other predictive problems where interest lies in events that occurred in the past but which are subject to an observation delay (e.g. the number of infections during an epidemic).
0
0
0
0
0
1
Solving Parameter Estimation Problems with Discrete Adjoint Exponential Integrators
The solution of inverse problems in a variational setting finds best estimates of the model parameters by minimizing a cost function that penalizes the mismatch between model outputs and observations. The gradients required by the numerical optimization process are computed using adjoint models. Exponential integrators are a promising family of time discretizations for evolutionary partial differential equations. In order to allow the use of these discretizations in the context of inverse problems, adjoints of exponential integrators are required. This work derives the discrete adjoint formulae for the W-type exponential propagation iterative methods of Runge-Kutta type (EPIRK-W). These methods allow arbitrary approximations of the Jacobian while maintaining the overall accuracy of the forward integration. The use of Jacobian approximation matrices that do not depend on the model state avoids the complex calculation of Hessians in the discrete adjoint formulae, and allows efficient adjoint code generation via algorithmic differentiation. We use the discrete EPIRK-W adjoints to solve inverse problems with the Lorenz-96 model and a computational magnetics benchmark test. Numerical results validate our theoretical derivations.
1
0
1
0
0
0
Induced and intrinsic Hashiguchi connections on Finsler submanifolds
We study the geometry of Finsler submanifolds using the pulled-back approach. We define the Finsler normal pulled-back bundle and obtain the induced geometric objects, namely, the induced pullback Finsler connection, the normal pullback Finsler connection, the second fundamental form and the shape operator. Under a certain condition, we prove that the induced and intrinsic Hashiguchi connections coincide on the pulled-back bundle of a Finsler submanifold.
0
0
1
0
0
0
Iterative PET Image Reconstruction Using Convolutional Neural Network Representation
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely and successfully used in computer vision tasks and have attracted growing interest in medical imaging. In this work, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform the neural network denoising and conventional penalized maximum likelihood methods.
0
1
0
1
0
0
Wave-Shaped Round Functions and Primitive Groups
Round functions used as building blocks for iterated block ciphers, both in the case of Substitution-Permutation Networks and Feistel Networks, are often obtained as the composition of different layers which provide confusion and diffusion, and of key additions. The bijectivity of any encryption function, crucial in order to make decryption possible, is guaranteed by the use of invertible layers or by the Feistel structure. In this work a new family of ciphers, called wave ciphers, is introduced. In wave ciphers, round functions feature wave functions, which are vectorial Boolean functions obtained as the composition of non-invertible layers, where the confusion layer enlarges the message, which returns to its original size after the diffusion layer is applied. This is motivated by the fact that relaxing the requirement that all the layers are invertible allows one to consider more functions which are optimal with regard to non-linearity. In particular, it allows one to consider injective APN S-boxes. In order to guarantee efficient decryption, we propose to use wave functions in Feistel Networks. With regard to security, the immunity from some group-theoretical attacks is investigated. In particular, it is shown how to prevent the group generated by the round functions from acting imprimitively, which would represent a serious flaw for the cipher.
1
0
1
0
0
0
Clean Floquet Time Crystals: Models and Realizations in Cold Atoms
Time crystals, a phase showing spontaneous breaking of time-translation symmetry, have been an intriguing subject for systems far from equilibrium. Recent experiments found such a phase both in the presence and in the absence of localization, while in theories localization by disorder is usually assumed a priori. In this work, we point out that time crystals can generally exist in systems without disorder. A series of clean quasi-one-dimensional models under Floquet driving are proposed to demonstrate this unexpected result in principle. Robust time crystalline orders are found in the strongly interacting regime, along with emergent integrals of motion in the dynamical system, which can be characterized by level statistics and out-of-time-ordered correlators. We propose two cold atom experimental schemes to realize the clean Floquet time crystals, one making use of dipolar gases and the other using synthetic dimensions.
0
1
0
0
0
0
Fractional Derivatives of Convex Lyapunov Functions and Control Problems in Fractional Order Systems
The paper is devoted to the development of control procedures with a guide for conflict-controlled dynamical systems described by ordinary fractional differential equations with the Caputo derivative of an order $\alpha \in (0, 1).$ For the case when the guide is in a certain sense a copy of the system, a mutual aiming procedure between the initial system and the guide is elaborated. The proof of proximity between motions of the systems is based on the estimate of the fractional derivative of the superposition of a convex Lyapunov function and a function represented by the fractional integral of an essentially bounded measurable function. This estimate can be considered as a generalization of the known estimates of such type. An example is considered which illustrates the workability of the proposed control procedures.
0
0
1
0
0
0
The role of tachysterol in vitamin D photosynthesis - A non-adiabatic molecular dynamics study
To investigate the role of tachysterol in the photophysical/chemical regulation of vitamin D photosynthesis, we studied its electronic absorption properties and excited state dynamics using time-dependent density functional theory (TDDFT), coupled cluster theory (CC2), and non-adiabatic molecular dynamics. In excellent agreement with experiments, the simulated electronic spectrum shows a broad absorption band covering the spectra of the other vitamin D photoisomers. The broad band stems from the spectral overlap of four different ground state rotamers. After photoexcitation, the first excited singlet state (S1) decays within 882 fs. The S1 dynamics is characterized by a strong twisting of the central double bond. 96% of all trajectories relax to the ground state without chemical transformation. In 2.3% of the trajectories we observed a [1,5]-sigmatropic hydrogen shift forming the partly deconjugated toxisterol D1. Previtamin D formation is observed in 1.4% of the trajectories via hula-twist double bond isomerization. We find a strong dependence between photoreactivity and dihedral angle conformation: the hydrogen shift only occurs in cEc and cEt rotamers, and double bond isomerization occurs mainly in cEc rotamers. Our study confirms the hypothesis that cEc rotamers are more prone to previtamin D formation than other isomers. We also observe the formation of a cyclobutene-toxisterol in the hot ground state (0.7%). Due to its strong absorption and unreactive behavior, tachysterol acts mainly as a sun shield suppressing previtamin D formation. Tachysterol shows stronger toxisterol formation than previtamin D. Absorption of low energy UV light by the cEc rotamer can lead to previtamin D formation. Our study reinforces a recent hypothesis that tachysterol can act as a previtamin D source when only low energy ultraviolet light is available, as is the case in winter or in the morning and evening hours of the day.
0
1
0
0
0
0
A Web of Hate: Tackling Hateful Speech in Online Social Spaces
Online social platforms are beset with hateful speech - content that expresses hatred for a person or group of people. Such content can frighten, intimidate, or silence platform users, and some of it can inspire other users to commit violence. Despite widespread recognition of the problems posed by such content, reliable solutions even for detecting hateful speech are lacking. In the present work, we establish why keyword-based methods are insufficient for detection. We then propose an approach to detecting hateful speech that uses content produced by self-identifying hateful communities as training data. Our approach bypasses the expensive annotation process often required to train keyword systems and performs well across several established platforms, making substantial improvements over current state-of-the-art approaches.
1
0
0
0
0
0
Lacunary Eta-quotients Modulo Powers of Primes
An integral power series is called lacunary modulo $M$ if almost all of its coefficients are divisible by $M$. Motivated by the parity problem for the partition function, $p(n)$, Gordon and Ono studied the generating functions for $t$-regular partitions, and determined conditions for when these functions are lacunary modulo powers of primes. We generalize their results in a number of ways by studying infinite products called Dedekind eta-quotients and generalized Dedekind eta-quotients. We then apply our results to the generating functions for the partition functions considered by Nekrasov, Okounkov, and Han.
0
0
1
0
0
0
An elementary approach to sofic groupoids
We describe sofic groupoids in elementary terms and prove several permanence properties for soficity. We show that soficity can be determined in terms of the full group alone, answering a question by Conley, Kechris and Tucker-Drob.
0
0
1
0
0
0
A New Combination of Message Passing Techniques for Receiver Design in MIMO-OFDM Systems
In this paper, we propose a new combined message passing algorithm which allows belief propagation (BP) and mean field (MF) to be applied to the same factor node, so that MF can be applied to hard constraint factors. Based on the proposed message passing algorithm, an iterative receiver is designed for MIMO-OFDM systems. Both BP and MF are exploited to deal with the hard constraint factor nodes involving the multiplication of channel coefficients and data symbols, reducing the complexity compared with using BP alone. The numerical results show that the BER performance of the proposed low complexity receiver closely approaches that of the state-of-the-art receiver, where only BP is used to handle the hard constraint factors, at high SNRs.
1
0
0
0
0
0
Modern Python at the Large Synoptic Survey Telescope
The LSST software systems make extensive use of Python, with almost all of it initially being developed solely in Python 2. Since LSST will be commissioned when Python 2 is end-of-lifed, it is critical that we have all our code support Python 3 before commissioning begins. Over the past year we have made significant progress in migrating the bulk of the code from the Data Management system onto Python 3. This paper presents our migration methodology and the current status of the port, with our eventual aim being to run completely on Python 3 by early 2018. We also discuss recent modernizations to our Python codebase.
0
1
0
0
0
0
Matrix Completion Methods for Causal Panel Data Models
In this paper we study methods for estimating causal effects in settings with panel data, where a subset of units are exposed to a treatment during a subset of periods, and the goal is estimating counterfactual (untreated) outcomes for the treated unit/period combinations. We develop a class of matrix completion estimators that uses the observed elements of the matrix of control outcomes corresponding to untreated unit/periods to predict the "missing" elements of the matrix, corresponding to treated units/periods. The approach estimates a matrix that well-approximates the original (incomplete) matrix, but has lower complexity according to the nuclear norm for matrices. From a technical perspective, we generalize results from the matrix completion literature by allowing the patterns of missing data to have a time series dependency structure. We also present novel insights concerning the connections between the matrix completion literature, the literature on interactive fixed effects models and the literatures on program evaluation under unconfoundedness and synthetic control methods.
0
0
1
0
0
0
Singular surfaces of revolution with prescribed unbounded mean curvature
We give an explicit formula for singular surfaces of revolution with prescribed unbounded mean curvature. Using it, we give conditions for the singularities of such surfaces. The periodicity of such surfaces is also discussed.
0
0
1
0
0
0
A Possible Mechanism for Driving Oscillations in Hot Giant Planets
The $\kappa$-mechanism has been successful in explaining the origin of observed oscillations of many types of "classical" pulsating variable stars. Here we examine quantitatively if that same process is prominent enough to excite the potential global oscillations within Jupiter, whose energy flux is powered by gravitational collapse rather than nuclear fusion. Additionally, we examine whether external radiative forcing, i.e. starlight, could be a driver for global oscillations in hot Jupiters orbiting various main-sequence stars at defined orbital semimajor axes. Using planetary models generated by the Modules for Experiments in Stellar Astrophysics (MESA) and nonadiabatic oscillation calculations, we confirm that Jovian oscillations cannot be driven via the $\kappa$-mechanism. However, we do show that in hot Jupiters oscillations can likely be excited via the suppression of radiative cooling due to external radiation, given a large enough stellar flux and the absence of a significant oscillatory damping zone within the planet. This trend does not seem to depend on the planetary mass. In future observations we can thus expect that such planets may be pulsating, thereby giving greater insight into the internal structure of these bodies.
0
1
0
0
0
0
Cubic Fields: A Primer
We classify all cubic extensions of any field of arbitrary characteristic, up to isomorphism, via an explicit construction involving three fundamental types of cubic forms. We deduce a classification of any Galois cubic extension of a field. The splitting and ramification of places in a separable cubic extension of any global function field are completely determined, and precise Riemann-Hurwitz formulae are given. In doing so, we determine the decomposition of any cubic polynomial over a finite field.
0
0
1
0
0
0
The GBT Beam Shape at 109 GHz
With the installation of the Argus 16-pixel receiver covering 75-115 GHz on the Green Bank Telescope (GBT), it is now possible to characterize the antenna beam at very high frequencies, where the use of the active surface and out-of-focus holography are critical to the telescope's performance. A recent measurement in good weather conditions (low atmospheric opacity, low winds, and stable night-time thermal conditions) at 109.4 GHz yielded a FWHM beam of 6.7"x6.4" in azimuth and elevation, respectively. This corresponds to 1.16+/-0.03 Lambda/D at 109.4 GHz. The derived ratio agrees well with the low-frequency value of 1.18+/-0.03 Lambda/D measured at 9.0 GHz. There are no detectable side-lobes at either frequency. In good weather conditions and after applying the standard antenna corrections (pointing, focus, and the active surface corrections for gravity and thermal effects), there is no measurable degradation of the beam of the GBT at its highest operational frequencies.
0
1
0
0
0
0
Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC
It is challenging to develop stochastic gradient based scalable inference for deep discrete latent variable models (LVMs), due to the difficulties in not only computing the gradients, but also adapting the step sizes to different latent factors and hidden layers. For the Poisson gamma belief network (PGBN), a recently proposed deep discrete LVM, we derive an alternative representation that is referred to as deep latent Dirichlet allocation (DLDA). Exploiting data augmentation and marginalization techniques, we derive a block-diagonal Fisher information matrix and its inverse for the simplex-constrained global model parameters of DLDA. Exploiting that Fisher information matrix with stochastic gradient MCMC, we present topic-layer-adaptive stochastic gradient Riemannian (TLASGR) MCMC that jointly learns simplex-constrained global parameters across all layers and topics, with topic and layer specific learning rates. State-of-the-art results are demonstrated on big data sets.
1
0
0
1
0
0
Bridging Finite and Super Population Causal Inference
There are two general views in causal analysis of experimental data: the super population view that the units are an independent sample from some hypothetical infinite population, and the finite population view that the potential outcomes of the experimental units are fixed and the randomness comes solely from the physical randomization of the treatment assignment. These two views differ conceptually and mathematically, resulting in different sampling variances of the usual difference-in-means estimator of the average causal effect. Practically, however, these two views result in identical variance estimators. By recalling a variance decomposition and exploiting a completeness-type argument, we establish a connection between these two views in completely randomized experiments. This alternative formulation could serve as a template for bridging finite and super population causal inference in other scenarios.
0
0
1
1
0
0
Deep Reinforcement Learning that Matters
In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
1
0
0
1
0
0
Testing Docker Performance for HPC Applications
The main goal of this article is to compare the performance penalties when using KVM virtualization and Docker containers for creating isolated environments for HPC applications. The article provides data obtained using both commonly accepted synthetic tests (High Performance Linpack) and real-life applications (OpenFOAM). The article highlights the influence of major infrastructure configuration options on the resulting application performance: the CPU type presented to the VM and the networking connection type used.
1
0
0
0
0
0
Can Neural Machine Translation be Improved with User Feedback?
We present the first real-world application of methods for improving neural machine translation (NMT) with human reinforcement, based on explicit and implicit user feedback collected on the eBay e-commerce platform. Previous work has been confined to simulation experiments, whereas in this paper we work with real logged feedback for offline bandit learning of NMT parameters. We conduct a thorough analysis of the available explicit user judgments---five-star ratings of translation quality---and show that they are not reliable enough to yield significant improvements in bandit learning. In contrast, we successfully utilize implicit task-based feedback collected in a cross-lingual search task to improve task-specific and machine translation quality metrics.
0
0
0
1
0
0
A new concept multi-stage Zeeman decelerator: experimental implementation
We demonstrate the successful experimental implementation of a multi-stage Zeeman decelerator utilizing the new concept described in the accompanying paper. The decelerator consists of an array of 25 hexapoles and 24 solenoids. The performance of the decelerator in acceleration, deceleration and guiding modes is characterized using beams of metastable Helium ($^3S$) atoms. Up to 60% of the kinetic energy was removed for He atoms that have an initial velocity of 520 m/s. The hexapoles consist of permanent magnets, whereas the solenoids are produced from a single hollow copper capillary through which cooling liquid is passed. The solenoid design allows for excellent thermal properties, and enables the use of readily available and cheap electronics components to pulse high currents through the solenoids. The Zeeman decelerator demonstrated here is mechanically easy to build, can be operated with cost-effective electronics, and can run at repetition rates up to 10 Hz.
0
1
0
0
0
0
Adversarial Generation of Real-time Feedback with Neural Networks for Simulation-based Training
Simulation-based training (SBT) is gaining popularity as a low-cost and convenient training technique in a vast range of applications. However, for an SBT platform to be fully utilized as an effective training tool, it is essential that feedback on performance is provided automatically in real-time during training. It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT. Existing methods either have low effectiveness in improving novice skills or suffer from low efficiency, resulting in their inability to be used in real-time. In this paper, we propose a neural network based method to generate feedback using the adversarial technique. The proposed method utilizes a bounded adversarial update to minimize an L1 regularized loss via back-propagation. We empirically show that the proposed method can be used to generate simple, yet effective feedback. Also, it was observed to have high effectiveness and efficiency when compared to existing methods, thus making it a promising option for real-time feedback generation in SBT.
1
0
0
1
0
0
Models of the strongly lensed quasar DES J0408-5354
We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A,B,D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of $\approx0.8$mag. In order to understand the lens configuration, we fit different models to the relative positions of A,B,D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\rm E}\approx0.2$") in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between $1.7"$ and $2.0",$ velocity dispersion $267-280$km/s and enclosed mass $\approx 6\times10^{11}M_{\odot},$ even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as $\approx 85$ (resp. $\approx125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.
0
1
0
0
0
0
Efficient Test-based Variable Selection for High-dimensional Linear Models
Variable selection plays a fundamental role in high-dimensional data analysis. Various methods have been developed for variable selection in recent years. Well-known examples are forward stepwise regression (FSR) and least angle regression (LARS), among others. These methods typically add variables into the model one by one. For such selection procedures, it is crucial to find a stopping criterion that controls model complexity. One of the most commonly used techniques to this end is cross-validation (CV) which, in spite of its popularity, has two major drawbacks: expensive computational cost and lack of statistical interpretation. To overcome these drawbacks, we introduce a flexible and efficient test-based variable selection approach that can be incorporated into any sequential selection procedure. The test, which is on the overall signal in the remaining inactive variables, is based on the maximal absolute partial correlation between the inactive variables and the response given active variables. We develop the asymptotic null distribution of the proposed test statistic as the dimension tends to infinity uniformly in the sample size. We also show that the test is consistent. With this test, at each step of the selection, a new variable is included if and only if the $p$-value is below some pre-defined level. Numerical studies show that the proposed method delivers very competitive performance in terms of variable selection accuracy and computational complexity compared to CV.
0
0
0
1
0
0
Star Cluster Formation from Turbulent Clumps. I. The Fast Formation Limit
We investigate the formation and early evolution of star clusters assuming that they form from a turbulent starless clump of given mass bounded inside a parent self-gravitating molecular cloud characterized by a particular mass surface density. As a first step we assume instantaneous star cluster formation and gas expulsion. We draw our initial conditions from observed properties of starless clumps. We follow the early evolution of the clusters up to 20 Myr, investigating effects of different star formation efficiencies, primordial binary fractions and eccentricities and primordial mass segregation levels. We investigate clumps with initial masses of $M_{\rm cl}=3000\:{\rm M}_\odot$ embedded in ambient cloud environments with mass surface densities, $\Sigma_{\rm cloud}=0.1$ and $1\:{\rm g\:cm^{-2}}$. We show that these models of fast star cluster formation result, in the fiducial case, in clusters that expand rapidly, even considering only the bound members. Clusters formed from higher $\Sigma_{\rm cloud}$ environments tend to expand more quickly, so are soon larger than clusters born from lower $\Sigma_{\rm cloud}$ conditions. To form a young cluster of a given age, stellar mass and mass surface density, these models need to assume a parent molecular clump that is many times denser, which is unrealistic compared to observed systems. We also show that in these models the initial binary properties are only slightly modified by interactions, meaning that binary properties, e.g., at 20 Myr, are very similar to those at birth. With this study we set up the basis of future work where we will investigate more realistic models of star formation compared to this instantaneous, baseline case.
0
1
0
0
0
0
A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series
Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30s of signal a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEG), electrooculograms (EOG), electrocardiograms (ECG) and electromyograms (EMG). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting hand-crafted features, that exploits all multivariate and multimodal Polysomnography (PSG) signals (EEG, EMG and EOG), and that can exploit the temporal context of each 30s window of data. For each modality the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatio-temporal distribution of the signal of interest: a good trade-off for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting one minute of data before and after each data segment offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
0
0
0
1
0
0
An anti-incursion algorithm for unknown probabilistic adversaries on connected graphs
A gambler moves on the vertices $1, \ldots, n$ of a graph using the probability distribution $p_{1}, \ldots, p_{n}$. A cop pursues the gambler on the graph, only being able to move between adjacent vertices. What is the expected number of moves that the gambler can make until the cop catches them? Komarov and Winkler proved an upper bound of approximately $1.97n$ for the expected capture time on any connected $n$-vertex graph when the cop does not know the gambler's distribution. We improve this upper bound to approximately $1.95n$ by modifying the cop's pursuit algorithm.
1
0
1
0
0
0
Virtual Breakpoints for x86/64
Efficient, reliable trapping of execution in a program at the desired location is a hot area of research for security professionals. The progression of debuggers and malware is akin to a game of cat and mouse - each is constantly trying to thwart the other. At the core of most efficient debuggers today is a combination of virtual machines and traditional binary modification breakpoints (int3). In this paper, we present a design for Virtual Breakpoints, a modification to the x86 MMU which brings breakpoint management into hardware alongside page tables. We demonstrate the fundamental abstraction failures of current trapping methods, and rebuild the mechanism from the ground up. Our design delivers fast, reliable trapping without the pitfalls of binary modification.
1
0
0
0
0
0
Accurate Multi-physics Numerical Analysis of Particle Preconcentration Based on Ion Concentration Polarization
This paper studies the mechanism of preconcentration of charged particles in a straight micro-channel embedded with permselective membranes, by numerically solving the coupled transport equations of ions, charged particles and solvent fluid without any simplifying assumptions. It is demonstrated that trapping and preconcentration of charged particles are determined by the interplay between the drag force from the electroosmotic fluid flow and the electrophoretic force applied through the electric field. Several insightful characteristics are revealed, including the diverse dynamics of co-ions and counter-ions, the replacement of co-ions by focused particles, lowered ion concentrations in the particle-enriched zone, and an enhanced electroosmotic pumping effect. Conditions for particles that may be concentrated are identified in terms of the charges, sizes and electrophoretic mobilities of the particles and co-ions. Dependences of the enrichment factor on cross-membrane voltage, initial particle concentration and buffer ion concentrations are analyzed and the underlying reasons are elaborated. Finally, a condition for the validity of the decoupled simulation model is given a posteriori, based on the charges carried by the focused charged particles and those carried by the buffer co-ions. These results provide important guidance for the design and optimization of nanofluidic preconcentration and other related devices.
0
1
0
0
0
0
Learning to Embed Words in Context for Syntactic Tasks
We present models for embedding words in the context of surrounding words. Such models, which we refer to as token embeddings, represent the characteristics of a word that are specific to a given context, such as word sense, syntactic category, and semantic role. We explore simple, efficient token embedding models based on standard neural network architectures. We learn token embeddings on a large amount of unannotated text and evaluate them as features for part-of-speech taggers and dependency parsers trained on much smaller amounts of annotated data. We find that predictors endowed with token embeddings consistently outperform baseline predictors across a range of context window and training set sizes.
1
0
0
0
0
0
Modeling Impact of Human Errors on the Data Unavailability and Data Loss of Storage Systems
Data storage systems and their availability play a crucial role in contemporary datacenters. Despite the use of mechanisms such as automatic fail-over in datacenters, the role of human agents, and consequently their destructive errors, is inevitable. Due to the very large number of disk drives used in exascale datacenters and their high failure rates, the disk subsystem in storage systems has become a major source of Data Unavailability (DU) and Data Loss (DL) initiated by human errors. In this paper, we investigate the effect of Incorrect Disk Replacement Service (IDRS) on the availability and reliability of data storage systems. To this end, we analyze the consequences of IDRS in a disk array, and conduct Monte Carlo simulations to evaluate DU and DL during mission time. The proposed modeling framework can cope with a) different storage array configurations and b) Data Object Survivability (DOS), representing the effect of system level redundancies such as remote backups and mirrors. In the proposed framework, the model parameters are obtained from industrial and scientific reports alongside field data extracted from a datacenter operating with 70 storage racks. The results show that ignoring the impact of IDRS leads to unavailability underestimation by up to three orders of magnitude. Moreover, our study suggests that by considering the effect of human errors, the conventional beliefs about the dependability of different Redundant Array of Independent Disks (RAID) mechanisms should be revised. The results show that RAID1 can result in lower availability compared to RAID5 in the presence of human errors. The results also show that employing an automatic fail-over policy (using hot spare disks) can reduce the drastic impacts of human errors by two orders of magnitude.
1
0
0
0
0
0
Gaussian and Sparse Processes Are Limits of Generalized Poisson Processes
The theory of sparse stochastic processes offers a broad class of statistical models to study signals. In this framework, signals are represented as realizations of random processes that are solutions of linear stochastic differential equations driven by white Lévy noises. Among these processes, generalized Poisson processes based on compound-Poisson noises admit an interpretation as random L-splines with random knots and weights. We demonstrate that every generalized Lévy process, from Gaussian to sparse, can be understood as the limit in law of a sequence of generalized Poisson processes. This enables a new conceptual understanding of sparse processes and suggests simple algorithms for the numerical generation of such objects.
1
0
1
0
0
0
Survival time of Princess Kaguya in an air-tight bamboo chamber
Princess Kaguya is the heroine of a famous folk tale, as every Japanese person knows. She was assumed to be confined in a bamboo cavity of cylindrical shape, and then fortuitously discovered by an elderly man in the forest. Here, we pose a question as to how long she could have survived in an enclosed space such as the bamboo chamber, which had no external oxygen supply at all. We demonstrate that the survival time should be determined by three geometric quantities: the inner volume of the bamboo chamber, the volumetric size of her body, and her body's total surface area, which governs the rate of oxygen consumption in the body. We also emphasize that this geometric problem sheds light on an interesting scaling relation between biological quantities for living organisms.
0
1
0
0
0
0
Descriptor System Tools (DSTOOLS) User's Guide
The Descriptor System Tools (DSTOOLS) is a collection of MATLAB functions for the operation on and manipulation of rational transfer function matrices via their descriptor system realizations. The DSTOOLS collection relies on the Control System Toolbox and several mex-functions based on the Systems and Control Library SLICOT. Many of the implemented functions are based on the computational procedures described in Chapter 10 of the book: "A. Varga, Solving Fault Diagnosis Problems - Linear Synthesis Techniques, Springer, 2017". This document is the User's Guide for the version V0.71 of DSTOOLS. First, we present the mathematical background on rational matrices and descriptor systems. Then, we give in-depth information on the command syntax of the main computational functions. Several examples illustrate the use of the main functions of DSTOOLS.
1
0
0
0
0
0
Laplacian Prior Variational Automatic Relevance Determination for Transmission Tomography
In classic sparsity-driven problems, the fundamental L1 penalty method has been shown to have good performance in reconstructing signals for a wide range of problems. However, this performance relies on a good choice of penalty weight, which is often found from empirical experiments. We propose an algorithm called the Laplacian variational automatic relevance determination (Lap-VARD) that takes this penalty weight as a parameter of a prior Laplace distribution. Optimization of this parameter using an automatic relevance determination framework results in a balance between the sparsity and accuracy of signal reconstruction. Our algorithm is implemented in a transmission tomography model with a sparsity constraint in the wavelet domain.
0
0
0
1
0
0
Mitigating Confirmation Bias on Twitter by Recommending Opposing Views
In this work, we propose a content-based recommendation approach to increase exposure to opposing beliefs and opinions. Our aim is to help provide users with more diverse viewpoints on issues, which are discussed in partisan groups from different perspectives. Since, due to the backfire effect, people's original beliefs tend to strengthen when challenged with counter-evidence, we need to expose them to opposing viewpoints at the right time. The preliminary work presented here describes our first step in this direction. As an illustrative showcase, we take the political debate on Twitter around the presidency of Donald Trump.
1
0
0
0
0
0
First-principles based Landau-Devonshire potential for BiFeO$_3$
The work describes a first-principles-based computational strategy for studying structural phase transitions, and in particular, for determination of the so-called Landau-Devonshire potential - the classical zero-temperature limit of the Gibbs energy, expanded in terms of order parameters. It exploits the configuration space attached to the eigenvectors of the modes frozen in the ground state, rather than the space spanned by the unstable modes of the high-symmetry phase, as done usually. This allows us to carefully probe the part of the energy surface in the vicinity of the ground state, which is most relevant for the properties of the ordered phase. We apply this procedure to BiFeO$_3$ and perform ab-initio calculations in order to determine potential energy contributions associated with strain, polarization and oxygen octahedra tilt degrees of freedom, compatible with its two-formula unit cell periodic boundary conditions.
0
1
0
0
0
0
A topological characterization of the omega-limit sets of analytic vector fields on open subsets of the sphere
In [15], V. Jimenez and J. Llibre characterized, up to homeomorphism, the omega limit sets of analytic vector fields on the sphere and the projective plane. The authors also studied the same problem for open subsets of these surfaces. Unfortunately, an essential lemma in their programme for general surfaces has a gap. Although the proof of this lemma can be amended in the case of the sphere, the plane, the projective plane and the projective plane minus one point (and therefore the characterizations for these surfaces in [8] are correct), the lemma is not generally true, see [15]. Consequently, the topological characterization for analytic vector fields on open subsets of the sphere and the projective plane is still pending. In this paper, we close this problem in the case of open subsets of the sphere.
0
0
1
0
0
0