text: string (138 to 2.38k characters)
labels: sequence of length 6
Predictions: sequence of length 1 to 3
Title: Estimation of Low-Rank Matrices via Approximate Message Passing, Abstract: Consider the problem of estimating a low-rank symmetric matrix when its entries are perturbed by Gaussian noise, a setting that is known as `spiked model' or `deformed Wigner matrix'. If the empirical distribution of the entries of the spikes is known, optimal estimators that exploit this knowledge can substantially outperform spectral approaches. Recent work characterizes the accuracy of Bayes-optimal estimators in the high-dimensional limit. In this paper we present a practical algorithm that can achieve Bayes-optimal accuracy above the spectral threshold. A bold conjecture from statistical physics posits that no polynomial-time algorithm achieves optimal error below the same threshold (unless the best estimator is trivial). Our approach uses Approximate Message Passing (AMP) in conjunction with a spectral initialization. AMP has proven successful in a variety of statistical problems, and is amenable to exact asymptotic analysis via state evolution. Unfortunately, state evolution is uninformative when the algorithm is initialized near an unstable fixed point, as often happens in matrix estimation. We develop a new analysis of AMP that allows for spectral initializations and builds on a decoupling between the outlier eigenvectors and the bulk in the spiked random matrix model. Our main theorem is general and applies beyond matrix estimation. However, we use it to derive detailed predictions for the problem of estimating a rank-one matrix in noise. Special cases of this problem are closely related - via universality arguments - to the network community detection problem for two asymmetric communities. For general rank-one models, we show that AMP can be used to construct asymptotically valid confidence intervals. As a further illustration, we consider the example of a block-constant low-rank matrix with symmetric blocks, which we refer to as `Gaussian Block Model'.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics", "Computer Science" ]
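A minimal sketch of the kind of algorithm this abstract describes (not the authors' exact estimator): rank-one AMP on a spiked Wigner matrix, with a spectral initialization and a tanh posterior-mean denoiser for a ±1 prior. The signal strength `lam`, problem size `n`, iteration count, and the simplified denoiser scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 2.0                            # illustrative size and signal strength

# spiked Wigner model: rank-one signal plus symmetric Gaussian noise
v = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2.0)
A = (lam / n) * np.outer(v, v) + W / np.sqrt(n)

# spectral initialization: scaled top eigenvector of A
x = np.sqrt(n) * np.linalg.eigh(A)[1][:, -1]

m_prev = np.zeros(n)
for _ in range(10):
    m = np.tanh(x)                            # posterior-mean denoiser for a +-1 prior (simplified scaling)
    b = np.mean(1.0 - m ** 2)                 # Onsager correction coefficient
    x, m_prev = A @ m - b * m_prev, m

overlap = abs(np.dot(np.tanh(x), v)) / n      # normalized overlap with the planted spike
```

Above the spectral threshold (here lam = 2 > 1) the overlap stays bounded away from zero; below it, the same iteration started from the bulk would be uninformative.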
Title: Arithmetic Siegel-Weil formula on $X_{0}(N)$, Abstract: In this paper, we prove an arithmetic Siegel-Weil formula and the modularity of some arithmetic theta functions on the modular curve $X_0(N)$ when $N$ is squarefree. In the process, we also construct a generalized Delta function for $\Gamma_0(N)$ and prove an explicit Kronecker limit formula for Eisenstein series on $X_0(N)$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Perturbation, Non-Gaussianity and Reheating in a GB-$α$-Attractor Model, Abstract: Motivated by $\alpha$-attractor models, in this paper we consider Gauss-Bonnet inflation with an E-model type of potential. We take the Gauss-Bonnet coupling function to be the same as the E-model potential. In the small $\alpha$ limit we obtain an attractor at $r=0$ as expected, and in the large $\alpha$ limit we recover the Gauss-Bonnet model with potential and coupling function of the form $\phi^{2n}$. We study perturbations and non-Gaussianity in this setup and find some constraints on the model's parameters by comparison with the Planck datasets. We also study the reheating epoch after inflation in this setup. For this purpose, we obtain the number of e-folds and the temperature during the reheating epoch. These quantities depend on the model's parameters and on the effective equation of state of the dominant energy density in the reheating era. We find some observational constraints on these parameters.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Training wide residual networks for deployment using a single bit for each weight, Abstract: For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight. Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization. For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively. We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively. For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights. For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters. This applies to both full precision and 1-bit-per-weight networks. Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100. For full training code and trained models in MATLAB, Keras and PyTorch see this https URL .
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
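The binarization scheme this abstract describes can be sketched as follows (a hedged illustration, not the paper's training code): deployment weights are the sign of the learned weights times a constant, unlearned per-layer scale equal to the layer's initialization standard deviation. The He-style formula and layer sizes are illustrative assumptions.

```python
import numpy as np

def he_std(fan_in):
    # layer-specific standard deviation used at initialization (He-style)
    return np.sqrt(2.0 / fan_in)

def binarize(W, scale):
    # deployment weights: sign of the learned weight times a fixed, unlearned scale
    return scale * np.sign(W)

rng = np.random.default_rng(1)
fan_in, fan_out = 64, 32
W = rng.normal(0.0, he_std(fan_in), size=(fan_in, fan_out))   # full-precision shadow weights
Wb = binarize(W, he_std(fan_in))                              # 1 bit per weight + one shared float
```

Storing `Wb` requires one bit per entry plus a single scalar per layer, which is what makes the 1-bit-per-weight deployment memory figures possible.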
Title: IP Based Traffic Recovery: An Optimal Approach using SDN Application for Data Center Network, Abstract: With the passage of time and the growth of Information Technology, network management has proved its significance and has become one of the most important and challenging tasks in today's era of information flow. Communication networks implement a high level of sophistication in managing and routing data through secure channels, making data loss almost impossible. That is why many proposed solutions are currently implemented in a wide range of network-based applications such as social networks and finance applications. The objective of this paper is to propose a reliable method of data flow: choosing the best path for traffic using an SDN application. This is an IP-based method in which our SDN application provides the best possible path by filtering requests on the basis of their IP origin. We distinguish among sources and route the data flow along the lowest-traffic path, thus minimizing the chances of data loss. A request to access our test application is generated from a host; each host's request is distinguished by our SDN application, which obtains the number of active users for all available servers and redirects the request to the server with the minimum traffic load. The application also destroys the sessions of inactive users, maintaining a responsive channel for data flow.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
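The core routing rule described above is simply "send the request to the server with the fewest active users". A minimal sketch, with a hypothetical load snapshot and helper name (neither comes from the paper):

```python
def pick_server(active_users):
    # route the request to the server currently serving the fewest active users
    return min(active_users, key=active_users.get)

# hypothetical per-server load snapshot kept by the SDN application
load = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}
best = pick_server(load)          # "10.0.0.2"
```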
Title: Finding influential nodes for integration in brain networks using optimal percolation theory, Abstract: Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here we apply optimal percolation theory and pharmacogenetic interventions in-vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: Noise induced transition in Josephson junction with second harmonic, Abstract: We show a noise-induced transition in a Josephson junction with a fundamental as well as a second harmonic. A periodically modulated multiplicative colored noise can stabilize an unstable configuration in such a system. The stabilization of the unstable configuration is captured in the effective potential of the system, obtained by integrating out the high-frequency components of the noise. This is a classical approach to understanding the stability of an unstable configuration in the presence of such stochasticity, and our numerical analysis confirms the prediction of the analytical calculation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Local and global boundary rigidity and the geodesic X-ray transform in the normal gauge, Abstract: In this paper we analyze the local and global boundary rigidity problem for general Riemannian manifolds with boundary $(M,g)$ whose boundary is strictly convex. We show that the boundary distance function, i.e., $d_g|_{\partial M\times\partial M}$, known over suitable open sets of $\partial M$ determines $g$ in suitable corresponding open subsets of $M$, up to the natural diffeomorphism invariance of the problem. We also show that if there is a function on $M$ with suitable convexity properties relative to $g$ then $d_g|_{\partial M\times\partial M}$ determines $g$ globally in the sense that if $d_g|_{\partial M\times\partial M}=d_{\tilde g}|_{\partial M\times \partial M}$ then there is a diffeomorphism $\psi$ fixing $\partial M$ (pointwise) such that $g=\psi^*\tilde g$. This global assumption is satisfied, for instance, for the distance function from a given point if the manifold has no focal points (from that point). We also consider the lens rigidity problem. The lens relation measures the point of exit from $M$ and the direction of exit of geodesics issued from the boundary and the length of the geodesic. The lens rigidity problem is whether we can determine the metric up to isometry from the lens relation. We solve the lens rigidity problem under the same global assumption mentioned above. This shows, for instance, that manifolds with a strictly convex boundary and non-positive sectional curvature are lens rigid. The key tool is the analysis of the geodesic X-ray transform on 2-tensors, corresponding to a metric $g$, in the normal gauge, such as normal coordinates relative to a hypersurface, where one also needs to allow microlocal weights. This is handled by refining and extending our earlier results in the solenoidal gauge.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Adversarial Source Identification Game with Corrupted Training, Abstract: We study a variant of the source identification game with training data in which part of the training data is corrupted by an attacker. In the addressed scenario, the defender aims at deciding whether a test sequence has been drawn according to a discrete memoryless source $X \sim P_X$, whose statistics are known to him through the observation of a training sequence generated by $X$. In order to undermine the correct decision under the alternative hypothesis that the test sequence has not been drawn from $X$, the attacker can modify a sequence produced by a source $Y \sim P_Y$ up to a certain distortion, and corrupt the training sequence either by adding some fake samples or by replacing some samples with fake ones. We derive the unique rationalizable equilibrium of the two versions of the game in the asymptotic regime and by assuming that the defender bases its decision by relying only on the first order statistics of the test and the training sequences. By mimicking Stein's lemma, we derive the best achievable performance for the defender when the first type error probability is required to tend to zero exponentially fast with an arbitrarily small, yet positive, error exponent. We then use such a result to analyze the ultimate distinguishability of any two sources as a function of the allowed distortion and the fraction of corrupted samples injected into the training sequence.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Localization and Stationary Phase Approximation on Supermanifolds, Abstract: Given an odd vector field $Q$ on a supermanifold $M$ and a $Q$-invariant density $\mu$ on $M$, under certain compactness conditions on $Q$, the value of the integral $\int_{M}\mu$ is determined by the value of $\mu$ on any neighborhood of the vanishing locus $N$ of $Q$. We present a formula for the integral in the case where $N$ is a subsupermanifold which is appropriately non-degenerate with respect to $Q$. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend stationary phase approximation and the Morse-Bott Lemma to supermanifolds.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: An Analytical Design Optimization Method for Electric Propulsion Systems of Multicopter UAVs with Desired Hovering Endurance, Abstract: Multicopters are becoming increasingly important in both civil and military fields. Currently, most multicopter propulsion systems are designed by experience and trial-and-error experiments, which are costly and ineffective. This paper proposes a simple and practical method to help designers find the optimal propulsion system according to the given design requirements. First, the modeling methods for the four basic components of the propulsion system, including propellers, motors, electronic speed controllers, and batteries, are studied respectively. Second, the whole optimization design problem is simplified and decoupled into several sub-problems. By solving these sub-problems, the optimal parameters of each component can be obtained respectively. Finally, based on the obtained optimal component parameters, the optimal product of each component can be quickly located and determined from the corresponding database. Experiments and statistical analyses demonstrate the effectiveness of the proposed method.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Engineering" ]
Title: Correlation between clustering and degree in affiliation networks, Abstract: We are interested in the probability that two randomly selected neighbors of a random vertex of degree (at least) $k$ are adjacent. We evaluate this probability for a power law random intersection graph, where each vertex is prescribed a collection of attributes and two vertices are adjacent whenever they share a common attribute. We show that the probability obeys the scaling $k^{-\delta}$ as $k\to+\infty$. Our results are mathematically rigorous. The parameter $0\le \delta\le 1$ is determined by the tail indices of power law random weights defining the links between vertices and attributes.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Predicting Financial Crime: Augmenting the Predictive Policing Arsenal, Abstract: Financial crime is a rampant but hidden threat. In spite of this, predictive policing systems disproportionately target "street crime" rather than white collar crime. This paper presents the White Collar Crime Early Warning System (WCCEWS), a white collar crime predictive model that uses random forest classifiers to identify high risk zones for incidents of financial crime.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Finance", "Statistics" ]
Title: Resonant Scattering Characteristics of Homogeneous Dielectric Sphere, Abstract: In the present article the classical problem of electromagnetic scattering by a single homogeneous sphere is revisited. The main focus is the study of the scattering behavior as a function of the material contrast and the size parameters for all electric and magnetic resonances of a dielectric sphere. Specifically, the Padé approximants are introduced and utilized as an alternative system expansion of the Mie coefficients. Low-order Padé approximants can give compact and physically insightful expressions for the scattering system and the enabled dynamic mechanisms. Higher-order approximants are used for predicting accurately the resonant pole spectrum. These results are summarized into general pole formulae, covering up to fifth-order magnetic and fourth-order electric resonances of a small dielectric sphere. Additionally, the connection between the radiative damping process and the resonant linewidth is investigated. The results obtained reveal the fundamental connection of the radiative damping mechanism with the maximum width occurring for each resonance. Finally, the suggested system ansatz is used for studying the resonant absorption maximum through a circuit-inspired perspective.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Poisson brackets symmetry from the pentagon-wheel cocycle in the graph complex, Abstract: Kontsevich designed a scheme to generate infinitesimal symmetries $\dot{\mathcal{P}} = \mathcal{Q}(\mathcal{P})$ of Poisson brackets $\mathcal{P}$ on all affine manifolds $M^r$; every such deformation is encoded by oriented graphs on $n+2$ vertices and $2n$ edges. In particular, these symmetries can be obtained by orienting sums of non-oriented graphs $\gamma$ on $n$ vertices and $2n-2$ edges. The bi-vector flow $\dot{\mathcal{P}} = \text{Or}(\gamma)(\mathcal{P})$ preserves the space of Poisson structures if $\gamma$ is a cocycle with respect to the vertex-expanding differential in the graph complex. A class of such cocycles $\boldsymbol{\gamma}_{2\ell+1}$ is known to exist: marked by $\ell \in \mathbb{N}$, each of them contains a $(2\ell+1)$-gon wheel with a nonzero coefficient. At $\ell=1$ the tetrahedron $\boldsymbol{\gamma}_3$ itself is a cocycle; at $\ell=2$ the Kontsevich--Willwacher pentagon-wheel cocycle $\boldsymbol{\gamma}_5$ consists of two graphs. We reconstruct the symmetry $\mathcal{Q}_5(\mathcal{P}) = \text{Or}(\boldsymbol{\gamma}_5)(\mathcal{P})$ and verify that $\mathcal{Q}_5$ is a Poisson cocycle indeed: $[\![\mathcal{P},\mathcal{Q}_5(\mathcal{P})]\!]\doteq 0$ via $[\![\mathcal{P},\mathcal{P}]\!]=0$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: An Iterative Scheme for Leverage-based Approximate Aggregation, Abstract: The current data explosion poses great challenges to approximate aggregation in terms of efficiency and accuracy. To address this problem, we propose a novel approach that calculates aggregation answers with high accuracy using only a small portion of the data. We introduce leverages to reflect individual differences among the samples from a statistical perspective. Two kinds of estimators, the leverage-based estimator and the sketch estimator (a "rough picture" of the aggregation answer), are in a constraint relation and are iteratively improved according to the actual conditions until their difference falls below a threshold. Owing to the iteration mechanism and the leverages, our approach achieves high accuracy. Moreover, features such as not requiring the sampled data to be recorded, and easy extension to various execution modes (e.g., the online mode), make our approach well suited to big data. Experiments show that our approach performs extraordinarily well, and compared with uniform sampling it achieves high-quality answers with only 1/3 of the sample size.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A Game of Martingales, Abstract: We consider a two player dynamic game played over $T \leq \infty$ periods. In each period each player chooses any probability distribution with support on $[0,1]$ with a given mean, where the mean is the realized value of the draw from the previous period. The player with the highest realization in the period achieves a payoff of $1$, and the other player, $0$; and each player seeks to maximize the discounted sum of his per-period payoffs over the whole time horizon. We solve for the unique subgame perfect equilibrium of this game, and establish properties of the equilibrium strategies and payoffs in the limit. The solution and comparative statics thereof provide insight about intertemporal choice with status concerns. In particular we find that patient players take fewer risks.
[ 0, 0, 0, 0, 0, 1 ]
[ "Mathematics", "Quantitative Finance" ]
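For one-period intuition (a classical observation, not the paper's full dynamic equilibrium): with common mean 1/2, playing the uniform distribution on [0,1] yields a win probability of exactly 1/2 against any opponent distribution with that mean, since P(U > y) = 1 - y has expectation 1 - E[Y] = 1/2. A Monte Carlo check with two illustrative mean-1/2 opponents:

```python
import random

random.seed(6)

def win_prob(opponent_sampler, trials=200_000):
    # P(a uniform [0,1] draw beats the opponent's draw), estimated by Monte Carlo
    wins = sum(random.random() > opponent_sampler() for _ in range(trials))
    return wins / trials

# two mean-1/2 opponents: a two-point distribution on {0, 1} and another uniform
two_point = lambda: float(random.random() < 0.5)
uniform = lambda: random.random()

p1 = win_prob(two_point)
p2 = win_prob(uniform)
```

Both estimates come out near 1/2, illustrating why mean-preserving spreads, rather than the shape of the distribution alone, drive the strategic trade-offs in the multi-period game.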
Title: Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning, Abstract: Multi-layer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting. One approach to this overfitting and related problems (local minima, collinearity, feature discovery, etc.) is called dropout (Srivastava et al., 2014; Baldi et al., 2016). This method removes hidden units with a Bernoulli random variable with probability $p$ over updates. In this paper we show that dropout is a special case of a more general model published originally in 1990, called the stochastic delta rule (SDR; Hanson, 1990). SDR parameterizes each weight in the network as a random variable with mean $\mu_{w_{ij}}$ and standard deviation $\sigma_{w_{ij}}$. These random variables are sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus implementing weight noise injections that reflect a local history of prediction error and efficient model averaging. SDR therefore implements a local gradient-dependent simulated annealing per weight, converging to a Bayes-optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet show that SDR outperforms standard dropout in error by over 50% and in loss by over 50%. Furthermore, the SDR implementation converges on a solution much faster, reaching a training error of 5 in just 15 epochs with DenseNet-40, compared to standard DenseNet-40's 94 epochs.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
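The relationship the abstract claims can be sketched in a few lines (illustrative shapes and noise levels, not the paper's training loop): SDR draws each weight from a Gaussian on every forward pass, while dropout multiplies by a Bernoulli mask, i.e. a degenerate two-point weight distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def sdr_sample(mu, sigma):
    # stochastic delta rule: every weight is Gaussian, resampled on each forward pass
    return rng.normal(mu, sigma)

def dropout_sample(w, p):
    # dropout: multiply by a Bernoulli mask -- a two-point weight distribution
    return w * rng.binomial(1, 1.0 - p, size=w.shape)

mu = np.ones((4, 4))
sigma = 0.1 * np.ones((4, 4))
w_sdr = sdr_sample(mu, sigma)         # SDR forward-pass weights
w_drop = dropout_sample(mu, p=0.5)    # dropout forward-pass weights
```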
Title: Kernel Regression with Sparse Metric Learning, Abstract: Kernel regression is a popular non-parametric fitting technique. It aims at learning a function which estimates the targets for test inputs as precisely as possible. Generally, the function value for a test input is estimated by a weighted average of the surrounding training examples. The weights are typically computed by a distance-based kernel function and they strongly depend on the distances between examples. In this paper, we first review the latest developments in sparse metric learning and kernel regression. Then a novel kernel regression method involving sparse metric learning, called kernel regression with sparse metric learning (KR$\_$SML), is proposed. The sparse kernel regression model is established by enforcing a mixed $(2,1)$-norm regularization over the metric matrix. It learns a Mahalanobis distance metric by a gradient descent procedure, which can simultaneously conduct dimensionality reduction and lead to good prediction results. Our work is the first to combine kernel regression with sparse metric learning. To verify the effectiveness of the proposed method, it is evaluated on 19 data sets for regression. Furthermore, the new method is also applied to forecasting short-term traffic flows. In the end, we compare the proposed method with three other related kernel regression methods on all test data sets under two criteria. Experimental results show that the proposed method is clearly more competitive.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
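A minimal sketch of the prediction side only (not the $(2,1)$-norm learning procedure): Nadaraya-Watson kernel regression under a Mahalanobis metric, where a sparse metric matrix zeroes out an irrelevant feature. The data, bandwidth, and hand-picked metric are illustrative assumptions.

```python
import numpy as np

def kernel_regress(X, y, x_test, M, bandwidth=1.0):
    # Nadaraya-Watson estimate under the metric d(x, z)^2 = (x - z)^T M (x - z)
    diffs = X - x_test
    d2 = np.einsum('ij,jk,ik->i', diffs, M, diffs)    # squared Mahalanobis distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
    return float(w @ y / w.sum())

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))
y = X[:, 0] ** 2                       # the target depends only on feature 0
M = np.diag([1.0, 0.0])                # a sparse metric that ignores feature 1
pred = kernel_regress(X, y, np.array([0.5, 3.0]), M)
```

Because `M` zeroes the second coordinate, the far-away value 3.0 in that coordinate does not distort the weights, which is the dimensionality-reduction effect a learned sparse metric aims for.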
Title: Learning MSO-definable hypotheses on strings, Abstract: We study classification problems over string data for hypotheses specified by formulas of monadic second-order logic MSO. The goal is to design learning algorithms that run in time polynomial in the size of the training set, independently of or at least sublinear in the size of the whole data set. We prove negative as well as positive results. If the data set is an unprocessed string to which our algorithms have local access, then learning in sublinear time is impossible even for hypotheses definable in a small fragment of first-order logic. If we allow for a linear-time pre-processing of the string data to build an index data structure, then learning of MSO-definable hypotheses is possible in time polynomial in the size of the training set, independently of the size of the whole data set.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: From semimetal to chiral Fulde-Ferrell superfluids, Abstract: The recent realization of two-dimensional (2D) synthetic spin-orbit (SO) coupling opens a broad avenue to study novel topological states for ultracold atoms. Here, we propose a new scheme to realize an exotic chiral Fulde-Ferrell superfluid for ultracold fermions, together with a generic theory showing that the topology of superfluid pairing phases can be determined from the normal states. The main findings are twofold. First, a semimetal is driven by a new type of 2D SO coupling whose realization is even simpler than in the recent experiment, and it can be tuned into massive Dirac fermion phases with or without inversion symmetry. Without inversion symmetry, a superfluid phase with nonzero pairing momentum is favored under an attractive interaction. Furthermore, we show a fundamental theorem that the topology of a 2D chiral superfluid can be uniquely determined from the unpaired normal states, with which a topological chiral Fulde-Ferrell superfluid with a broad topological region is predicted for the present system. This generic theorem is also useful for condensed matter physics and materials science in the search for new topological superconductors.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Learning to Rank based on Analogical Reasoning, Abstract: Object ranking or "learning to rank" is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. In this paper, we propose a new approach to object ranking based on principles of analogical reasoning. More specifically, our inference pattern is formalized in terms of so-called analogical proportions and can be summarized as follows: Given objects $A,B,C,D$, if object $A$ is known to be preferred to $B$, and $C$ relates to $D$ as $A$ relates to $B$, then $C$ is (supposedly) preferred to $D$. Our method applies this pattern as a main building block and combines it with ideas and techniques from instance-based learning and rank aggregation. Based on first experimental results for data sets from various domains (sports, education, tourism, etc.), we conclude that our approach is highly competitive. It appears to be specifically interesting in situations in which the objects are coming from different subdomains, and which hence require a kind of knowledge transfer.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
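The inference pattern described above can be sketched concretely. Arithmetic proportions (agreement of difference vectors) are one common formalization of "C relates to D as A relates to B"; the scoring function and toy feature vectors below are illustrative assumptions, not the paper's exact model.

```python
import math

def analogy_strength(a, b, c, d):
    # degree to which a : b :: c : d holds, via agreement of the difference vectors
    return math.exp(-sum((ai - bi - (ci - di)) ** 2
                         for ai, bi, ci, di in zip(a, b, c, d)))

def infer_preference(known_pairs, c, d):
    # transfer: if A is preferred to B and C : D relates as A : B, vote for C over D
    score = sum(analogy_strength(a, b, c, d) for a, b in known_pairs)
    rev = sum(analogy_strength(a, b, d, c) for a, b in known_pairs)
    return "C" if score > rev else "D"

# toy objects whose preference is driven by the first feature
known = [((3.0, 1.0), (1.0, 1.0))]                          # A preferred to B; A - B = (2, 0)
winner = infer_preference(known, (5.0, 2.0), (3.0, 2.0))    # C - D = (2, 0) matches A - B
```

Aggregating such votes over many known pairs, and then rank-aggregating the pairwise outcomes, gives the flavor of the full method.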
Title: Modeling Spatial Overdispersion with the Generalized Waring Process, Abstract: Modeling spatial overdispersion requires point process models whose finite-dimensional distributions are overdispersed relative to the Poisson. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. Though processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy all three required properties simultaneously. Indeed, Diggle and Milne conjectured that no negative binomial model can satisfy all three. In light of this, we change perspective and construct a new process based on a different overdispersed count model, the Generalized Waring Distribution. While comparably tractable and flexible to negative binomial processes, the Generalized Waring process is shown to possess all required properties and, additionally, to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Sparse covariance matrix estimation in high-dimensional deconvolution, Abstract: We study the estimation of the covariance matrix $\Sigma$ of a $p$-dimensional normal random vector based on $n$ independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise, without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of $\Sigma$. We establish an oracle inequality for these estimators under model misspecification and derive non-asymptotic minimax convergence rates that are shown to be logarithmic in $n/\log p$. We also discuss the estimation of low-rank matrices based on indirect observations as well as the generalization to elliptical distributions. The finite sample performance of the threshold estimators is illustrated in a numerical example.
[ 0, 0, 1, 0, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Dynamic controllers for column synchronization of rotation matrices: a QR-factorization approach, Abstract: In the multi-agent systems setting, this paper addresses continuous-time distributed synchronization of columns of rotation matrices. More precisely, k specific columns shall be synchronized and only the corresponding k columns of the relative rotations between the agents are assumed to be available for the control design. When one specific column is considered, the problem is equivalent to synchronization on the (d-1)-dimensional unit sphere and when all the columns are considered, the problem is equivalent to synchronization on SO(d). We design dynamic control laws for these synchronization problems. The control laws are based on the introduction of auxiliary variables in combination with a QR-factorization approach. The benefit of this QR-factorization approach is that we can decouple the dynamics for the k columns from the remaining d-k ones. Under the control scheme, the closed loop system achieves almost global convergence to synchronization for quasi-strong interaction graph topologies.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: The ELEGANT NMR Spectrometer, Abstract: We present compact and portable in-situ NMR spectrometers that can be dipped into the liquid to be measured, are easily maintained, and use affordable coil constructions and electronics, together with an apparatus to recover depleted magnets. These instruments enable a new real-time processing method for NMR spectrum acquisition that remains stable despite magnetic field fluctuations.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Unbiased Multi-index Monte Carlo, Abstract: We introduce a new class of Monte Carlo based approximations of expectations of random variables whose laws are available only via certain discretizations. Sampling from the discretized versions of these laws can typically introduce a bias. In this paper, we show how to remove that bias by introducing a new version of multi-index Monte Carlo (MIMC) that has the added advantage of reducing the computational effort, relative to i.i.d. sampling from the most precise discretization, for a given level of error. We cover extensions of results regarding variance and optimality criteria for the new approach. We apply the methodology to the problem of computing an unbiased mollified version of the solution of a partial differential equation with random coefficients. A second application concerns the Bayesian inference (the smoothing problem) of an infinite dimensional signal modelled by the solution of a stochastic partial differential equation that is observed on a discrete space grid and at discrete times. Both applications are complemented by numerical simulations.
[ 0, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics", "Computer Science" ]
Title: Asymptotic limit and decay estimates for a class of dissipative linear hyperbolic systems in several dimensions, Abstract: In this paper, we study the large-time behavior of solutions to a class of partially dissipative linear hyperbolic systems with applications in velocity-jump processes in several dimensions. Given integers $n,d\ge 1$, let $\mathbf A:=(A^1,\dots,A^d)\in (\mathbb R^{n\times n})^d$ be a matrix-vector, where $A^j\in\mathbb R^{n\times n}$, and let $B\in \mathbb R^{n\times n}$ be a matrix, not required to be symmetric, with a single zero eigenvalue. We consider the Cauchy problem for linear $n\times n$ systems having the form \begin{equation*} \partial_{t}u+\mathbf A\cdot \nabla_{\mathbf x} u+Bu=0,\qquad (\mathbf x,t)\in \mathbb R^d\times \mathbb R_+. \end{equation*} Under appropriate assumptions, we show that the solution $u$ is decomposed into $u=u^{(1)}+u^{(2)}$, where $u^{(1)}$ has the asymptotic profile which is the solution, denoted by $U$, of a parabolic equation and $u^{(1)}-U$ decays at the rate $t^{-\frac d2(\frac 1q-\frac 1p)-\frac 12}$ as $t\to +\infty$ in any $L^p$-norm, and $u^{(2)}$ decays exponentially in $L^2$-norm, provided $u(\cdot,0)\in L^q(\mathbb R^d)\cap L^2(\mathbb R^d)$ for $1\le q\le p\le \infty$. Moreover, $u^{(1)}-U$ decays at the optimal rate $t^{-\frac d2(\frac 1q-\frac 1p)-1}$ as $t\to +\infty$ if the system satisfies a symmetry property. The main proofs are based on asymptotic expansions of the solution $u$ in the frequency space and the Fourier analysis.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Selecting Representative Examples for Program Synthesis, Abstract: Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, program synthesis is commonly formulated as a constraint satisfaction problem, where input-output examples are encoded as constraints and solved with a constraint solver. A key challenge of this formulation is scalability: while constraint solvers work well with a few well-chosen examples, a large set of examples can incur significant overhead in both time and memory. We describe a method to discover a subset of examples that is both small and representative: the subset is constructed iteratively, using a neural network to predict the probability of unchosen examples conditioned on the chosen examples in the subset, and greedily adding the least probable example. We empirically evaluate the representativeness of the subsets constructed by our method, and demonstrate such subsets can significantly improve synthesis time and stability.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Semistable rank 2 sheaves with singularities of mixed dimension on $\mathbb{P}^3$, Abstract: We describe new irreducible components of the Gieseker-Maruyama moduli scheme $\mathcal{M}(3)$ of semistable rank 2 coherent sheaves with Chern classes $c_1=0,\ c_2=3,\ c_3=0$ on $\mathbb{P}^3$, general points of which correspond to sheaves whose singular loci contain components of dimensions both 0 and 1. These sheaves are produced by elementary transformations of stable reflexive rank 2 sheaves with $c_1=0,\ c_2=2,\ c_3=2$ or 4 along a disjoint union of a projective line and a collection of points in $\mathbb{P}^3$. The constructed families of sheaves provide first examples of irreducible components of the Gieseker-Maruyama moduli scheme such that their general sheaves have singularities of mixed dimension.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: From jamming to collective cell migration through a boundary induced transition, Abstract: Cell monolayers provide an interesting example of active matter, exhibiting a phase transition from a flowing to jammed state as they age. Here we report experiments and numerical simulations illustrating how a jammed cellular layer rapidly reverts to a flowing state after a wound. Quantitative comparison between experiments and simulations shows that cells change their self-propulsion and alignment strength so that the system crosses a phase transition line, which we characterize by finite-size scaling in an active particle model. This wound-induced unjamming transition is found to occur generically in epithelial, endothelial and cancer cells.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: An Approximate Bayesian Long Short-Term Memory Algorithm for Outlier Detection, Abstract: Long Short-Term Memory networks trained with gradient descent and back-propagation have received great success in various applications. However, point estimation of the weights of the networks is prone to over-fitting problems and lacks important uncertainty information associated with the estimation. Moreover, exact Bayesian neural network methods are intractable and non-applicable for real-world applications. In this study, we propose an approximate estimation of the weight uncertainty using the Ensemble Kalman Filter, which is easily scalable to a large number of weights. Furthermore, we optimize the covariance of the noise distribution in the ensemble update step using maximum likelihood estimation. To assess the proposed algorithm, we apply it to outlier detection in five real-world events retrieved from the Twitter platform.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Principal Floquet subspaces and exponential separations of type II with applications to random delay differential equations, Abstract: This paper deals with the study of principal Lyapunov exponents, principal Floquet subspaces, and exponential separation for positive random linear dynamical systems in ordered Banach spaces. The main contribution lies in the introduction of a new type of exponential separation, called of type II, important for its application to nonautonomous random differential equations with delay. Under weakened assumptions, the existence of an exponential separation of type II in an abstract general setting is shown, and an illustration of its application to dynamical systems generated by scalar linear random delay differential equations with finite delay is given.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries, Abstract: In this article, we develop methods for estimating a low rank tensor from noisy observations on a subset of its entries to achieve both statistical and computational efficiencies. There has been a lot of recent interest in this problem of noisy tensor completion. Much of the attention has been focused on the fundamental computational challenges often associated with problems involving higher order tensors, yet very little is known about their statistical performance. To fill in this void, in this article, we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm, which suggest significant room for improvement over the existing approaches. Furthermore, we propose a polynomial-time computable estimating procedure based upon power iteration and a second-order spectral initialization that achieves the optimal rates of convergence. Our method is fairly easy to implement and numerical experiments are presented to further demonstrate the practical merits of our estimator.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics", "Computer Science" ]
Title: Model equations and structures formation for the media with memory, Abstract: We propose new types of models of the appearance of small- and large-scale structures in media with memory, including a hyperbolic modification of the Navier-Stokes equations and a class of dynamical low-dimensional models with memory effects. On the basis of computer modeling, the formation of the small-scale structures and collapses and the appearance of new chaotic solutions are demonstrated. Possibilities of the application of some proposed models to the description of the burst-type processes and collapses on the Sun are discussed.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Twistor theory at fifty: from contour integrals to twistor strings, Abstract: We review aspects of twistor theory, its aims and achievements spanning the last five decades. In the twistor approach, space--time is secondary, with events being derived objects that correspond to compact holomorphic curves in a complex three--fold -- the twistor space. After giving an elementary construction of this space we demonstrate how solutions to linear and nonlinear equations of mathematical physics: anti-self-duality (ASD) equations on Yang--Mills or conformal curvature can be encoded into twistor cohomology. These twistor correspondences yield explicit examples of Yang--Mills and gravitational instantons, which we review. They also underlie the twistor approach to integrability: the solitonic systems arise as symmetry reductions of ASD Yang--Mills equations, and Einstein--Weyl dispersionless systems are reductions of ASD conformal equations. We then review the holomorphic string theories in twistor and ambitwistor spaces, and explain how these theories give rise to remarkable new formulae for the computation of quantum scattering amplitudes. Finally we discuss the Newtonian limit of twistor theory, and its possible role in Penrose's proposal for a role of gravity in quantum collapse of a wave function.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: The G-centre and gradable derived equivalences, Abstract: We propose a generalisation for the notion of the centre of an algebra in the setup of algebras graded by an arbitrary abelian group G. Our generalisation, which we call the G-centre, is designed to control the endomorphism category of the grading shift functors. We show that the G-centre is preserved by gradable derived equivalences given by tilting modules. We also discuss links with existing notions in superalgebra theory and apply our results to derived equivalences of superalgebras.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Analysis of nonsmooth stochastic approximation: the differential inclusion approach, Abstract: In this paper we address the convergence of stochastic approximation when the functions to be minimized are not convex and nonsmooth. We show that the "mean-limit" approach to the convergence which leads, for smooth problems, to the ODE approach can be adapted to the non-smooth case. The limiting dynamical system may be shown to be, under appropriate assumption, a differential inclusion. Our results expand earlier works in this direction by Benaim et al. (2005) and provide a general framework for proving convergence for unconstrained and constrained stochastic approximation problems, with either explicit or implicit updates. In particular, our results allow us to establish the convergence of stochastic subgradient and proximal stochastic gradient descent algorithms arising in a large class of deep learning and high-dimensional statistical inference with sparsity inducing penalties.
[ 0, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics", "Computer Science" ]
Title: Quickest Change Detection under Transient Dynamics: Theory and Asymptotic Analysis, Abstract: The problem of quickest change detection (QCD) under transient dynamics is studied, where the change from the initial distribution to the final persistent distribution does not happen instantaneously, but after a series of transient phases. The observations within the different phases are generated by different distributions. The objective is to detect the change as quickly as possible, while controlling the average run length (ARL) to false alarm, when the durations of the transient phases are completely unknown. Two algorithms are considered, the dynamic Cumulative Sum (CuSum) algorithm, proposed in earlier work, and a newly constructed weighted dynamic CuSum algorithm. Both algorithms admit recursions that facilitate their practical implementation, and they are adaptive to the unknown transient durations. Specifically, their asymptotic optimality is established with respect to both Lorden's and Pollak's criteria as the ARL to false alarm and the durations of the transient phases go to infinity at any relative rate. Numerical results are provided to demonstrate the adaptivity of the proposed algorithms, and to validate the theoretical results.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks, Abstract: This work demonstrates the potential of deep reinforcement learning techniques for transmit power control in emerging and future wireless networks. Various techniques have been proposed in the literature to find near-optimal power allocations, often by solving a challenging optimization problem. Most of these algorithms are not scalable to large networks in real-world scenarios because of their computational complexity and instantaneous cross-cell channel state information (CSI) requirement. In this paper, a model-free distributively executed dynamic power allocation scheme is developed based on deep reinforcement learning. Each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling (with weights that are changing over time). Both random variations and delays in the CSI are inherently addressed using deep Q-learning. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents. This work indicates that deep reinforcement learning based radio resource management can be very fast and deliver highly competitive performance, especially in practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: The nature and origin of heavy tails in retweet activity, Abstract: Modern social media platforms facilitate the rapid spread of information online. Modelling phenomena such as social contagion and information diffusion are contingent upon a detailed understanding of the information-sharing processes. In Twitter, an important aspect of this occurs with retweets, where users rebroadcast the tweets of other users. To improve our understanding of how these distributions arise, we analyse the distribution of retweet times. We show that a power law with exponential cutoff provides a better fit than the power laws previously suggested. We explain this fit through the burstiness of human behaviour and the priorities individuals place on different tasks.
[ 1, 1, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines, Abstract: We consider an adaptive algorithm for finite element methods for the isogeometric analysis (IGAFEM) of elliptic (possibly non-symmetric) second-order partial differential equations in arbitrary space dimension $d\ge2$. We employ hierarchical B-splines of arbitrary degree and different order of smoothness. We propose a refinement strategy to generate a sequence of locally refined meshes and corresponding discrete solutions. Adaptivity is driven by some weighted residual a posteriori error estimator. We prove linear convergence of the error estimator (resp. the sum of energy error plus data oscillations) with optimal algebraic rates. Numerical experiments underpin the theoretical findings.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Calibration for Weak Variance-Alpha-Gamma Processes, Abstract: The weak variance-alpha-gamma process is a multivariate Lévy process constructed by weakly subordinating Brownian motion, possibly with correlated components, with an alpha-gamma subordinator. It generalises the variance-alpha-gamma process of Semeraro constructed by traditional subordination. We compare three calibration methods for the weak variance-alpha-gamma process: method of moments, maximum likelihood estimation (MLE) and digital moment estimation (DME). We derive a condition for Fourier invertibility needed to apply MLE and show in our simulations that MLE produces a better fit when this condition holds, while DME produces a better fit when it is violated. We also find that the weak variance-alpha-gamma process exhibits a wider range of dependence and produces a significantly better fit than the variance-alpha-gamma process on an S&P500-FTSE100 data set, and that DME produces the best fit in this situation.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance", "Statistics", "Mathematics" ]
Title: Boundaries as an Enhancement Technique for Physical Layer Security, Abstract: In this paper, we study the receiver performance with physical layer security in a Poisson field of interferers. We compare the performance in two deployment scenarios: (i) the receiver is located at the corner of a quadrant, (ii) the receiver is located in the infinite plane. When the channel state information (CSI) of the eavesdropper is not available at the transmitter, we calculate the probability of secure connectivity using the Wyner coding scheme, and we show that hiding the receiver at the corner is beneficial at high rates of the transmitted codewords and detrimental at low transmission rates. When the CSI is available, we show that the average secrecy capacity is higher when the receiver is located at the corner, even if the intensity of interferers in this case is four times higher than the intensity of interferers in the bulk. Therefore boundaries can also be used as a secrecy enhancement technique for high data rate applications.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Physics" ]
Title: Magnetically induced Ferroelectricity in Bi$_2$CuO$_4$, Abstract: The tetragonal copper oxide Bi$_2$CuO$_4$ has an unusual crystal structure with a three-dimensional network of well separated CuO$_4$ plaquettes. This material was recently predicted to host electronic excitations with an unconventional spectrum and the spin structure of its magnetically ordered state appearing at T$_N$ $\sim$43 K remains controversial. Here we present the results of detailed studies of specific heat, magnetic and dielectric properties of Bi$_2$CuO$_4$ single crystals grown by the floating zone technique, combined with the polarized neutron scattering and high-resolution X-ray measurements. Our polarized neutron scattering data show Cu spins are parallel to the $ab$ plane. Below the onset of the long range antiferromagnetic ordering we observe an electric polarization induced by an applied magnetic field, which indicates inversion symmetry breaking by the ordered state of Cu spins. For the magnetic field applied perpendicular to the tetragonal axis, the spin-induced ferroelectricity is explained in terms of the linear magnetoelectric effect that occurs in a metastable magnetic state. A relatively small electric polarization induced by the field parallel to the tetragonal axis may indicate a more complex magnetic ordering in Bi$_2$CuO$_4$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Probabilistic Matrix Factorization for Automated Machine Learning, Abstract: In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines consisting of data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we tackle this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Using probabilistic matrix factorization techniques and acquisition functions from Bayesian optimization, we exploit experiments performed in hundreds of different datasets to guide the exploration of the space of possible pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state-of-the-art.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Stochastic Multi-objective Optimization on a Budget: Application to multi-pass wire drawing with quantified uncertainties, Abstract: Design optimization of engineering systems with multiple competing objectives is a painstakingly tedious process, especially when the objective functions are expensive-to-evaluate computer codes with parametric uncertainties. The effectiveness of the state-of-the-art techniques is greatly diminished because they require a large number of objective evaluations, which makes them impractical for problems of the above kind. Bayesian global optimization (BGO) has managed to deal with these challenges in solving single-objective optimization problems and has recently been extended to multi-objective optimization (MOO). BGO models the objectives via probabilistic surrogates and uses the epistemic uncertainty to define an information acquisition function (IAF) that quantifies the merit of evaluating the objective at new designs. This iterative data acquisition process continues until a stopping criterion is met. The most commonly used IAF for MOO is the expected improvement over the dominated hypervolume (EIHV), which in its original form is unable to deal with parametric uncertainties or measurement noise. In this work, we provide a systematic reformulation of EIHV to deal with stochastic MOO problems. The primary contribution of this paper lies in being able to filter out the noise and reformulate the EIHV without having to observe or estimate the stochastic parameters. A benefit of the probabilistic nature of our methodology is that it enables us to characterize our confidence about the predicted Pareto front. We verify and validate the proposed methodology by applying it to synthetic test problems with known solutions. We demonstrate our approach on an industrial problem of die pass design for a steel wire drawing process.
[ 0, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Generation High resolution 3D model from natural language by Generative Adversarial Network, Abstract: We present a method of generating high-resolution 3D shapes from natural language descriptions. To achieve this goal, we propose two steps: generating low-resolution shapes that roughly reflect the text, and generating high-resolution shapes that reflect the details of the text. In a previous paper, the authors presented a method of generating low-resolution shapes. We improve it to generate 3D shapes more faithful to the natural language and test the effectiveness of the method. To generate high-resolution 3D shapes, we use the framework of the Conditional Wasserstein GAN. We propose two separate roles for the Critic, which computes the Wasserstein distance between two probability distributions, so as to either generate high-quality shapes or accelerate the learning speed of the model. To evaluate our approach, we performed a quantitative evaluation with several numerical metrics for the Critic models. Our method is the first to realize the generation of high-quality models by propagating text embedding information to the high-resolution task when generating 3D models.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: On the Synthesis of Guaranteed-Quality Plans for Robot Fleets in Logistics Scenarios via Optimization Modulo Theories, Abstract: In manufacturing, the increasing involvement of autonomous robots in production processes poses new challenges on the production management. In this paper we report on the usage of Optimization Modulo Theories (OMT) to solve certain multi-robot scheduling problems in this area. Whereas currently existing methods are heuristic, our approach guarantees optimality for the computed solution. We do not only present our final method but also its chronological development, and draw some general observations for the development of OMT-based approaches.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: A framework for cost-constrained genome rearrangement under Double Cut and Join, Abstract: The study of genome rearrangement has many flavours, but they all are somehow tied to edit distances on variations of a multi-graph called the breakpoint graph. We study a weighted 2-break distance on Eulerian 2-edge-colored multi-graphs, which generalizes weighted versions of several Double Cut and Join problems, including those on genomes with unequal gene content. We affirm the connection between cycle decompositions and edit scenarios first discovered with the Sorting By Reversals problem. Using this we show that the problem of finding a parsimonious scenario of minimum cost on an Eulerian 2-edge-colored multi-graph - with a general cost function for 2-breaks - can be solved by decomposing the problem into independent instances on simple alternating cycles. For breakpoint graphs, and a more constrained cost function, based on coloring the vertices, we give a polynomial-time algorithm for finding a parsimonious 2-break scenario of minimum cost, while showing that finding a non-parsimonious 2-break scenario of minimum cost is NP-Hard.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Computer Science", "Mathematics" ]
Title: Dual combination combination multi switching synchronization of eight chaotic systems, Abstract: In this paper, a novel scheme for synchronizing four drive and four response systems is proposed by the authors. The idea of multi switching and dual combination synchronization is extended to dual combination-combination multi switching synchronization involving eight chaotic systems, and is the first of its kind. Due to the multiple combination of chaotic systems and multi switching, the resultant dynamic behaviour is so complex that, in communication applications, transmission and security of the resultant signal are more effective. Using Lyapunov stability theory, sufficient conditions are achieved and suitable controllers are designed to realise the desired synchronization. Corresponding theoretical analysis is presented and numerical simulations are performed to demonstrate the effectiveness of the proposed scheme.
[ 1, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: An Optimization Based Control Framework for Balancing and Walking: Implementation on the iCub Robot, Abstract: A whole-body torque control framework adapted for balancing and walking tasks is presented in this paper. In the proposed approach, centroidal momentum terms are excluded in favor of a hierarchy of high-priority position and orientation tasks and a low-priority postural task. More specifically, the controller stabilizes the position of the center of mass, the orientation of the pelvis frame, as well as the position and orientation of the feet frames. The low-priority postural task provides reference positions for each joint of the robot. Joint torques and contact forces to stabilize tasks are obtained through quadratic programming optimization. Besides the exclusion of centroidal momentum terms, part of the novelty of the approach lies in the definition of control laws in SE(3) which do not require the use of Euler parameterization. Validation of the framework was achieved in a scenario where the robot kept balance while walking in place. Experiments have been conducted with the iCub robot, in simulation and in real-world experiments.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Binomial transform of products, Abstract: Given two infinite sequences with known binomial transforms, we compute the binomial transform of the product sequence. Various identities are obtained and numerous examples are given involving sequences of special numbers: Harmonic numbers, Bernoulli numbers, Fibonacci numbers, and also Laguerre polynomials.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Programmable DNA-mediated decision maker, Abstract: DNA-mediated computing is a novel technology that seeks to capitalize on the enormous informational capacity of DNA and has tremendous computational ability to compete with the current silicon-mediated computing, due to massive parallelism and unique characteristics inherent in DNA interaction. In this paper, the methodology of DNA-mediated computing is utilized to enrich decision theory, by demonstrating how a novel programmable DNA-mediated normative decision-making apparatus is able to capture rational choice under uncertainty.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: The Effects of Ram Pressure on the Cold Clouds in the Centers of Galaxy Clusters, Abstract: We discuss the effect of ram pressure on the cold clouds in the centers of cool-core galaxy clusters, and in particular, how it reduces cloud velocity and sometimes causes an offset between the cold gas and young stars. The velocities of the molecular gas in both observations and our simulations fall in the range of $100-400$ km/s, much lower than expected if they fall from a few tens of kpc ballistically. If the intra-cluster medium (ICM) is at rest, the ram pressure of the ICM only slightly reduces the velocity of the clouds. When we assume that the clouds are actually "fluffier" because they are co-moving with a warm-hot layer, the velocity becomes smaller. If we also consider the AGN wind in the cluster center by adding a wind profile measured from the simulation, the clouds are further slowed down at small radii, and the resulting velocities are in general agreement with the observations and simulations. Because ram pressure only affects gas but not stars, it can cause a separation between a filament and young stars that formed in the filament as they move through the ICM together. This separation has been observed in Perseus and also exists in our simulations. We show that the star-filament offset combined with line-of-sight velocity measurements can help determine the true motion of the cold gas, and thus distinguish between inflows and outflows.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Astrophysics" ]
Title: Determining Song Similarity via Machine Learning Techniques and Tagging Information, Abstract: The task of determining item similarity is a crucial one in a recommender system. This constitutes the base upon which the recommender system will work to determine which items are more likely to be enjoyed by a user, resulting in more user engagement. In this paper we tackle the problem of determining song similarity based solely on song metadata (such as the performer, and song title) and on tags contributed by users. We evaluate our approach under a series of different machine learning algorithms. We conclude that tf-idf achieves better results than Word2Vec to model the dataset to feature vectors. We also conclude that k-NN models have better performance than SVMs and Linear Regression for this problem.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A critical topology for $L^p$-Carleman classes with $0<p<1$, Abstract: In this paper, we explain a sharp phase transition phenomenon which occurs for $L^p$-Carleman classes with exponents $0<p<1$. In principle, these classes are defined as usual, only the traditional $L^\infty$-bounds are replaced by corresponding $L^p$-bounds. To mirror the classical definition, we add the feature of dilatation invariance as well, and consider a larger soft-topology space, the $L^p$-Carleman class. A particular degenerate instance is when we obtain the $L^p$-Sobolev spaces, analyzed previously by Peetre, following an initial insight by Douady. Peetre found that these $L^p$-Sobolev spaces are highly degenerate for $0<p<1$. Essentially, the contact is lost between the function and its derivatives. Here, we analyze this degeneracy for the more general $L^p$-Carleman classes defined by a weight sequence. Under some reasonable growth and regularity properties, and a condition on the collection of test functions, we find that there is a sharp boundary, defined in terms of the weight sequence: on the one side, we get Douady-Peetre's phenomenon of "disconnexion" between the function and its derivatives, while on the other, we obtain a collection of highly smooth functions. We also look at the more standard second phase transition, between non-quasianalyticity and quasianalyticity, in the $L^p$ setting, with $0<p<1$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods, Abstract: We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from the micro-CT imaging of porous rock and approximate a (on {\mu}m scale) smooth domain with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other directions and curved surfaces yield a jagged/rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90 degrees, jagged surfaces have no impact on the contact angle. However, a prescribed contact angle smaller or larger than 90 degrees on jagged voxel surfaces is amplified in either direction. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference of the voxel-set surface area with the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method, however, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Obstacle Avoidance Using Stereo Camera, Abstract: In this paper we present a novel method for obstacle avoidance using a stereo camera. The conventional obstacle avoidance methods and their limitations are discussed. A new algorithm is developed for real-time obstacle avoidance which responds faster to unexpected obstacles. In this approach the depth map is divided into an optimized number of regions and the minimum depth within each region is assigned as the depth of that region. A fuzzy controller is designed to create the drive commands for the robot/quadcopter. The system was tested on multiple paths with different obstacles and the results demonstrated the high accuracy of the developed system.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Detecting Outliers in Data with Correlated Measures, Abstract: Advances in sensor technology have enabled the collection of large-scale datasets. Such datasets can be extremely noisy and often contain a significant amount of outliers that result from sensor malfunction or human operation faults. In order to utilize such data for real-world applications, it is critical to detect outliers so that models built from these datasets will not be skewed by outliers. In this paper, we propose a new outlier detection method that utilizes the correlations in the data (e.g., taxi trip distance vs. trip time). Different from existing outlier detection methods, we build a robust regression model that explicitly models the outliers and detects outliers simultaneously with the model fitting. We validate our approach on real-world datasets against methods specifically designed for each dataset as well as state-of-the-art outlier detectors. Our outlier detection method achieves better performance, demonstrating the robustness and generality of our method. Finally, we report interesting case studies on some outliers that result from atypical events.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Active bialkali photocathodes on free-standing graphene substrates, Abstract: The hexagonal structure of graphene gives rise to the property of gas impermeability, motivating its investigation for a new application: protection of semiconductor photocathodes in electron accelerators. These materials are extremely susceptible to degradation in efficiency through multiple mechanisms related to contamination from the local imperfect vacuum environment of the host photoinjector. Few-layer graphene has been predicted to permit a modified photoemission response of protected photocathode surfaces, and recent experiments of single-layer graphene on copper have begun to confirm these predictions for single crystal metallic photocathodes. Unlike metallic photoemitters, the integration of an ultra-thin graphene barrier film with conventional semiconductor photocathode growth processes is not straightforward. A first step toward addressing this challenge is the growth and characterization of technologically relevant, high quantum efficiency bialkali photocathodes grown on ultra-thin free-standing graphene substrates. Photocathode growth on free-standing graphene provides the opportunity to integrate these two materials and study their interaction. Specifically, spectral response features and photoemission stability of cathodes grown on graphene substrates are compared to those deposited on established substrates. In addition we observed an increase of work function for the graphene encapsulated bialkali photocathode surfaces, which is predicted by our calculations. The results provide a unique demonstration of bialkali photocathodes on free-standing substrates, and indicate promise towards our goal of fabricating high-performance graphene encapsulated photocathodes with enhanced lifetime for accelerator applications.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Materials Science" ]
Title: Reconstruction of a compact Riemannian manifold from the scattering data of internal sources, Abstract: Given a smooth non-trapping compact manifold with strictly convex boundary, we consider an inverse problem of reconstructing the manifold from the scattering data initiated from internal sources. These data consist of the exit directions of geodesics that are emanated from interior points of the manifold. We show that under a certain generic assumption on the metric, one can reconstruct an isometric copy of the manifold from such scattering data measured on the boundary.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Classification of $δ(2,n-2)$-ideal Lagrangian submanifolds in $n$-dimensional complex space forms, Abstract: It was proven in [B.-Y. Chen, F. Dillen, J. Van der Veken and L. Vrancken, Curvature inequalities for Lagrangian submanifolds: the final solution, Differ. Geom. Appl. 31 (2013), 808-819] that every Lagrangian submanifold $M$ of a complex space form $\tilde M^{n}(4c)$ of constant holomorphic sectional curvature $4c$ satisfies the following optimal inequality: \begin{align*} \delta(2,n-2) \leq \frac{n^2(n-2)}{4(n-1)} H^2 + 2(n-2) c, \end{align*} where $H^2$ is the squared mean curvature and $\delta(2,n-2)$ is a $\delta$-invariant on $M$. In this paper we classify Lagrangian submanifolds of complex space forms $\tilde M^{n}(4c)$, $n \geq 5$, which satisfy the equality case of this inequality at every point.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Dimensional Analysis in Economics: A Study of the Neoclassical Economic Growth Model, Abstract: The fundamental purpose of the present research article is to introduce the basic principles of Dimensional Analysis in the context of the neoclassical economic theory, in order to apply such principles to the fundamental relations that underlie most models of economic growth. In particular, basic instruments from Dimensional Analysis are used to evaluate the analytical consistency of the Neoclassical economic growth model. The analysis shows that an adjustment to the model is required in such a way that the principle of dimensional homogeneity is satisfied.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance", "Mathematics" ]
Title: A high precision semi-analytic mass function, Abstract: In this paper, extending past works of Del Popolo, we show how a high precision mass function (MF) can be obtained using the excursion set approach and an improved barrier taking implicitly into account a non-zero cosmological constant, the angular momentum acquired by tidal interaction of proto-structures and dynamical friction. In the case of the $\Lambda$CDM paradigm, we find that our MF is in agreement at the 3\% level with Klypin's Bolshoi simulation, in the mass range $M_{\rm vir} = 5 \times 10^9 h^{-1} M_{\odot} -- 5 \times 10^{14} h^{-1} M_{\odot}$ and redshift range $0 \lesssim z \lesssim 10$. For $z=0$ we also compared our MF to several fitting formulae, and found particularly good agreement with Bhattacharya's, within 3\% in the mass range $10^{12}-10^{16} h^{-1} M_{\odot}$. Moreover, we discuss the validity of our MF for different cosmologies.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Fast Compressed Self-Indexes with Deterministic Linear-Time Construction, Abstract: We introduce a compressed suffix array representation that, on a text $T$ of length $n$ over an alphabet of size $\sigma$, can be built in $O(n)$ deterministic time, within $O(n\log\sigma)$ bits of working space, and counts the number of occurrences of any pattern $P$ in $T$ in time $O(|P| + \log\log_w \sigma)$ on a RAM machine of $w=\Omega(\log n)$-bit words. This new index outperforms all the other compressed indexes that can be built in linear deterministic time, and some others. The only faster indexes can be built in linear time only in expectation, or require $\Theta(n\log n)$ bits. We also show that, by using $O(n\log\sigma)$ bits, we can build in linear time an index that counts in time $O(|P|/\log_\sigma n + \log n(\log\log n)^2)$, which is RAM-optimal for $w=\Theta(\log n)$ and sufficiently long patterns.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Solvability of the operator Riccati equation in the Feshbach case, Abstract: We consider a bounded block operator matrix of the form $$ L=\left(\begin{array}{cc} A & B \\ C & D \end{array} \right), $$ where the main-diagonal entries $A$ and $D$ are self-adjoint operators on Hilbert spaces $H_{_A}$ and $H_{_D}$, respectively; the coupling $B$ maps $H_{_D}$ to $H_{_A}$ and $C$ is an operator from $H_{_A}$ to $H_{_D}$. It is assumed that the spectrum $\sigma_{_D}$ of $D$ is absolutely continuous and uniform, being presented by a single band $[\alpha,\beta]\subset\mathbb{R}$, $\alpha<\beta$, and the spectrum $\sigma_{_A}$ of $A$ is embedded into $\sigma_{_D}$, that is, $\sigma_{_A}\subset(\alpha,\beta)$. We formulate conditions under which there are bounded solutions to the operator Riccati equations associated with the complexly deformed block operator matrix $L$; in such a case the deformed operator matrix $L$ admits a block diagonalization. The same conditions also ensure the Markus-Matsaev-type factorization of the Schur complement $M_{_A}(z)=A-z-B(D-z)^{-1}C$ analytically continued onto the unphysical sheet(s) of the complex $z$ plane adjacent to the band $[\alpha,\beta]$. We prove that the operator roots of the continued Schur complement $M_{_A}$ are explicitly expressed through the respective solutions to the deformed Riccati equations.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: A remark on oscillatory integrals associated with fewnomials, Abstract: We prove that the $L^2$ bound of an oscillatory integral associated with a polynomial depends only on the number of monomials that this polynomial consists of.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: On the periodicity problem of residual r-Fubini sequences, Abstract: For any positive integer $r$, the $r$-Fubini number with parameter $n$, denoted by $F_{n,r}$, is equal to the number of ways that the elements of a set with $n+r$ elements can be weakly ordered such that the $r$ least elements are in distinct orders. In this article we focus on the sequence of residues of the $r$-Fubini numbers modulo a positive integer $s$, show that this sequence is periodic, and then exhibit how to calculate its period length. As an additional result, an explicit formula for the $r$-Stirling numbers is obtained, which is frequently used in the calculations.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: A Measurement of CMB Cluster Lensing with SPT and DES Year 1 Data, Abstract: Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with mean redshift of $\bar{z} = 0.45$. We detect lensing of the CMB by the galaxy clusters at $8.1\sigma$ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly $17\%$ precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Efficient injection from large telescopes into single-mode fibres: Enabling the era of ultra-precision astronomy, Abstract: Photonic technologies offer numerous advantages for astronomical instruments such as spectrographs and interferometers owing to their small footprints and diverse range of functionalities. Operating at the diffraction-limit, it is notoriously difficult to efficiently couple such devices directly with large telescopes. We demonstrate that with careful control of both the non-ideal pupil geometry of a telescope and residual wavefront errors, efficient coupling with single-mode devices can indeed be realised. A fibre injection was built within the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument. Light was coupled into a single-mode fibre operating in the near-IR (J-H bands) which was downstream of the extreme adaptive optics system and the pupil apodising optics. A coupling efficiency of 86% of the theoretical maximum limit was achieved at 1550 nm for a diffraction-limited beam in the laboratory, and was linearly correlated with Strehl ratio. The coupling efficiency was constant to within <30% in the range 1250-1600 nm. Preliminary on-sky data with a Strehl ratio of 60% in the H-band produced a coupling efficiency into a single-mode fibre of ~50%, consistent with expectations. The coupling was >40% for 84% of the time and >50% for 41% of the time. The laboratory results allow us to forecast that extreme adaptive optics levels of correction (Strehl ratio >90% in H-band) would allow coupling of >67% (of the order of coupling to multimode fibres currently). For Strehl ratios <20%, few-port photonic lanterns become a superior choice but the signal-to-noise must be considered. 
These results illustrate a clear path to efficient on-sky coupling into a single-mode fibre, which could be used to realise modal-noise-free radial velocity machines, very-long-baseline optical/near-IR interferometers and/or simply exploit photonic technologies in future instrument design.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: An upwind method for genuine weakly hyperbolic systems, Abstract: In this article, we develop an upwind scheme based on Flux Difference Splitting using Jordan canonical forms to simulate genuine weakly hyperbolic systems. The theory of Jordan canonical forms is used to complete the defective set of linearly independent eigenvectors. The proposed FDS-J scheme is capable of recognizing various shocks accurately.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Semi-tied Units for Efficient Gating in LSTM and Highway Networks, Abstract: Gating is a key technique used for integrating information from multiple sources by long short-term memory (LSTM) models and has recently also been applied to other models such as the highway network. Although gating is powerful, it is rather expensive in terms of both computation and storage as each gating unit uses a separate full weight matrix. This issue can be severe since several gates can be used together in e.g. an LSTM cell. This paper proposes a semi-tied unit (STU) approach to solve this efficiency issue, which uses one shared weight matrix to replace those in all the units in the same layer. The approach is termed "semi-tied" since extra parameters are used to separately scale each of the shared output values. These extra scaling factors are associated with the network activation functions and result in the use of parameterised sigmoid, hyperbolic tangent, and rectified linear unit functions. Speech recognition experiments using British English multi-genre broadcast data showed that using STUs can reduce the calculation and storage cost by a factor of three for highway networks and four for LSTMs, while giving similar word error rates to the original models.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
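The semi-tied-unit idea from the abstract above — one shared weight matrix for all gates in a layer, plus cheap per-gate scaling — can be illustrated with a small sketch. This is a hedged toy version, not the paper's exact parameterisation (the paper folds the scaling factors into the activation functions); all dimensions and data here are invented.

```python
import math
import random

random.seed(0)

d_in, d_out, n_gates = 4, 3, 3   # e.g. input/forget/output gates of an LSTM cell

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Semi-tied: one shared matrix for the whole layer, plus a small
# per-gate scaling vector (the only gate-specific parameters).
shared_W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]
scales = [[random.gauss(0, 1) for _ in range(d_out)] for _ in range(n_gates)]

def stu_gates(x):
    h = matvec(shared_W, x)          # the expensive product is computed once
    return [[sigmoid(s * hi) for s, hi in zip(scale, h)] for scale in scales]

# Parameter count: tying replaces n_gates full matrices with one matrix
# plus n_gates scaling vectors, roughly a factor-of-n_gates saving.
tied_params = d_in * d_out + n_gates * d_out
untied_params = n_gates * d_in * d_out
```

The computational saving mirrors the storage saving: one matrix-vector product serves every gate, instead of one per gate.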
Title: Achieving reliable UDP transmission at 10 Gb/s using BSD sockets for data acquisition systems, Abstract: User Datagram Protocol (UDP) is a commonly used protocol for data transmission in small embedded systems. UDP as such is unreliable and packet losses can occur. The achievable data rates can suffer if optimal packet sizes are not used. The alternative, Transmission Control Protocol (TCP), guarantees the ordered delivery of data and automatically adjusts transmission to match the capability of the transmission link. Nevertheless UDP is often favored over TCP due to its simplicity and small memory and instruction footprints. Both UDP and TCP are implemented in all larger operating systems and commercial embedded frameworks. In addition, UDP is also supported on a variety of small hardware platforms such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs). This is not so common for TCP. This paper describes how high speed UDP based data transmission with very low packet error ratios was achieved. The near-reliable communication link is used in a data acquisition (DAQ) system for the next-generation extremely intense neutron source, the European Spallation Source. This paper presents measurements of UDP performance and reliability as achieved by employing several optimizations. The measurements were performed on Xeon E5 based CentOS (Linux) servers. The measured data rates are very close to the 10 Gb/s line rate, and zero packet loss was achieved. The performance was obtained utilizing a single processor core as transmitter and a single core as receiver. The results show that support for transmitting large data packets is a key parameter for good performance. Optimizations for throughput are: MTU, packet sizes, tuning Linux kernel parameters, thread affinity, core locality and efficient timers.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science" ]
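The abstract above identifies large packets (MTU) as a key throughput parameter. A quick back-of-the-envelope calculation shows why: fixed per-packet overhead shrinks relative to payload as datagrams grow. The header sizes below are standard Ethernet/IP/UDP figures, not numbers from the paper.

```python
# Standard per-frame overhead on Ethernet (bytes); not figures from the paper.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_HEADER = 20
UDP_HEADER = 8

def wire_efficiency(payload_bytes):
    """Fraction of the line rate available to UDP payload for a given datagram size."""
    frame = payload_bytes + IP_HEADER + UDP_HEADER + ETH_OVERHEAD
    return payload_bytes / frame

# Maximum UDP payloads for a standard 1500-byte MTU vs a 9000-byte jumbo MTU:
small = wire_efficiency(1472)   # 1500 - 20 (IP) - 8 (UDP)
jumbo = wire_efficiency(8972)   # 9000 - 20 (IP) - 8 (UDP)
```

On a 10 Gb/s link, jumbo frames recover roughly 3--4% of line rate compared with standard frames, and — often more important — cut the per-second packet rate (and thus per-packet CPU cost) by a factor of about six.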
Title: Demo Abstract: CDMA-based IoT Services with Shared Band Operation of LTE in 5G, Abstract: With the vision of deployment of massive Internet-of-Things (IoT) in 5G networks, existing 4G networks and protocols are inefficient at handling sporadic IoT traffic with requirements of low latency, low control overhead and low power. To satisfy these requirements, we propose a design of a PHY/MAC layer using Software Defined Radios (SDRs) that is backward compatible with existing OFDM based LTE protocols and also supports CDMA based transmissions for low power IoT devices. This demo shows our implemented system based on that design and the viability of the proposal under different network scenarios.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Unstable normalized standing waves for the space periodic NLS, Abstract: For the stationary nonlinear Schrödinger equation $-\Delta u+ V(x)u- f(u) = \lambda u$ with periodic potential $V$ we study the existence and stability properties of multibump solutions with prescribed $L^2$-norm. To this end we introduce a new nondegeneracy condition and develop new superposition techniques which allow to match the $L^2$-constraint. In this way we obtain the existence of infinitely many geometrically distinct solutions to the stationary problem. We then calculate the Morse index of these solutions with respect to the restriction of the underlying energy functional to the associated $L^2$-sphere, and we show their orbital instability with respect to the Schrödinger flow. Our results apply in both, the mass-subcritical and the mass-supercritical regime.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Thermodynamics of Higher Order Entropy Corrected Schwarzschild-Beltrami-de Sitter Black Hole, Abstract: In this paper, we consider higher order corrections of the entropy and study the thermodynamical properties of the recently proposed Schwarzschild-Beltrami-de Sitter black hole, which is indeed an exact solution of the Einstein equation with a positive cosmological constant. By using the corrected entropy and Hawking temperature we extract some thermodynamical quantities like the Gibbs and Helmholtz free energies and the heat capacity. We also investigate the first and second laws of thermodynamics. We find that the presence of higher order corrections, which come from thermal fluctuations, may remove some instabilities of the black hole. Also, an unstable-to-stable phase transition is possible in the presence of the first and second order corrections.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Emergence of superconductivity in the cuprates via a universal percolation process, Abstract: A pivotal step toward understanding unconventional superconductors would be to decipher how superconductivity emerges from the unusual normal state upon cooling. In the cuprates, traces of superconducting pairing appear above the macroscopic transition temperature $T_c$, yet extensive investigation has led to disparate conclusions. The main difficulty has been the separation of superconducting contributions from complex normal state behaviour. Here we avoid this problem by measuring the nonlinear conductivity, an observable that is zero in the normal state. We uncover for several representative cuprates that the nonlinear conductivity vanishes exponentially above $T_c$, both with temperature and magnetic field, and exhibits temperature-scaling characterized by a nearly universal scale $T_0$. Attempts to model the response with the frequently evoked Ginzburg-Landau theory are unsuccessful. Instead, our findings are captured by a simple percolation model that can also explain other properties of the cuprates. We thus resolve a long-standing conundrum by showing that the emergence of superconductivity in the cuprates is dominated by their inherent inhomogeneity.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Global sensitivity analysis in the context of imprecise probabilities (p-boxes) using sparse polynomial chaos expansions, Abstract: Global sensitivity analysis aims at determining which uncertain input parameters of a computational model primarily drive the variance of the output quantities of interest. Sobol' indices are now routinely applied in this context when the input parameters are modelled by classical probability theory using random variables. In many practical applications however, input parameters are affected by both aleatory and epistemic (so-called polymorphic) uncertainty, for which imprecise probability representations have become popular in the last decade. In this paper, we consider that the uncertain input parameters are modelled by parametric probability boxes (p-boxes). We propose interval-valued (so-called imprecise) Sobol' indices as an extension of their classical definition. An original algorithm based on the concepts of augmented space, isoprobabilistic transforms and sparse polynomial chaos expansions is devised to allow for the computation of these imprecise Sobol' indices at extremely low cost. In particular, phantom points are introduced to build an experimental design in the augmented space (necessary for the calibration of the sparse PCE), which leads to a smart reuse of runs of the original computational model. The approach is illustrated on three analytical and engineering examples, which allows us to validate the proposed algorithms against brute-force double-loop Monte Carlo simulation.
[ 0, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics", "Computer Science" ]
Title: Agile Software Development Methods: Review and Analysis, Abstract: Agile - denoting "the quality of being agile, readiness for motion, nimbleness, activity, dexterity in motion" - software development methods are attempting to offer an answer to the eager business community asking for lighter weight along with faster and nimbler software development processes. This is especially the case with the rapidly growing and volatile Internet software industry as well as for the emerging mobile application environment. The new agile methods have evoked a substantial amount of literature and debate. However, academic research on the subject is still scarce, as most existing publications are written by practitioners or consultants. The aim of this publication is to begin filling this gap by systematically reviewing the existing literature on agile software development methodologies. This publication has three purposes. First, it proposes a definition and a classification of agile software development approaches. Second, it analyses ten software development methods that can be characterized as being "agile" against the defined criteria. Third, it compares these methods and highlights their similarities and differences. Based on this analysis, future research needs are identified and discussed.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Deep Learning: A Bayesian Perspective, Abstract: Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), projection pursuit regression (PPR) are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Tunnel-injected sub-260 nm ultraviolet light emitting diodes, Abstract: We report on tunnel-injected deep ultraviolet light emitting diodes (UV LEDs) configured with a polarization engineered Al0.75Ga0.25N/ In0.2Ga0.8N tunnel junction structure. Tunnel-injected UV LED structure enables n-type contacts for both bottom and top contact layers. However, achieving Ohmic contact to wide bandgap n-AlGaN layers is challenging and typically requires high temperature contact metal annealing. In this work, we adopted a compositionally graded top contact layer for non-alloyed metal contact, and obtained a low contact resistance of Rc=4.8x10-5 Ohm cm2 on n-Al0.75Ga0.25N. We also observed a significant reduction in the forward operation voltage from 30.9 V to 19.2 V at 1 kA/cm2 by increasing the Mg doping concentration from 6.2x1018 cm-3 to 1.5x1019 cm-3. Non-equilibrium hole injection into wide bandgap Al0.75Ga0.25N with Eg>5.2 eV was confirmed by light emission at 257 nm. This work demonstrates the feasibility of tunneling hole injection into deep UV LEDs, and provides a novel structural design towards high power deep-UV emitters.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: On the representation of finite convex geometries with convex sets, Abstract: Very recently, Richter and Rogers proved that any convex geometry can be represented by a family of convex polygons in the plane. We generalize their construction and obtain a wide variety of convex shapes for representing convex geometries. We present an Erdos-Szekeres-type obstruction, which answers a question of Czedli negatively; that is, general convex geometries cannot be represented with ellipses in the plane. Moreover, we prove that one cannot even bound the number of common supporting lines of the pairs of the representing convex sets. In higher dimensions we prove that all convex geometries can be represented with ellipsoids.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Resolving ultrafast exciton migration in organic solids at the nanoscale, Abstract: The effectiveness of molecular-based light harvesting relies on transport of optical excitations, excitons, to charge-transfer sites. Measuring exciton migration has, however, been challenging because of the mismatch between nanoscale migration lengths and the diffraction limit. In organic semiconductors, common bulk methods employ a series of films terminated at quenching substrates, altering the spatioenergetic landscape for migration. Here we instead define quenching boundaries all-optically with sub-diffraction resolution, thus characterizing spatiotemporal exciton migration on its native nanometer and picosecond scales without disturbing morphology. By transforming stimulated emission depletion microscopy into a time-resolved ultrafast approach, we measure a 16-nm migration length in CN-PPV conjugated polymer films. Combining these experiments with Monte Carlo exciton hopping simulations shows that migration in CN-PPV films is essentially diffusive because intrinsic chromophore energetic disorder is comparable to inhomogeneous broadening among chromophores. This framework also illustrates general trends across materials. Our new approach's sub-diffraction resolution will enable previously unattainable correlations of local material structure to the nature of exciton migration, applicable not only to photovoltaic or display-destined organic semiconductors but also to explaining the quintessential exciton migration exhibited in photosynthesis.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation, Abstract: We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
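The abstract above relies on Byte Pair Encoding to increase vocabulary overlap between related languages. The classic BPE merge-learning step (repeatedly merging the most frequent adjacent symbol pair) can be sketched as follows; the toy vocabulary is invented for illustration and is not from the paper.

```python
from collections import Counter

def bpe_merges(vocab, num_merges):
    """Learn BPE merges from a word-frequency vocabulary.
    Words are tuples of symbols; each step merges the most frequent adjacent pair."""
    vocab = dict(vocab)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])   # join the pair
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        vocab = merged
    return merges, vocab

# Toy corpus: shared suffixes play the role of subwords shared across
# the two related low-resource languages.
vocab = {tuple("lower"): 5, tuple("lowest"): 2,
         tuple("newer"): 6, tuple("newest"): 3}
merges, segmented = bpe_merges(vocab, 3)
```

Training BPE jointly on both language pairs makes such merges (and hence the source embeddings transferred between models) line up across languages, which is what the paper exploits on top of the Zoph et al. transfer method.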
Title: A Van-Der-Waals picture for metabolic networks from MaxEnt modeling: inherent bistability and elusive coexistence, Abstract: In this work maximum entropy distributions in the space of steady states of metabolic networks are defined upon constraining the first and second moments of the growth rate. Inherent bistability of fast and slow phenotypes, akin to a Van der Waals picture, emerges upon considering control on the average growth (optimization/repression) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of E. coli, where it agrees with some stylized facts on the persister phenotype and provides a quantitative map with metabolic fluxes, opening up the possibility of detecting coexistence from flux data. Preliminary analysis of data for E. coli cultures in standard conditions shows, on the other hand, a degeneracy of the inferred parameters that extends into the coexistence region.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: Controlled dynamic screening of excitonic complexes in 2D semiconductors, Abstract: We report a combined theoretical/experimental study of dynamic screening of excitons in media with frequency-dependent dielectric functions. We develop an analytical model showing that interparticle interactions in an exciton are screened in the range of frequencies from zero to the characteristic binding energy depending on the symmetries and transition energies of that exciton. The problem of the dynamic screening is then reduced to simply solving the Schrodinger equation with an effectively frequency-independent potential. Quantitative predictions of the model are experimentally verified using a test system: neutral, charged and defect-bound excitons in two-dimensional monolayer WS2, screened by metallic, liquid, and semiconducting environments. The screening-induced shifts of the excitonic peaks in photoluminescence spectra are in good agreement with our model.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs, Abstract: We propose NOPOL, an approach to automatic repair of buggy conditional statements (i.e., if-then-else statements). This approach takes a buggy program as well as a test suite as input and generates a patch with a conditional expression as output. The test suite is required to contain passing test cases to model the expected behavior of the program and at least one failing test case that reveals the bug to be repaired. The process of NOPOL consists of three major phases. First, NOPOL employs angelic fix localization to identify expected values of a condition during the test execution. Second, runtime trace collection is used to collect variables and their actual values, including primitive data types and object-oriented features (e.g., nullness checks), to serve as building blocks for patch generation. Third, NOPOL encodes these collected data into an instance of a Satisfiability Modulo Theories (SMT) problem; a feasible solution to the SMT instance is then translated back into a code patch. We evaluate NOPOL on 22 real-world bugs (16 bugs with buggy IF conditions and 6 bugs with missing preconditions) in two large open-source projects, namely Apache Commons Math and Apache Commons Lang. Empirical analysis of these bugs shows that our approach can effectively fix bugs with buggy IF conditions and missing preconditions. We illustrate the capabilities and limitations of NOPOL using case studies of real bug fixes.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Parametric geometry of numbers in function fields, Abstract: Parametric geometry of numbers is a new theory, recently created by Schmidt and Summerer, which unifies and simplifies many aspects of classical Diophantine approximations, providing a handle on problems which previously seemed out of reach. Our goal is to transpose this theory to fields of rational functions in one variable and to analyze in that context the problem of simultaneous approximation to exponential functions.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: SEIRS epidemics in growing populations, Abstract: An SEIRS epidemic with disease fatalities is introduced in a growing population (modelled as a super-critical linear birth and death process). The study of the initial phase of the epidemic is stochastic, while the analysis of the major outbreaks is deterministic. Depending on the values of the parameters, the following scenarios are possible. i) The disease dies out quickly, only infecting few; ii) the epidemic takes off, the \textit{number} of infected individuals grows exponentially, but the \textit{fraction} of infected individuals remains negligible; iii) the epidemic takes off, the \textit{number} of infected individuals initially grows quicker than the population, the disease fatalities diminish the growth rate of the population, but it remains super-critical, and the \textit{fraction} of infected individuals goes to an endemic equilibrium; iv) the epidemic takes off, the \textit{number} of infected individuals initially grows quicker than the population, and the disease fatalities turn the exponential growth of the population into an exponential decay.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: A Multi-task Selected Learning Approach for Solving New Type 3D Bin Packing Problem, Abstract: This paper studies a new type of 3D bin packing problem (BPP), in which a number of cuboid-shaped items must be put into a bin one by one orthogonally. The objective is to find a way to place these items that minimizes the surface area of the bin. This problem is motivated by the fact that there is no fixed-sized bin in many real business scenarios and the cost of a bin is proportional to its surface area. Based on previous research on 3D BPP, the surface area is determined by the sequence, spatial locations and orientations of items. It is a new NP-hard combinatorial optimization problem on unfixed-sized bin packing, for which we propose a multi-task framework based on Selected Learning, generating the sequence and orientations of items packed into the bin simultaneously. During training, Selected Learning chooses one of the loss functions derived from Deep Reinforcement Learning and Supervised Learning according to the training procedure. Numerical results show that the proposed method significantly outperforms Lego baselines by a substantial gain of 7.52%. Moreover, we produce a large-scale 3D bin packing order dataset for studying bin packing problems and will release it to the research community.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Use of Docker for deployment and testing of astronomy software, Abstract: We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerisation technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects -- a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context -- and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Physics" ]
Title: A Feature Complete SPIKE Banded Algorithm and Solver, Abstract: New features and enhancements for the SPIKE banded solver are presented. Among all the SPIKE algorithm versions, we focus our attention on the recursive SPIKE technique, which provides the best trade-off between generality and parallel efficiency but was known for its lack of flexibility: its application was essentially limited to a power-of-two number of cores/processors. This limitation is successfully addressed in this paper. In addition, we present a new transpose solve option, a standard feature of most numerical solver libraries that has never been addressed by the SPIKE algorithm so far. A pivoting recursive SPIKE strategy is finally presented as an alternative to the non-pivoting scheme for systems with large condition numbers. All these enhancements combine to create a feature-complete SPIKE algorithm and a new black-box SPIKE-OpenMP package that significantly outperforms other state-of-the-art banded solvers in performance and scalability.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Dark Matter in the Local Group of Galaxies, Abstract: We describe the neutrino flavor (e = electron, u = muon, t = tau) masses as m_i = m + Delta m_i (i = e, u, t), with |Delta m_i|/m < 1 and probably |Delta m_i|/m << 1. The quantity m is the degenerate neutrino mass. Because neutrino flavor is not a quantum number, this degenerate mass appears in the neutrino equation of state. We apply a Monte Carlo computational physics technique to the Local Group (LG) of galaxies to determine an approximate location for a Dark Matter embedding condensed neutrino object (CNO). The calculation is based on the rotational properties of the only spiral galaxies within the LG: M31, M33 and the Milky Way. CNOs could be the Dark Matter everyone is looking for, and we estimate the CNO embedding the LG to have a mass of 5.17x10^15 solar masses and a radius of 1.316 Mpc, with an estimated value of m ~ 0.8 eV/c^2. The upcoming KATRIN experiment will either provide the definitive result or eliminate condensed neutrinos as a Dark Matter candidate.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Astrophysics" ]
Title: Finite Time Adaptive Stabilization of LQ Systems, Abstract: Stabilization of linear systems with unknown dynamics is a canonical problem in adaptive control. Since lack of knowledge of the system parameters can destabilize the system, an adaptive stabilization procedure is needed prior to regulation, and this stabilization needs to be completed in finite time. In order to achieve this goal, asymptotic approaches are not very helpful. There are only a few existing non-asymptotic results, and a full treatment of the problem is not currently available. In this work, leveraging the novel method of random linear feedbacks, we establish high-probability guarantees for finite time stabilization. Our results hold for remarkably general settings because we carefully choose a minimal set of assumptions. These include stabilizability of the underlying system and restricting the degree of heaviness of the noise distribution. To derive our results, we also introduce a number of new concepts and technical tools to address regularity and instability of the closed-loop matrix.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: On the composition of Berezin-Toeplitz operators on symplectic manifolds, Abstract: We compute the second coefficient of the composition of two Berezin-Toeplitz operators associated with the $\text{spin}^c$ Dirac operator on a symplectic manifold, making use of the full off-diagonal expansion of the Bergman kernel.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off, Abstract: Kernel methods are powerful learning methodologies that provide a simple way to construct nonlinear algorithms from linear ones. Despite their popularity, they suffer from poor scalability in big data scenarios. Various approximation methods, including random feature approximation have been proposed to alleviate the problem. However, the statistical consistency of most of these approximate kernel methods is not well understood except for kernel ridge regression wherein it has been shown that the random feature approximation is not only computationally efficient but also statistically consistent with a minimax optimal rate of convergence. In this paper, we investigate the efficacy of random feature approximation in the context of kernel principal component analysis (KPCA) by studying the trade-off between computational and statistical behaviors of approximate KPCA. We show that the approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces. Depending on the eigenvalue decay behavior of the covariance operator, we show that only $n^{2/3}$ features (polynomial decay) or $\sqrt{n}$ features (exponential decay) are needed to match the statistical performance of KPCA. We also investigate their statistical behaviors in terms of the convergence of corresponding eigenspaces wherein we show that only $\sqrt{n}$ features are required to match the performance of KPCA and if fewer than $\sqrt{n}$ features are used, then approximate KPCA has a worse statistical behavior than that of KPCA.
[ 0, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A Heuristic Search Algorithm Using the Stability of Learning Algorithms in Certain Scenarios as the Fitness Function: An Artificial General Intelligence Engineering Approach, Abstract: This paper presents a non-manual design engineering method, based on a heuristic search algorithm, that searches for candidate agents in a solution space formed by artificial intelligence agents modeled on bionics. Compared with the artificial design method represented by meta-learning and the bionics method represented by neural architecture chips, this method is more feasible for realizing artificial general intelligence, and it interacts much better with cognitive neuroscience. At the same time, the engineering method rests on the theoretical hypothesis that the final learning algorithm is stable in certain scenarios and has generalization ability across various scenarios. The paper discusses this theory preliminarily and proposes a possible correlation between the theory and the fixed-point theorem in mathematics. Limited by the author's knowledge, this correlation is proposed only as a conjecture.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Topological semimetals with double-helix nodal link, Abstract: Topological nodal line semimetals are characterized by the crossing of the conduction and valence bands along one or more closed loops in the Brillouin zone. Usually, these loops are either isolated or touch each other at some highly symmetric points. Here, we introduce a new kind of nodal line semimetal that contains a pair of linked nodal loops. A concrete two-band model was constructed, which supports a pair of nodal lines with a double-helix structure; these can be further twisted into a Hopf link because of the periodicity of the Brillouin zone. The nodal lines are stabilized by the combined spatial inversion $\mathcal{P}$ and time reversal $\mathcal{T}$ symmetry; the individual $\mathcal{P}$ and $\mathcal{T}$ symmetries must be broken. The bands exhibit nontrivial topology in that each nodal loop carries a $\pi$ Berry flux. Surface flat bands emerge at the open boundary and are exactly encircled by the projection of the nodal lines on the surface Brillouin zone. The experimental implementation of our model using cold atoms in optical lattices is discussed.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Artificial Intelligence Assisted Power Grid Hardening in Response to Extreme Weather Events, Abstract: In this paper, an artificial intelligence based grid hardening model is proposed with the objective of improving power grid resilience in response to extreme weather events. First, a machine learning model is proposed to predict the component states (either operational or outage) in response to the extreme event. Then, these predictions are fed into a hardening model, which determines strategic locations for placement of distributed generation (DG) units. In contrast to existing literature on hardening and resilience enhancement, this paper co-optimizes grid economic and resilience objectives by considering the intricate dependencies of the two. The numerical simulations on the standard IEEE 118-bus test system illustrate the merits and applicability of the proposed hardening model. The results indicate that the proposed hardening model, through decentralized and distributed local energy resources, can produce a more robust solution that protects the system significantly against multiple component outages due to an extreme event.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Physics" ]