Dataset schema: title (string, length 7 to 239); abstract (string, length 7 to 2.76k); and six binary topic labels (int64, each 0 or 1): cs, phy, math, stat, quantitative biology, quantitative finance. Each record below consists of a title line, an abstract line, and a Labels line.
High order surface radiation conditions for time-harmonic waves in exterior domains
We formulate a new family of high order on-surface radiation conditions to approximate the outgoing solution to the Helmholtz equation in exterior domains. Motivated by the pseudo-differential expansion of the Dirichlet-to-Neumann operator developed by Antoine et al. (J. Math. Anal. Appl. 229:184-211, 1999), we design a systematic procedure to apply pseudo-differential symbols of arbitrarily high order. Numerical results are presented to illustrate the performance of the proposed method for solving both the Dirichlet and the Neumann boundary value problems. Possible improvements and extensions are also discussed.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Probabilistic PARAFAC2
The PARAFAC2 is a multimodal factor analysis model suitable for analyzing multi-way data when one of the modes has incomparable observation units, for example because of differences in signal sampling or batch sizes. A fully probabilistic treatment of the PARAFAC2 is desirable in order to improve robustness to noise and provide a well-founded principle for determining the number of factors, but is challenging because the factor loadings are constrained to be orthogonal. We develop two probabilistic formulations of the PARAFAC2 along with variational procedures for inference: in one approach, the mean values of the factor loadings are orthogonal, leading to closed-form variational updates, and in the other, the factor loadings themselves are orthogonal, using a matrix von Mises-Fisher distribution. We contrast our probabilistic formulations to the conventional direct fitting algorithm based on maximum likelihood. On simulated data and real fluorescence spectroscopy and gas chromatography-mass spectrometry data, we compare our approach to conventional PARAFAC2 model estimation and find that the probabilistic formulation is more robust to noise and model order misspecification. The probabilistic PARAFAC2 thus forms a promising framework for modeling multi-way data while accounting for uncertainty.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantizing Euclidean motions via double-coset decomposition
Concepts from mathematical crystallography and group theory are used here to quantize the group of rigid-body motions, resulting in a "motion alphabet" with which to express robot motion primitives. From these primitives it is possible to develop a dictionary of physical actions. Equipped with an alphabet of the sort developed here, intelligent actions of robots in the world can be approximated with finite sequences of characters, thereby forming the foundation of a language in which to articulate robot motion. In particular, we use the discrete handedness-preserving symmetries of macromolecular crystals (known in mathematical crystallography as Sohncke space groups) to form a coarse discretization of the space $\rm{SE}(3)$ of rigid-body motions. This discretization is made finer by subdividing using the concept of double-coset decomposition. More specifically, a very efficient, equivolumetric quantization of spatial motion can be defined using the group-theoretic concept of a double-coset decomposition of the form $\Gamma \backslash \rm{SE}(3) / \Delta$, where $\Gamma$ is a Sohncke space group and $\Delta$ is a finite group of rotational symmetries such as those of the icosahedron. The resulting discrete alphabet is based on a very uniform sampling of $\rm{SE}(3)$ and is a tool for describing the continuous trajectories of robots and humans. The general "signals to symbols" problem in artificial intelligence is cast in this framework for robots moving continuously in the world, and we present a coarse-to-fine search scheme here to efficiently solve this decoding problem in practice.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Evolutionary Stability of Reputation Management System in Peer to Peer Networks
Each participant in a peer-to-peer network prefers to free-ride on the contributions of other participants, and reputation-based resource sharing is a way to control this free riding. Instead of classical game theory, we use evolutionary game theory to analyse reputation-based resource sharing in a peer-to-peer system. The classical game-theoretic approach requires global information about the population, whereas evolutionary games assume only light cognitive capabilities of users: each user imitates the behavior of another user with a better payoff. We find that, without any extra benefit, the reputation strategy is not stable in the system, and we determine the fraction of users who must calculate reputation in order to control free riding in equilibrium. We first build a game-theoretic model of the reputation system and then calculate the threshold fraction of users at which the reputation strategy is sustainable. We find that under simplistic conditions reputation calculation is not an evolutionarily stable strategy, but if we impose an initial payment on all users and then distribute that payment among the users who calculate reputation, the reputation strategy becomes evolutionarily stable.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
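As a minimal sketch of the imitation dynamics analysed in the abstract above, the following replicator-dynamics simulation tracks the share x of users who calculate reputation; the strategy share grows when its payoff beats the population average. The payoff structure and all parameter values (cost, benefit, redistributed initial payment) are illustrative assumptions, not taken from the paper.

```python
def replicator(x0, payoff, T=2000, dt=0.05):
    """Replicator dynamics for the share x of users that calculate reputation."""
    x = x0
    for _ in range(T):
        f_rep, f_free = payoff(x)               # strategy payoffs at state x
        mean = x * f_rep + (1 - x) * f_free     # population-average payoff
        x = min(max(x + dt * x * (f_rep - mean), 0.0), 1.0)
    return x

# Illustrative payoffs (values are NOT from the paper): sharing yields a
# benefit proportional to the enforcing fraction x; reputation players pay
# a cost but split a redistributed initial payment (subsidy).
cost, benefit, subsidy = 0.4, 1.0, 0.2
def payoff(x):
    shared = benefit * x
    return shared - cost + subsidy / max(x, 1e-9), shared

print(replicator(0.3, payoff))   # converges near subsidy / cost = 0.5
```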
A Study of FOSS'2013 Survey Data Using Clustering Techniques
FOSS is an acronym for Free and Open Source Software. The FOSS 2013 survey primarily targets FOSS contributors, and the relevant anonymized dataset is publicly available under a CC BY-SA license. In this study, the dataset is analyzed from a critical perspective using statistical and clustering techniques (especially multiple correspondence analysis), with a strong focus on women contributors, towards discovering hidden trends and facts. Important inferences are drawn about development practices and other facets of the free software and OSS worlds.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Intermetallic Nanocrystals: Syntheses and Catalytic Applications
At the forefront of nanochemistry, there exists a research endeavor centered around intermetallic nanocrystals, which are unique in terms of long-range atomic ordering, well-defined stoichiometry, and controlled crystal structure. In contrast to alloy nanocrystals with no atomic ordering, it has been challenging to synthesize intermetallic nanocrystals with a tight control over their size and shape. This review article highlights recent progress in the synthesis of intermetallic nanocrystals with controllable sizes and well-defined shapes. We begin with a simple analysis and some insights key to the selection of experimental conditions for generating intermetallic nanocrystals. We then present examples to highlight the viable use of intermetallic nanocrystals as electrocatalysts or catalysts for various reactions, with a focus on the enhanced performance relative to their alloy counterparts that lack atomic ordering. We conclude with perspectives on future developments in the context of synthetic control, structure-property relationship, and application.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Linear-scaling electronic structure theory: Electronic temperature in the Kernel Polynomial Method
Linear-scaling electronic structure methods based on the calculation of moments of the underlying electronic Hamiltonian offer a computationally efficient and numerically robust scheme to drive large-scale atomistic simulations, in which the quantum-mechanical nature of the electrons is explicitly taken into account. We compare the kernel polynomial method to the Fermi operator expansion method and establish a formal connection between the two approaches. We show that the convolution of the kernel polynomial method may be understood as an effective electron temperature. A number of possible kernels are examined formally and then applied to a representative tight-binding model.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
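As a concrete illustration of the moment-based machinery discussed above, here is a minimal kernel-polynomial-method sketch that estimates a density of states from Chebyshev moments with Jackson damping; the damping is what plays the role of an effective electronic temperature. The dense, exact-trace implementation is a simplification for small matrices; a linear-scaling code would use sparse matrix-vector products and stochastic trace estimation instead.

```python
import numpy as np

def jackson_kernel(N):
    """Jackson damping factors g_n; their smoothing of the expansion is what
    acts like an effective electronic temperature."""
    n = np.arange(N)
    q = np.pi / (N + 1)
    return ((N - n + 1) * np.cos(q * n) + np.sin(q * n) / np.tan(q)) / (N + 1)

def kpm_dos(H, energies, N=64):
    """Kernel polynomial estimate of the density of states of H.

    Assumes H is dense, Hermitian and rescaled so its spectrum lies in
    (-1, 1); exact traces are used here instead of stochastic estimation.
    """
    d = H.shape[0]
    T_prev, T_cur = np.eye(d), H.copy()
    mu = np.zeros(N)
    mu[0], mu[1] = d, np.trace(H)
    for n in range(2, N):                    # Chebyshev recursion for moments
        T_prev, T_cur = T_cur, 2 * H @ T_cur - T_prev
        mu[n] = np.trace(T_cur)
    g = jackson_kernel(N)
    w = np.full(N, 2.0); w[0] = 1.0
    cheb = np.cos(np.arange(N)[:, None] * np.arccos(energies)[None, :])
    return (w * g * mu / d) @ cheb / (np.pi * np.sqrt(1 - energies ** 2))

H = np.diag(np.linspace(-0.8, 0.8, 200))     # toy rescaled tight-binding spectrum
print(kpm_dos(H, np.linspace(-0.95, 0.95, 5)))
```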
Characterizing the 2016 Russian IRA Influence Campaign
Until recently, social media were seen to promote democratic discourse on social and political issues. However, this powerful communication ecosystem has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of, among other things, using trolls (malicious accounts created for the purpose of manipulation) and bots (automated accounts) to spread propaganda and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset of 13 million election-related posts shared on Twitter in 2016 by over a million distinct users. This dataset includes accounts associated with the identified Russian trolls as well as users sharing posts in the same time period on a variety of topics around the 2016 elections. We use label propagation to infer the users' ideology based on the news sources they share. We are able to classify a large number of users as liberal or conservative with precision and recall above 84%. Conservative users who retweet Russian trolls produced significantly more tweets than liberal ones, about 8 times as many. Additionally, the trolls' position in the retweet network is stable over time, unlike the users who retweet them, who only form the core of the election-related retweet network by the end of 2016. Using state-of-the-art bot detection techniques, we estimate that about 5% and 11% of liberal and conservative users are bots, respectively.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
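A minimal sketch of the label-propagation step described in the abstract above: known partisan seeds (users matched to liberal or conservative news sources) spread scores over a retweet graph until unlabeled users acquire an ideology estimate. The graph construction, the clamping schedule, and the toy data are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def propagate(A, seeds, n_iter=50):
    """Score propagation over a symmetrized retweet adjacency matrix A.

    seeds: +1 for users matched to conservative outlets, -1 for liberal
    ones, 0 for unlabeled users; seeds are clamped after every sweep.
    """
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                          # avoid division by zero
    scores = seeds.astype(float).copy()
    known = seeds != 0
    for _ in range(n_iter):
        scores = A @ scores / deg                # average neighbours' scores
        scores[known] = seeds[known]             # clamp the labeled users
    return np.sign(scores)                       # -1 liberal, +1 conservative

# toy graph: users 0-1 retweet each other, users 2-3 retweet each other
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(propagate(A, np.array([-1, 0, 1, 0])))     # -> [-1, -1, 1, 1]
```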
Nonlinear Mapping Convergence and Application to Social Networks
This paper discusses discrete-time maps of the form $x(k + 1) = F(x(k))$, focusing on equilibrium points of such maps. Under some circumstances, Lefschetz fixed-point theory can be used to establish the existence of a single locally attractive equilibrium (which is sometimes globally attractive) when a general property of local attractivity is known for any equilibrium. Problems in social networks often involve such discrete-time systems, and we make an application to one such problem.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Correlating Cellular Features with Gene Expression using CCA
To understand the biology of cancer, joint analysis of multiple data modalities, including imaging and genomics, is crucial. The involved nature of gene-microenvironment interactions necessitates the use of algorithms which treat both data types equally. We propose the use of canonical correlation analysis (CCA) and a sparse variant as a preliminary discovery tool for identifying connections across modalities, specifically between gene expression and features describing cell and nucleus shape, texture, and stain intensity in histopathological images. Applied to 615 breast cancer samples from The Cancer Genome Atlas, CCA revealed significant correlation of several image features with expression of PAM50 genes, known to be linked to outcome, while Sparse CCA revealed associations with enrichment of pathways implicated in cancer without leveraging prior biological understanding. These findings affirm the utility of CCA for joint phenotype-genotype analysis of cancer.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
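A minimal sketch of the kind of CCA analysis described above, using scikit-learn on synthetic stand-ins for the two modalities (the real study pairs TCGA gene expression with histopathological image features; the data here are random with a planted shared signal).

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200                                      # samples (e.g., tumor images)
gene_expr = rng.normal(size=(n, 50))         # stand-in for expression profiles
shared = gene_expr[:, :3] @ rng.normal(size=(3, 10))
img_feats = shared + 0.5 * rng.normal(size=(n, 10))   # correlated image features

cca = CCA(n_components=2)
U, V = cca.fit_transform(gene_expr, img_feats)
for k in range(2):                           # canonical correlations
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"component {k}: r = {r:.2f}")
```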
Links with nontrivial Alexander polynomial which are topologically concordant to the Hopf link
We give infinitely many $2$-component links with unknotted components which are topologically concordant to the Hopf link, but not smoothly concordant to any $2$-component link with trivial Alexander polynomial. Our examples are pairwise non-concordant.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Shape optimization in laminar flow with a label-guided variational autoencoder
Computational design optimization in fluid dynamics usually requires solving non-linear partial differential equations numerically. In this work, we explore a Bayesian optimization approach to minimize an object's drag coefficient in laminar flow based on predicting drag directly from the object shape. Jointly training an architecture that combines a variational autoencoder, which maps shapes to latent representations, with Gaussian process regression allows us to generate improved shapes in the two-dimensional case we consider.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An Empirical Analysis of Vulnerabilities in Python Packages for Web Applications
This paper examines software vulnerabilities in common Python packages used particularly for web development. The empirical dataset is based on the PyPI package repository and the so-called Safety DB used to track vulnerabilities in selected packages within the repository. The methodological approach builds on a release-based time series analysis of the conditional probabilities for the releases of the packages to be vulnerable. According to the results, many of the Python vulnerabilities observed seem to be only modestly severe; input validation and cross-site scripting have been the most typical vulnerabilities. In terms of the time series analysis based on the release histories, only the recent past is observed to be relevant for statistical predictions; the classical Markov property holds.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
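To make the closing claim concrete, the sketch below estimates a first-order Markov transition matrix from a package's 0/1 release-vulnerability history; under the Markov property the paper reports, only the current state matters for predicting the next release. The toy history is invented for illustration.

```python
import numpy as np

def markov_transitions(history):
    """Estimate P(next release state | current state) from a 0/1 history
    (1 = release known vulnerable). Under a first-order Markov property,
    only the current state is relevant for statistical prediction."""
    seq = np.asarray(history)
    counts = np.zeros((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# invented release history for a single package, for illustration only
history = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]
print(markov_transitions(history))
```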
A combination chaotic system and application in color image encryption
In this paper, using the Logistic, Sine and Tent systems, we define a combination chaotic system. Some properties of the chaotic system are studied using figures and numerical results. A color image encryption algorithm is introduced based on the new chaotic system; the algorithm can also be used for grayscale or binary images. The experimental results show that the encryption algorithm is secure and practical.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
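A hedged sketch of the general scheme: iterate a combination of the three maps to produce a keystream that is XORed with pixel bytes. The particular mixture below (the sum of the Logistic, Sine and Tent maps taken mod 1) and the byte quantization are illustrative guesses; the paper defines its own combination and encryption steps.

```python
import numpy as np

def combined_map(x, r=3.99):
    """One possible mixture of the Logistic, Sine and Tent maps (mod 1);
    an illustrative guess at the general construction, not the paper's map."""
    logistic = r * x * (1.0 - x)
    sine = np.sin(np.pi * x)
    tent = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
    return (logistic + sine + tent) % 1.0

def keystream(seed, n):
    """Iterate the map and quantize each state to a byte for XOR masking."""
    x, out = seed, []
    for _ in range(n):
        x = combined_map(x)
        out.append(int(x * 256) % 256)
    return np.array(out, dtype=np.uint8)

pixels = np.array([12, 200, 37, 255], dtype=np.uint8)     # toy "image"
ks = keystream(0.3141, pixels.size)
cipher = pixels ^ ks                                      # encrypt
print(cipher, np.array_equal(cipher ^ ks, pixels))        # decrypt and verify
```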
A Hybrid MILP and IPM for Dynamic Economic Dispatch with Valve Point Effect
Dynamic economic dispatch with valve-point effect (DED-VPE) is a non-convex and non-differentiable optimization problem which is difficult to solve efficiently. In this paper, a hybrid mixed integer linear programming (MILP) and interior point method (IPM), denoted MILP-IPM, is proposed to solve such a DED-VPE problem, where the complicated transmission loss is also included. Due to the non-differentiable characteristic of DED-VPE, classical derivative-based optimization methods can no longer be used. With the help of model reformulation, a differentiable non-linear programming (NLP) formulation which can be directly solved by IPM is derived. However, if DED-VPE is solved by IPM in a single step, the optimization can easily be trapped in a poor local optimum due to the problem's non-convexity and multiple local minima. To obtain a better solution, an MILP method is used to solve the DED-VPE without transmission loss, yielding a good initial point for IPM to improve the quality of the solution. Simulation results demonstrate the validity and effectiveness of the proposed MILP-IPM in solving DED-VPE.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Using Big Data Technologies for HEP Analysis
The HEP community is approaching an era where the excellent performance of the particle accelerators in delivering collisions at high rate will force the experiments to record a large amount of information. The growing size of the datasets could potentially become a limiting factor in the capability to produce scientific results timely and efficiently. Recently, new technologies and new approaches have been developed in industry to answer the need to retrieve information as quickly as possible when analyzing PB and EB datasets. Providing scientists with these modern computing tools will lead to rethinking the principles of data analysis in HEP, making the overall scientific process faster and smoother. In this paper, we present the latest developments and the most recent results on the usage of Apache Spark for HEP analysis. The study aims at evaluating the efficiency of the application of the new tools both quantitatively, by measuring the performance, and qualitatively, focusing on the user experience. The first goal is achieved by developing a data reduction facility: working together with CERN Openlab and Intel, CMS replicates a real physics search using Spark-based technologies, with the ambition of reducing 1 PB of public data collected by the CMS experiment to 1 TB of data in a format suitable for physics analysis, in 5 hours. The second goal is achieved by implementing multiple physics use-cases in Apache Spark using as input preprocessed datasets derived from official CMS data and simulation. By performing different end-analyses up to the publication plots on different hardware, feasibility, usability and portability are compared to those of a traditional ROOT-based workflow.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network
Brains need to predict how the body reacts to motor commands. It is an open question how networks of spiking neurons can learn to reproduce the non-linear body dynamics caused by motor commands, using local, online and stable learning rules. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics, while an online and local rule changes the weights. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Using the Lyapunov method, and under reasonable assumptions and approximations, we show that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits
In this paper, we propose and study opportunistic bandits - a new variant of bandits where the regret of pulling a suboptimal arm varies under different environmental conditions, such as network load or produce price. When the load/price is low, so is the cost/regret of pulling a suboptimal arm (e.g., trying a suboptimal network configuration). Therefore, intuitively, we could explore more when the load/price is low and exploit more when the load/price is high. Inspired by this intuition, we propose an Adaptive Upper-Confidence-Bound (AdaUCB) algorithm to adaptively balance the exploration-exploitation tradeoff for opportunistic bandits. We prove that AdaUCB achieves $O(\log T)$ regret with a smaller coefficient than the traditional UCB algorithm. Furthermore, AdaUCB achieves $O(1)$ regret with respect to $T$ if the exploration cost is zero when the load level is below a certain threshold. Last, based on both synthetic data and real-world traces, experimental results show that AdaUCB significantly outperforms other bandit algorithms, such as UCB and TS (Thompson Sampling), under large load/price fluctuations.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
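A hedged sketch of the adaptive idea described above: a UCB index whose exploration width is scaled by how favorable the current load is, so the algorithm explores when suboptimal pulls are cheap. The exact AdaUCB index in the paper differs; this normalized-load scaling is only meant to convey the mechanism.

```python
import numpy as np

def ada_ucb(pull, loads, n_arms, T):
    """UCB index whose exploration width shrinks when the load is high.

    pull(a) returns a random reward for arm a; loads[t] in [0, 1] is the
    load/price observed at round t (low load -> suboptimal pulls are cheap).
    """
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(T):
        if t < n_arms:
            a = t                                    # initialize every arm once
        else:
            explore = 1.0 - loads[t]                 # explore more when load is low
            width = np.sqrt(2 * explore * np.log(t + 1) / counts)
            a = int(np.argmax(means + width))
        r = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
    return means

rng = np.random.default_rng(1)
true_rates = np.array([0.3, 0.5, 0.7])
est = ada_ucb(lambda a: float(rng.random() < true_rates[a]),
              rng.random(2000), 3, 2000)
print(est.round(2))
```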
A Quantile Estimate Based on Local Curve Fitting
Quantile estimation is a problem presented in fields such as quality control, hydrology, and economics. There are different techniques to estimate such quantiles; nevertheless, these techniques use an overall fit of the sample even though the quantiles of interest are usually located in the tails of the distribution. Regression Approach for Quantile Estimation (RAQE) is a method based on regression techniques and the properties of the empirical distribution that addresses this problem. The method was first presented for the problem of capability analysis. In this paper, a generalization of the method is presented, extended to the multiple-sample scenario, and data from real examples are used to illustrate the proposed approaches. In addition, a theoretical framework is presented to support the extension to multiple homogeneous samples and the use of the uncertainty of the estimated probabilities as a weighting factor in the analysis.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Diagnosing added value of convection-permitting regional models using precipitation event identification and tracking
Dynamical downscaling with high-resolution regional climate models may offer the possibility of realistically reproducing precipitation and weather events in climate simulations. As resolutions fall to order kilometers, the use of explicit rather than parametrized convection may offer even greater fidelity. However, these increased model resolutions both allow and require increasingly complex diagnostics for evaluating model fidelity. In this study we use a suite of dynamically downscaled simulations of the summertime U.S. (WRF driven by NCEP) with systematic variations in parameters and treatment of convection as a test case for evaluation of model precipitation. In particular, we use a novel rainstorm identification and tracking algorithm that allocates essentially all rainfall to individual precipitation events (Chang et al. 2016). This approach allows multiple insights, including that, at least in these runs, model wet bias is driven by excessive areal extent of precipitating events. Biases are time-dependent, producing excessive diurnal cycle amplitude. We show that this effect is produced not by new production of events but by excessive enlargement of long-lived precipitation events during daytime, and that in the domain average, precipitation biases appear best represented as additive offsets. Of all model configurations evaluated, convection-permitting simulations most consistently reduced biases in precipitation event characteristics.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
An integral formula for Riemannian $G$-structures with applications to almost hermitian and almost contact structures
For a Riemannian $G$-structure, we compute the divergence of the vector field induced by the intrinsic torsion. Applying Stokes' theorem, we obtain an integral formula on a closed oriented Riemannian manifold, which we interpret in certain cases. We focus on almost hermitian and almost contact metric structures.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Putin's peaks: Russian election data revisited
We study the anomalous prevalence of integer percentages in the last parliamentary (2016) and presidential (2018) Russian elections. We show how this anomaly in Russian federal elections has evolved since 2000.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On rate of convergence in non-central limit theorems
The main result of this paper is the rate of convergence to Hermite-type distributions in non-central limit theorems. To the best of our knowledge, this is the first result in the literature on rates of convergence of functionals of random fields to Hermite-type distributions with ranks greater than 2. The results were obtained under rather general assumptions on the spectral densities of random fields. These assumptions are even weaker than in the known convergence results for the case of Rosenblatt distributions. Additionally, Lévy concentration functions for Hermite-type distributions were investigated.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optimal Output Regulation for Square, Over-Actuated and Under-Actuated Linear Systems
This paper considers two different problems in trajectory tracking control for linear systems. First, if the control is not unique, which one is most input-energy efficient? Second, if exact tracking is infeasible, which control performs most accurately? These are typical challenges for over-actuated systems and for under-actuated systems, respectively. We formulate both goals as optimal output regulation problems. Then we contribute two new sets of regulator equations to output regulation theory that provide the desired solutions. A thorough study indicates solvability and uniqueness under weak assumptions. For example, we can always determine the solution of the classical regulator equations that is most input-energy efficient. This is of great value if there are infinitely many solutions. We derive our results by a linear quadratic tracking approach and establish a useful link to output regulation theory.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Pattern recognition techniques for Boson Sampling validation
The difficulty of validating large-scale quantum devices, such as Boson Samplers, poses a major challenge for any research program that aims to show quantum advantages over classical hardware. To address this problem, we propose a novel data-driven approach wherein models are trained to identify common pathologies using unsupervised machine learning methods. We illustrate this idea by training a classifier that exploits K-means clustering to distinguish Boson Samplers that use indistinguishable photons from those that do not. We train the model on numerical simulations of small-scale Boson Samplers and then validate the pattern recognition technique on larger numerical simulations as well as on photonic chips in both traditional Boson Sampling and scattershot experiments. The effectiveness of such a method relies on particle-type-dependent internal correlations present in the output distributions. This approach performs substantially better on the test data than previous methods and underscores the ability to further generalize its operation beyond the scope of the examples that it was trained on.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Conformal k-NN Anomaly Detector for Univariate Data Streams
Anomalies in time-series data give essential and often actionable information in many applications. In this paper we consider a model-free anomaly detection method for univariate time-series which adapts to non-stationarity in the data stream and provides probabilistic abnormality scores based on the conformal prediction paradigm. Despite its simplicity, the method performs on par with complex prediction-based models on the Numenta Anomaly Detection benchmark and the Yahoo! S5 dataset.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
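A minimal sketch of a conformal k-NN detector consistent with the description above: the nonconformity measure is the average distance to the k nearest neighbours within a sliding reference window, and the conformal p-value turns it into a probabilistic abnormality score. The window handling and the exact nonconformity measure are illustrative assumptions.

```python
import numpy as np

def abnormality(window, x_new, k=3):
    """Conformal k-NN abnormality score of x_new against a reference window.

    Nonconformity = mean distance to the k nearest neighbours; the conformal
    p-value is the fraction of reference points at least as nonconforming.
    Sliding the window over the stream gives adaptation to non-stationarity.
    """
    def alpha(x, ref):
        return np.sort(np.abs(ref - x))[:k].mean()

    a_new = alpha(x_new, window)
    a_ref = np.array([alpha(v, np.delete(window, i))
                      for i, v in enumerate(window)])
    p = (np.sum(a_ref >= a_new) + 1) / (len(window) + 1)
    return 1.0 - p                     # close to 1 for anomalies

rng = np.random.default_rng(0)
w = rng.normal(size=100)               # reference window from the stream
print(abnormality(w, 0.1))             # typical point -> low score
print(abnormality(w, 6.0))             # outlier -> score near 1
```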
Applications of Trajectory Data from the Perspective of a Road Transportation Agency: Literature Review and Maryland Case Study
Transportation agencies have an opportunity to leverage increasingly available trajectory datasets to improve their analyses and decision-making processes. However, this data is typically purchased from vendors, which means agencies must understand its potential benefits beforehand in order to properly assess its value relative to the cost of acquisition. While the literature concerned with trajectory data is rich, it is naturally fragmented and focused on technical contributions in niche areas, which makes it difficult for government agencies to assess its value across different transportation domains. To overcome this issue, the current paper explores trajectory data from the perspective of a road transportation agency interested in acquiring trajectories to enhance its analyses. The paper provides a literature review illustrating applications of trajectory data in six areas of road transportation systems analysis: demand estimation, modeling human behavior, designing public transit, traffic performance measurement and prediction, environment, and safety. In addition, it visually explores 20 million GPS traces in Maryland, illustrating existing and suggesting new applications of trajectory data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Towards the dual motivic Steenrod algebra in positive characteristic
The dual motivic Steenrod algebra with mod $\ell$ coefficients was computed by Voevodsky over a base field of characteristic zero, and by Hoyois, Kelly, and {\O}stv{\ae}r over a base field of characteristic $p \neq \ell$. In the case $p = \ell$, we show that the conjectured answer is a retract of the actual answer. We also describe the slices of the algebraic cobordism spectrum $MGL$: we show that the conjectured form of $s_n MGL$ is a retract of the actual answer.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Toeplitz Quantization and Convexity
Let $T^m_f$ be the Toeplitz quantization of a real $C^{\infty}$ function defined on the sphere $\mathbb{CP}(1)$. $T^m_f$ is therefore a Hermitian matrix with spectrum $\lambda^m= (\lambda_0^m,\ldots,\lambda_m^m)$. Schur's theorem says that the diagonal of a Hermitian matrix $A$ that has the same spectrum as $T^m_f$ lies inside a finite-dimensional convex set whose extreme points are $\{(\lambda_{\sigma(0)}^m,\ldots,\lambda_{\sigma(m)}^m)\}$, where $\sigma$ is any permutation of $(m+1)$ elements. In this paper, we prove that these convex sets "converge" to a huge convex set in $L^2([0,1])$ whose extreme points are $f^*\circ \phi$, where $f^*$ is the decreasing rearrangement of $f$ and $\phi$ ranges over the set of measure preserving transformations of the unit interval $[0,1]$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
The Paulsen problem is a basic open problem in operator theory: given vectors $u_1, \ldots, u_n \in \mathbb R^d$ that $\epsilon$-nearly satisfy Parseval's condition and the equal norm condition, are they close to a set of vectors $v_1, \ldots, v_n \in \mathbb R^d$ that exactly satisfy Parseval's condition and the equal norm condition? Given $u_1, \ldots, u_n$, the squared distance (to the set of exact solutions) is defined as $\inf_{v} \sum_{i=1}^n \| u_i - v_i \|_2^2$ where the infimum is over the set of exact solutions. Previous results show that the squared distance of any $\epsilon$-nearly solution is at most $O({\rm{poly}}(d,n,\epsilon))$ and that there are $\epsilon$-nearly solutions with squared distance at least $\Omega(d\epsilon)$. The fundamental open question is whether the squared distance can be independent of the number of vectors $n$. We answer this question affirmatively by proving that the squared distance of any $\epsilon$-nearly solution is $O(d^{13/2} \epsilon)$. Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any $\epsilon$-nearly solution is $O(d^2 n \epsilon)$. Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster and the squared distance of an $\epsilon$-nearly solution is $O(d^{5/2} \epsilon)$ when $n$ is large enough and $\epsilon$ is small enough. To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On architectural choices in deep learning: From network structure to gradient convergence and parameter estimation
We study mechanisms to characterize how the asymptotic convergence of backpropagation in deep architectures, in general, is related to the network structure, and how it may be influenced by other design choices including activation type, denoising and dropout rate. We seek to analyze whether network architecture and input data statistics may guide the choices of learning parameters and vice versa. Given the broad applicability of deep architectures, this issue is interesting both from a theoretical and a practical standpoint. Using properties of general nonconvex objectives (with first-order information), we first build the association between structural, distributional and learnability aspects of the network vis-à-vis their interaction with parameter convergence rates. We identify a nice relationship between feature denoising and dropout, and construct families of networks that achieve the same level of convergence. We then derive a workflow that provides systematic guidance regarding the choice of network sizes and learning parameters, often mediated by input statistics. Our technical results are corroborated by an extensive set of evaluations, presented in this paper as well as independent empirical observations reported by other groups. We also perform experiments showing the practical implications of our framework for choosing the best fully-connected design for a given problem.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Iron Snow in the Martian Core?
The decline of Mars' global magnetic field some 3.8-4.1 billion years ago is thought to reflect the demise of the dynamo that operated in its liquid core. The dynamo was probably powered by planetary cooling and so its termination is intimately tied to the thermochemical evolution and present-day physical state of the Martian core. Bottom-up growth of a solid inner core, the crystallization regime for Earth's core, has been found to produce a long-lived dynamo, leading to the suggestion that the Martian core remains entirely liquid to this day. Motivated by the experimentally determined increase in the Fe-S liquidus temperature with decreasing pressure at Martian core conditions, we investigate whether Mars' core could crystallize from the top down. We focus on the "iron snow" regime, where newly-formed solid consists of pure Fe and is therefore heavier than the liquid. We derive global energy and entropy equations that describe the long-timescale thermal and magnetic history of the core from a general theory for two-phase, two-component liquid mixtures, assuming that the snow zone is in phase equilibrium and that all solid falls out of the layer and remelts at each timestep. Formation of snow zones occurs for a wide range of interior and thermal properties and depends critically on the initial sulfur concentration, x0. Release of gravitational energy and latent heat during growth of the snow zone does not generate sufficient entropy to restart the dynamo unless the snow zone occupies at least 400 km of the core. Snow zones can be 1.5-2 Gyrs old, though thermal stratification of the uppermost core, not included in our model, likely delays onset. Models that match the available magnetic and geodetic constraints have x0~10% and snow zones that occupy approximately the top 100 km of the present-day Martian core.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Motion Segmentation via Global and Local Sparse Subspace Optimization
In this paper, we propose a new framework for segmenting feature-based moving objects under the affine subspace model. Since the feature trajectories in practice are high-dimensional and contain a lot of noise, we first apply sparse PCA to represent the original trajectories with a low-dimensional global subspace, which consists of the orthogonal sparse principal vectors. Subsequently, the local subspace separation is achieved by automatically searching for the sparse representation of the nearest neighbors of each projected data point. In order to refine the local subspace estimation and deal with the missing data problem, we propose an error estimate that encourages projected data spanning the same local subspace to be clustered together. In the end, the segmentation of different motions is achieved through spectral clustering on an affinity matrix, which is constructed from both the error estimate and the sparse neighbors optimization. We test our method extensively and compare it with state-of-the-art methods on the Hopkins 155 dataset and the Freiburg-Berkeley Motion Segmentation dataset. The results show that our method is comparable with the other motion segmentation methods, and in many cases exceeds them in terms of precision and computation time.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
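A compact sketch of the pipeline's two ends, global sparse projection followed by spectral clustering on an affinity matrix, using scikit-learn on synthetic trajectories drawn from two planted subspaces. The cosine-based affinity used here is a stand-in for the paper's sparse-neighbor and error-estimate construction.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
F = 10                                   # frames; trajectories live in R^{2F}
# toy trajectories: 30 tracked points per motion, two planted 4-dim subspaces
W = np.hstack([rng.normal(size=(2 * F, 4)) @ rng.normal(size=(4, 30)),
               rng.normal(size=(2 * F, 4)) @ rng.normal(size=(4, 30))]).T
W += 0.01 * rng.normal(size=W.shape)     # measurement noise

# global step: sparse PCA projects the noisy trajectories to a low-dim space
proj = SparsePCA(n_components=8, random_state=0).fit_transform(W)

# local step stand-in: points from the same subspace tend to have larger
# |cos| similarity; the paper instead builds the affinity from sparse
# neighbors and an error estimate
Pn = proj / np.linalg.norm(proj, axis=1, keepdims=True)
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(np.abs(Pn @ Pn.T))
print(labels)
```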
Topological strings linking with quasi-particle exchange in superconducting Dirac semimetals
We demonstrate a topological classification of vortices in three dimensional time-reversal invariant topological superconductors based on superconducting Dirac semimetals with an s-wave superconducting order parameter, by means of a pair of numbers $(N_\Phi,N)$ accounting for how many units $N_\Phi$ of magnetic flux $hc/4e$ and how many chiral Majorana modes $N$ the vortex carries. From these quantities, we introduce a topological invariant which further classifies the properties of such vortices under linking processes. While such processes are known to be related to instanton processes in a field theoretic description, we demonstrate here that they are, in fact, also equivalent to the fractional Josephson effect on junctions based at the edges of quantum spin Hall systems. This allows one to consider microscopically the effects of interactions in the linking problem. We therefore demonstrate that, associated to links between vortices, one has the exchange of quasi-particles, either Majorana zero-modes or $e/2$ quasi-particles, which allows for a topological classification of vortices in these systems, seen to be $\mathbb{Z}_8$ classified. While $N_\Phi$ and $N$ are shown to be both even or odd in the weakly-interacting limit, in the strongly interacting scenario one loosens this constraint. In this case, one may have further fractionalization possibilities for the vortices, whose excitations are described by $SO(3)_3$-like conformal field theories with quasi-particle exchanges of more exotic types.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Self-Imitation Learning
This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
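The SIL objective is simple enough to state in a few lines: transitions whose stored return R exceeds the current value estimate V(s) are imitated with weight (R - V)_+. Below is a numpy sketch of the two losses; in the paper they are added to the actor-critic objective and computed on samples drawn from a replay buffer.

```python
import numpy as np

def sil_losses(log_probs, values, returns):
    """Self-Imitation Learning losses on a batch of stored transitions.

    Only transitions whose past return R exceeds the current value estimate
    V(s) contribute: L_policy = -log pi(a|s) (R - V)_+ and
    L_value = 1/2 (R - V)_+^2, so the agent imitates its own good decisions.
    """
    advantage = np.maximum(returns - values, 0.0)    # (R - V)_+
    policy_loss = -(log_probs * advantage).mean()
    value_loss = 0.5 * (advantage ** 2).mean()
    return policy_loss, value_loss

log_probs = np.log(np.array([0.2, 0.7, 0.5]))        # log pi(a|s) of stored actions
values = np.array([1.0, 0.5, 2.0])                   # current V(s)
returns = np.array([2.0, 1.5, 1.0])                  # stored episode returns
print(sil_losses(log_probs, values, returns))        # third transition is ignored
```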
Controllability to Equilibria of the 1-D Fokker-Planck Equation with Zero-Flux Boundary Condition
We consider the problem of controlling the spatiotemporal probability distribution of a robotic swarm that evolves according to a reflected diffusion process, using the space- and time-dependent drift vector field parameter as the control variable. In contrast to previous work on control of the Fokker-Planck equation, a zero-flux boundary condition is imposed on the partial differential equation that governs the swarm probability distribution, and only bounded vector fields are considered to be admissible as control parameters. Under these constraints, we show that any initial probability distribution can be transported to a target probability distribution under certain assumptions on the regularity of the target distribution. In particular, we show that if the target distribution is (essentially) bounded, has bounded first-order and second-order partial derivatives, and is bounded from below by a strictly positive constant, then this distribution can be reached exactly using a drift vector field that is bounded in space and time. Our proof is constructive and based on classical linear semigroup theoretic concepts.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Buy your coffee with bitcoin: Real-world deployment of a bitcoin point of sale terminal
In this paper we discuss existing approaches for Bitcoin payments, as suitable for a small business for small-value transactions. We develop an evaluation framework utilizing security, usability and deployability criteria, and examine several existing systems and tools. Following a requirements engineering approach, we designed and implemented a new Point of Sale (PoS) system that satisfies an optimal set of criteria within our evaluation framework. Our open source system, Aunja PoS, has been deployed in a real world cafe since October 2014.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tackling non-linearities with the effective field theory of dark energy and modified gravity
We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late time cosmic acceleration phenomenon and it has been shown to be a powerful method to obtain predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the operators which need to be included in the effective field theory Lagrangian in order to go beyond the linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we map the beyond-Horndeski theory and low-energy Horava gravity into the effective field theory formalism. As a final step, we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear order perturbations, which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation of the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here makes it possible to construct, in a model independent way, all the relevant predictions on observables at mildly non-linear scales.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A recommender system to restore images with impulse noise
We build a collaborative filtering recommender system to restore images with impulse noise for which the noisy pixels have been previously identified. We define this recommender system in terms of a new color image representation using three matrices that depend on the noise-free pixels of the image to restore, and two parameters: $k$, the number of features; and $\lambda$, the regularization factor. We perform experiments on a well-known image database to test our algorithm and we provide image quality statistics for the results obtained. We discuss the roles of bias and variance in the performance of our algorithm as determined by the values of $k$ and $\lambda$, and provide guidance on how to choose the values of these parameters. Finally, we discuss the possibility of using our collaborative filtering recommender system to perform image inpainting and super-resolution.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
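A minimal single-channel sketch of the collaborative-filtering idea: factorize the image as a low-rank product fit only on the noise-free pixels, then fill the noisy pixels with the model's predictions. The gradient-descent solver, toy image and hyperparameters are illustrative; the paper's three-matrix color representation is not reproduced here, but k and lambda play the same roles.

```python
import numpy as np

def restore_channel(img, mask, k=8, lam=0.1, n_iter=1000, lr=0.02):
    """Fit img ~ U @ V.T on the noise-free pixels only, then fill the rest.

    mask[i, j] is True where the pixel is known noise-free; k and lam stand
    in for the paper's number of features and regularization factor.
    """
    m, n = img.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    for _ in range(n_iter):
        E = (U @ V.T - img) * mask                   # error on known pixels only
        U, V = U - lr * (E @ V + lam * U), V - lr * (E.T @ U + lam * V)
    out = img.copy()
    out[~mask] = (U @ V.T)[~mask]                    # predict the noisy pixels
    return out

img = np.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))   # rank-1 toy image
mask = np.random.default_rng(1).random(img.shape) > 0.2        # ~20% impulse noise
print(np.abs(restore_channel(img, mask) - img).max())          # reconstruction error
```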
A Parallel Simulator for Massive Reservoir Models Utilizing Distributed-Memory Parallel Systems
This paper presents our work on developing parallel computational methods for two-phase flow on modern parallel computers, where techniques for linear solvers and nonlinear methods are studied and the standard and inexact Newton methods are investigated. A multi-stage preconditioner for two-phase flow is applied and advanced matrix processing strategies are studied. A local reordering method is developed to speed the solution of linear systems. Numerical experiments show that these computational methods are effective and scalable, and are capable of computing large-scale reservoir simulation problems using thousands of CPU cores on parallel computers. The nonlinear techniques, preconditioner and matrix processing strategies can also be applied to three-phase black oil, compositional and thermal models.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bifurcation structure of cavity soliton dynamics in a VCSEL with saturable absorber and time-delayed feedback
We consider a wide-aperture surface-emitting laser with a saturable absorber section subjected to time-delayed feedback. We adopt the mean-field approach assuming a single longitudinal mode operation of the solitary VCSEL. We investigate cavity soliton dynamics under the effect of time-delayed feedback in a self-imaging configuration where diffraction in the external cavity is negligible. Using bifurcation analysis, direct numerical simulations and numerical path continuation methods, we identify the possible bifurcations and map them in a plane of feedback parameters. We show that for both the homogeneous and localized stationary lasing solutions in one spatial dimension the time-delayed feedback induces complex spatiotemporal dynamics, in particular a period doubling route to chaos, quasiperiodic oscillations and multistability of the stationary solutions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Early warning signal for interior crises in excitable systems
The ability to reliably predict critical transitions in dynamical systems is a long-standing goal of diverse scientific communities. Previous work focused on early warning signals related to local bifurcations (critical slowing down) and non-bifurcation type transitions. We extend this toolbox and report on a characteristic scaling behavior (critical attractor growth) which is indicative of an impending global bifurcation, an interior crisis in excitable systems. We demonstrate our early warning signal in a conceptual climate model as well as in a model of coupled neurons known to exhibit extreme events. We observed critical attractor growth prior to interior crises of chaotic as well as strange-nonchaotic attractors. These observations promise to extend the classes of transitions that can be predicted via early warning signals.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery
We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix $X \in \mathbb{C}^{d_1\times d_2}$ of rank $r \ll\min(d_1,d_2)$ from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-$p$ quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to $1$ even for a number of measurements very close to the theoretical lower bound $r (d_1 +d_2 -r)$, i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order $2-p$) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Italian Pension Gap: a Stochastic Optimal Control Approach
We study the gap between the state pension provided by the Italian pension system pre-Dini reform and post-Dini reform. The goal is to fill the gap between the old and the new pension by joining a defined contribution pension scheme and adopting an optimal investment strategy that is target-based. We find that it is possible to cover, at least partially, this gap with the additional income of the pension scheme, especially in the case of late retirement and in the case of a stagnant career. Workers with dynamic careers and workers who retire early are those who are most penalised by the reform. Results are intuitive and in line with previous studies on the subject.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Immigration-induced phase transition in a regulated multispecies birth-death process
Power-law-distributed species counts or clone counts arise in many biological settings such as multispecies cell populations, population genetics, and ecology. This empirical observation that the number of species $c_{k}$ represented by $k$ individuals scales as negative powers of $k$ is also supported by a series of theoretical birth-death-immigration (BDI) models that consistently predict many low-population species, a few intermediate-population species, and very few high-population species. However, we show how a simple global population-dependent regulation in a neutral BDI model destroys the power law distributions. Simulation of the regulated BDI model shows a high probability of observing a high-population species that dominates the total population. Further analysis reveals that the origin of this breakdown is associated with the failure of a mean-field approximation for the expected species abundance distribution. We find an accurate estimate for the expected distribution $\langle c_k \rangle$ by mapping the problem to a lower-dimensional Moran process, allowing us to also straightforwardly calculate the covariances $\langle c_k c_\ell \rangle$. Finally, we exploit the concepts associated with energy landscapes to explain the failure of the mean-field assumption by identifying a phase transition in the quasi-steady-state species counts triggered by a decreasing immigration rate.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Nanoscale assembly of superconducting vortices with scanning tunnelling microscope tip
Vortices play a crucial role in determining the properties of superconductors as well as their applications. Therefore, characterization and manipulation of vortices, especially at the single vortex level, is of great importance. Among many techniques to study single vortices, scanning tunneling microscopy (STM) stands out as a powerful tool, due to its ability to detect the local electronic states and its high spatial resolution. However, local control of superconductivity as well as the manipulation of individual vortices with the STM tip is still lacking. Here we report a new function of the STM, namely to control the local pinning in a superconductor through the heating effect. This effect allows us to quench the superconducting state at the nanoscale, and leads to the growth of vortex clusters whose size can be controlled by the bias voltage. We also demonstrate the use of an STM tip to assemble single quantum vortices into desired nanoscale configurations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
STACCATO: A Novel Solution to Supernova Photometric Classification with Biased Training Sets
We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. (2012) -- a diffusion map combined with a random forest classifier -- to deal specifically with the case of biased training sets. We propose a novel method, called STACCATO ('SynThetically Augmented Light Curve ClassificATiOn'), that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set, and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low brightness SNe.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
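A minimal sketch of the GP-fit-and-augment step at the heart of the method, using scikit-learn: fit a GP to a noisy toy light curve, then draw posterior samples as synthetic training curves. The kernel choice and toy data are illustrative assumptions (the paper reports that the covariance choice matters little).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy light curve: a Gaussian-shaped flux peak plus noise (stand-in for SN photometry)
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 25)[:, None]
flux = np.exp(-0.5 * ((t.ravel() - 30) / 12) ** 2) + 0.05 * rng.normal(size=25)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=10.0)
                              + WhiteKernel(noise_level=0.01),
                              normalize_y=True)
gp.fit(t, flux)

# synthetic augmentation: draw extra plausible light curves from the fitted GP
t_grid = np.linspace(0, 100, 200)[:, None]
synthetic = gp.sample_y(t_grid, n_samples=5, random_state=1)
print(synthetic.shape)        # (200, 5): five augmented training curves
```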
Coupling of multiscale and multi-continuum approaches
Simulating complex processes in fractured media requires some type of model reduction. Well-known approaches include multi-continuum techniques, which have been commonly used in approximating subgrid effects for flow and transport in fractured media. Our goal in this paper is to (1) show a relation between multi-continuum approaches and the Generalized Multiscale Finite Element Method (GMsFEM) and (2) discuss coupling these approaches for solving problems in complex multiscale fractured media. The GMsFEM, a systematic approach, constructs multiscale basis functions via local spectral decomposition in pre-computed snapshot spaces. We show that GMsFEM can automatically identify separate fracture networks via local spectral problems. We discuss the relation between these basis functions and continuums in multi-continuum methods. The GMsFEM can automatically detect each continuum and represent the interaction between the continuum and its surrounding (matrix). For problems with simplified fracture networks, we propose a simplified basis construction with the GMsFEM. This simplified approach is effective when the fracture networks are known and have simplified geometries. We show that this approach can achieve a similar result compared to the results using the GMsFEM with spectral basis functions. Further, we discuss the coupling between the GMsFEM and multi-continuum approaches. In this case, many fractures are resolved while for unresolved fractures, we use a multi-continuum approach with local Representative Volume Element (RVE) information. As a result, the method deals with a system of equations on a coarse grid, where each equation represents one of the continua on the fine grid. We present various basis construction mechanisms and numerical results.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Spaces which invert weak homotopy equivalences
It is well known that if $X$ is a CW-complex, then for every weak homotopy equivalence $f:A\to B$, the map $f_*:[X,A]\to [X,B]$ induced in homotopy classes is a bijection. For which spaces $X$ is $f^*:[B,X]\to [A,X]$ a bijection for every weak equivalence $f$? This question was considered by J. Strom and T. Goodwillie. In this note we prove that a non-empty space inverts weak equivalences if and only if it is contractible.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Minimal surfaces in the 3-sphere by desingularizing intersecting Clifford tori
For each integer $k \geq 2$, we apply gluing methods to construct sequences of minimal surfaces embedded in the round $3$-sphere. We produce two types of sequences, all desingularizing collections of intersecting Clifford tori. Sequences of the first type converge to a collection of $k$ Clifford tori intersecting with maximal symmetry along these two circles. Near each of the circles, after rescaling, the sequences converge smoothly on compact subsets to a Karcher-Scherk tower of order $k$. Sequences of the second type desingularize a collection of the same $k$ Clifford tori supplemented by an additional Clifford torus equidistant from the original two circles of intersection, so that the latter torus orthogonally intersects each of the former $k$ tori along a pair of disjoint orthogonal circles, near which the corresponding rescaled sequences converge to a singly periodic Scherk surface. The simpler examples of the first type resemble surfaces constructed by Choe and Soret \cite{CS} by different methods where the number of handles desingularizing each circle is the same. There is a plethora of new examples which are more complicated and on which the number of handles for the two circles differs. Examples of the second type are new as well.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Results from EDGES High-Band: I. Constraints on Phenomenological Models for the Global $21$ cm Signal
We report constraints on the global $21$ cm signal due to neutral hydrogen at redshifts $14.8 \geq z \geq 6.5$. We derive our constraints from low foreground observations of the average sky brightness spectrum conducted with the EDGES High-Band instrument between September $7$ and October $26$, $2015$. Observations were calibrated by accounting for the effects of antenna beam chromaticity, antenna and ground losses, signal reflections, and receiver parameters. We evaluate the consistency between the spectrum and phenomenological models for the global $21$ cm signal. For tanh-based representations of the ionization history during the epoch of reionization, we rule out, at $\geq2\sigma$ significance, models with duration of up to $\Delta z = 1$ at $z\approx8.5$ and higher than $\Delta z = 0.4$ across most of the observed redshift range under the usual assumption that the $21$ cm spin temperature is much larger than the temperature of the cosmic microwave background (CMB) during reionization. We also investigate a `cold' IGM scenario that assumes perfect Ly$\alpha$ coupling of the $21$ cm spin temperature to the temperature of the intergalactic medium (IGM), but that the IGM is not heated by early stars or stellar remnants. Under this assumption, we reject tanh-based reionization models of duration $\Delta z \lesssim 2$ over most of the observed redshift range. Finally, we explore and reject a broad range of Gaussian models for the $21$ cm absorption feature expected in the First Light era. As an example, we reject $100$ mK Gaussians with duration (full width at half maximum) $\Delta z \leq 4$ over the range $14.2\geq z\geq 6.5$ at $\geq2\sigma$ significance.
0
1
0
0
0
0
Novel paradigms for advanced distribution grid energy management
The electricity distribution grid was not designed to cope with the load dynamics imposed by high penetration of electric vehicles, nor to deal with the increasing deployment of distributed Renewable Energy Sources. Distribution System Operators (DSO) will increasingly rely on flexible Distributed Energy Resources (flexible loads, controllable generation and storage) to keep the grid stable and to ensure quality of supply. In order to properly integrate demand-side flexibility, DSOs need new energy management architectures, capable of fostering collaboration with wholesale market actors and prosumers. We propose the creation of Virtual Distribution Grids (VDG) over a common physical infrastructure, to cope with heterogeneity of resources and actors, and with the increasing complexity of distribution grid management and related resource allocation problems. Focusing on residential VDG, we propose an agent-based hierarchical architecture for providing Demand-Side Management services through a market-based approach, where households transact their surplus/lack of energy and their flexibility with neighbours, aggregators, utilities and DSOs. For implementing the overall solution, we consider fine-grained control of smart homes based on Internet of Things technology. Homes seamlessly transact self-enforcing smart contracts over a blockchain-based generic platform. Finally, we extend the architecture to solve existing problems in smart home control, beyond energy management.
1
0
0
0
0
0
Uniformly Bounded Sets in Quasiperiodically Forced Dynamical Systems
This paper addresses structures of state space in quasiperiodically forced dynamical systems. We develop a theory of ergodic partition of state space in a class of measure-preserving and dissipative flows, which is a natural extension of the existing theory for measure-preserving maps. The ergodic partition result is based on the eigenspace at eigenvalue 0 of the associated Koopman operator, which is realized via time-averages of observables, and provides a constructive way to visualize a low-dimensional slice through a high-dimensional invariant set. We apply the result to systems with a finite number of attractors and show that the time-average of a continuous observable is well-defined and reveals the invariant sets, namely, a finite number of basins of attraction. We provide a characterization of invariant sets in quasiperiodically forced systems, and prove a theorem on uniform boundedness of the invariant sets. This series of analytical results enables numerical analysis of invariant sets in quasiperiodically forced systems based on the ergodic partition and time-averages. Using this, we analyze a nonlinear model of complex power grids that represents the short-term swing instability, named the coherent swing instability. We show that our analytical results can be used to understand stability regions in such complex systems.
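As a hedged illustration of the time-average machinery, the following sketch approximates the infinite-time average of an observable along trajectories of a hypothetical dissipative system with two-frequency (quasiperiodic) forcing; level sets of this average over a grid of initial conditions then approximate the ergodic partition, e.g. basins of attraction. The model and observable are stand-ins, not the power grid model of the paper.

```python
import numpy as np

def rk4_step(f, x, t, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

w1, w2 = 1.0, np.sqrt(2.0)   # rationally independent forcing frequencies

def f(x, t):
    # Hypothetical dissipative pendulum with quasiperiodic forcing.
    th, v = x
    return np.array([v, -0.2 * v - np.sin(th)
                     + 0.5 * (np.cos(w1 * t) + np.cos(w2 * t))])

def time_average(x0, g=lambda x: np.cos(x[0]), T=200.0, h=0.01):
    # Finite-time approximation of the time average of observable g; in the
    # ergodic-partition construction this is one coordinate of the
    # time-average map evaluated at the initial condition x0.
    x, t, acc, n = np.array(x0, dtype=float), 0.0, 0.0, 0
    while t < T:
        x = rk4_step(f, x, t, h)
        t += h
        acc += g(x)
        n += 1
    return acc / n

# Averages computed over a grid of initial conditions reveal invariant sets.
print(time_average([0.1, 0.0]), time_average([3.0, 0.0]))
```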
1
0
0
0
0
0
Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade-off
Vector embedding is a foundational building block of many deep learning models, especially in natural language processing. In this paper, we present a theoretical framework for understanding the effect of dimensionality on vector embeddings. We observe that the distributional hypothesis, a governing principle of statistical semantics, requires a natural unitary-invariance for vector embeddings. Motivated by the unitary-invariance observation, we propose the Pairwise Inner Product (PIP) loss, a unitary-invariant metric on the similarity between two embeddings. We demonstrate that the PIP loss captures the difference in functionality between embeddings, and that the PIP loss is tightly connected with two basic properties of vector embeddings, namely similarity and compositionality. By formulating the embedding training process as matrix factorization with noise, we reveal a fundamental bias-variance trade-off between the signal spectrum and noise power in the dimensionality selection process. This bias-variance trade-off sheds light on many empirical observations which have not been thoroughly explained, for example the existence of an optimal dimensionality. Moreover, we discover two new results about vector embeddings, namely their robustness against over-parametrization and their forward stability. The bias-variance trade-off of the PIP loss explicitly answers the fundamental open problem of dimensionality selection for vector embeddings.
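A minimal sketch of the PIP loss itself is easy to state: form the Gram (pairwise inner product) matrix $EE^\top$ of each embedding matrix and compare the two Gram matrices in Frobenius norm. The unitary invariance claimed in the abstract is then immediate, as the toy check below illustrates (the matrix sizes are arbitrary).

```python
import numpy as np

def pip_loss(E1, E2):
    # PIP loss between two embedding matrices (rows = tokens, columns =
    # dimensions): compare the unitary-invariant Gram matrices E E^T
    # rather than the embeddings themselves.
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord='fro')

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 50))
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal matrix
print(pip_loss(E, E @ Q))  # ~0: rotating the embedding leaves the loss unchanged
```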
0
0
0
1
0
0
Strongly correlated one-dimensional Bose-Fermi quantum mixtures: symmetry and correlations
We consider multi-component quantum mixtures (bosonic, fermionic, or mixed) with strongly repulsive contact interactions in a one-dimensional harmonic trap. In the limit of infinitely strong repulsion and zero temperature, using the class-sum method, we study the symmetries of the spatial wave function of the mixture. We find that the ground state of the system has the most symmetric spatial wave function allowed by the type of mixture. This provides an example of the generalized Lieb-Mattis theorem. Furthermore, we show that the symmetry properties of the mixture are embedded in the large-momentum tails of the momentum distribution, which we evaluate both at infinite repulsion by an exact solution and at finite interactions using a numerical DMRG approach. This implies that an experimental measurement of Tan's contact would allow one to unambiguously determine the symmetry of any kind of multi-component mixture.
0
1
0
0
0
0
Joint Pose and Principal Curvature Refinement Using Quadrics
In this paper we present a novel joint approach for optimising surface curvature and pose alignment. We present two implementations of this joint optimisation strategy: a fast two-frame implementation and an offline multi-frame approach. We demonstrate an order of magnitude improvement in simulation over state-of-the-art dense relative point-to-plane Iterative Closest Point (ICP) pose alignment using our dense joint frame-to-frame approach, and show pose drift comparable to dense point-to-plane ICP bundle adjustment using low-cost depth sensors. Additionally, our improved joint quadric-based approach can be used to estimate surface curvature on noisy point clouds more accurately than previous approaches.
1
0
0
0
0
0
Stable basic sets for finite special linear and unitary groups
In this paper we show, using Deligne-Lusztig theory and Kawanaka's theory of generalised Gelfand-Graev representations, that the decomposition matrix of the special linear and unitary groups in non-defining characteristic can be made unitriangular with respect to a basic set that is stable under the action of automorphisms.
0
0
1
0
0
0
Correspondences without a Core
We study the formal properties of correspondences of curves without a core, focusing on the case of étale correspondences. The motivating examples come from Hecke correspondences of Shimura curves. Given a correspondence without a core, we construct an infinite graph $\mathcal{G}_{gen}$ together with a large group of "algebraic" automorphisms $A$. The graph $\mathcal{G}_{gen}$ measures the "generic dynamics" of the correspondence. We construct specialization maps $\mathcal{G}_{gen}\rightarrow\mathcal{G}_{phys}$ to the "physical dynamics" of the correspondence. We also prove results on the number of bounded étale orbits, in particular generalizing a recent theorem of Hallouin and Perret. We use a variety of techniques: Galois theory, the theory of groups acting on infinite graphs, and finite group schemes.
0
0
1
0
0
0
Oxidative species-induced excitonic transport in tubulin aromatic networks: Potential implications for neurodegenerative disease
Oxidative stress is a pathological hallmark of neurodegenerative tauopathic disorders such as Alzheimer's disease and Parkinson's disease-related dementia, which are characterized by altered forms of the microtubule-associated protein (MAP) tau. MAP tau is a key protein in stabilizing the microtubule architecture that regulates neuron morphology and synaptic strength. The precise role of reactive oxygen species (ROS) in the tauopathic disease process, however, is poorly understood. It is known that the production of ROS by mitochondria can result in ultraweak photon emission (UPE) within cells. One likely absorber of these photons is the microtubule cytoskeleton, as it forms a vast network spanning neurons, is highly co-localized with mitochondria, and shows a high density of aromatic amino acids. Functional microtubule networks may traffic this ROS-generated endogenous photon energy for cellular signaling, or they may serve as dissipaters/conduits of such energy. Experimentally, after in vitro exposure to exogenous photons, microtubules have been shown to reorient and reorganize in a dose-dependent manner with the greatest effect being observed around 280 nm, in the tryptophan and tyrosine absorption range. In this paper, recent modeling efforts based on ambient-temperature experiments are presented, showing that tubulin polymers can feasibly absorb and channel these photoexcitations via resonance energy transfer, on the order of dendritic length scales. Since microtubule networks are compromised in tauopathic diseases, patients with these illnesses would be unable to support effective channeling of these photons for signaling or dissipation. Consequent emission surplus due to increased UPE production or decreased ability to absorb and transfer may lead to increased cellular oxidative damage, thus hastening the neurodegenerative process.
0
1
0
0
0
0
On radial Schroedinger operators with a Coulomb potential
This paper presents a thorough analysis of 1-dimensional Schroedinger operators whose potential is a linear combination of the Coulomb term 1/r and the centrifugal term 1/r^2. We allow both coupling constants to be complex. Using natural boundary conditions at 0, we introduce a two-parameter holomorphic family of closed operators. We call them the Whittaker operators, since in the mathematical literature their eigenvalue equation is called the Whittaker equation. Spectral and scattering theory for Whittaker operators is studied. Whittaker operators appear in quantum mechanics as the radial part of the Schroedinger operator with a Coulomb potential.
0
0
1
0
0
0
Extrema-weighted feature extraction for functional data
Motivation: Although there is a rich literature on methods for assessing the impact of functional predictors, the focus has been on approaches for dimension reduction that can fail dramatically in certain applications. Examples of standard approaches include functional linear models, functional principal components regression, and cluster-based approaches, such as latent trajectory analysis. This article is motivated by applications in which the dynamics in a predictor, across times when the value is relatively extreme, are particularly informative about the response. For example, physicians are interested in relating the dynamics of blood pressure changes during surgery to post-surgery adverse outcomes, and it is thought that the dynamics are more important when blood pressure is significantly elevated or lowered. Methods: We propose a novel class of extrema-weighted feature (XWF) extraction models. Key components in defining XWFs include the marginal density of the predictor, a function up-weighting values at high quantiles of this marginal, and functionals characterizing local dynamics. Algorithms are proposed for fitting of XWF-based regression and classification models, and are compared with current methods for functional predictors in simulations and a blood pressure during surgery application. Results: XWFs find features of intraoperative blood pressure trajectories that are predictive of postoperative mortality. By their nature, most of these features cannot be found by previous methods.
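To make the idea concrete, here is one hypothetical extrema-weighted feature: a weight that is nonzero only when the trajectory exceeds a high quantile of its own marginal, multiplied by a local-dynamics functional (the absolute derivative) and integrated over time. Both choices are illustrative assumptions, not the paper's exact XWF construction.

```python
import numpy as np

def xwf_feature(x, t, q=0.9):
    # Weight: up-weight times when x(t) exceeds a high quantile of its own
    # values (a crude stand-in for up-weighting high quantiles of the
    # marginal density); local dynamics: absolute derivative.
    thresh = np.quantile(x, q)
    w = np.clip(x - thresh, 0.0, None)      # nonzero only at extreme values
    dxdt = np.gradient(x, t)
    return np.trapz(w * np.abs(dxdt), t)    # weighted variation near extremes

t = np.linspace(0.0, 1.0, 500)
rng = np.random.default_rng(1)
x = 100.0 + 20.0 * np.sin(8 * np.pi * t) + 5.0 * rng.normal(size=t.size)
print(xwf_feature(x, t))   # one scalar feature fed to a downstream classifier
```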
0
0
0
1
0
0
A representation theorem for stochastic processes with separable covariance functions, and its implications for emulation
Many applications require stochastic processes specified on two- or higher-dimensional domains; spatial or spatial-temporal modelling, for example. In these applications it is attractive, for conceptual simplicity and computational tractability, to propose a covariance function that is separable; e.g., the product of a covariance function in space and one in time. This paper presents a representation theorem for such a proposal, and shows that all processes with continuous separable covariance functions are second-order identical to the product of second-order uncorrelated processes. It discusses the implications of separable or nearly separable prior covariances for the statistical emulation of complicated functions such as computer codes, and critically reexamines the conventional wisdom concerning emulator structure, and size of design.
0
0
1
1
0
0
Facial Recognition Enabled Smart Door Using Microsoft Face API
Privacy and Security are two universal rights and, to ensure that in our daily life we are secure, a lot of research is going on in the field of home security, and IoT is the turning point for the industry, where we connect everyday objects to share data for our betterment. Facial recognition is a well-established process in which the face is detected and identified out of the image. We aim to create a smart door, which secures the gateway on the basis of who we are. In our proof of concept of a smart door we have used a live HD camera on the front side of setup attached to a display monitor connected to the camera to show who is standing in front of the door, also the whole system will be able to give voice outputs by processing text them on the Raspberry Pi ARM processor used and show the answers as output on the screen. We are using a set of electromagnets controlled by the microcontroller, which will act as a lock. So a person can open the smart door with the help of facial recognition and at the same time also be able to interact with it. The facial recognition is done by Microsoft face API but our state of the art desktop application operating over Microsoft Visual Studio IDE reduces the computational time by detecting the face out of the photo and giving that as the output to Microsoft Face API, which is hosted over Microsoft Azure cloud support.
1
0
0
0
0
0
Veamy: an extensible object-oriented C++ library for the virtual element method
This paper summarizes the development of Veamy, an object-oriented C++ library for the virtual element method (VEM) on general polygonal meshes, whose modular design is focused on its extensibility. The linear elastostatic and Poisson problems in two dimensions have been chosen as the starting stage for the development of this library. The theory of the VEM, upon which Veamy is built, is presented using a notation and a terminology that resemble the language of the finite element method (FEM) in engineering analysis. Several examples are provided to demonstrate the usage of Veamy, and in particular, one of them features the interaction between Veamy and the polygonal mesh generator PolyMesher. A computational performance comparison between VEM and FEM is also conducted. Veamy is free and open source software.
1
0
0
0
0
0
Composition by Conversation
Most musical programming languages are developed purely for coding virtual instruments or algorithmic compositions. Although there has been some work in the domain of musical query languages for music information retrieval, there has been little attempt to unify the principles of musical programming and query languages with cognitive and natural language processing models that would facilitate the activity of composition by conversation. We present a prototype framework, called MusECI, that merges these domains, permitting score-level algorithmic composition in a text editor while also supporting connectivity to existing natural language processing frameworks.
1
0
0
0
0
0
Electrostatic and induction effects in the solubility of water in alkanes
Experiments show that at 298~K and 1 atm pressure the transfer free energy, $\mu^{\rm ex}$, of water from its vapor to liquid normal alkanes $C_nH_{2n+2}$ ($n=5\ldots12$) is negative. Earlier it was found that with the united-atom TraPPe model for alkanes and the SPC/E model for water, one had to artificially enhance the attractive alkane-water cross interaction to capture this behavior. Here we revisit the calculation of $\mu^{\rm ex}$ using the polarizable AMOEBA and the non-polarizable Charmm General (CGenFF) forcefields. We test both the AMOEBA03 and AMOEBA14 water models; the former has been validated with the AMOEBA alkane model while the latter is a revision of AMOEBA03 to better describe liquid water. We calculate $\mu^{\rm ex}$ using the test particle method. With CGenFF, $\mu^{\rm ex}$ is positive and the error relative to experiments is about 1.5 $k_{\rm B}T$. With AMOEBA, $\mu^{\rm ex}$ is negative and deviations relative to experiments are between 0.25 $k_{\rm B}T$ (AMOEBA14) and 0.5 $k_{\rm B}T$ (AMOEBA03). Quantum chemical calculations in a continuum solvent suggest that zero point effects may account for some of the deviation. Forcefield limitations notwithstanding, electrostatic and induction effects, commonly ignored in considerations of water-alkane interactions, appear to be decisive in the solubility of water in alkanes.
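The test particle (Widom) method referenced here estimates $\mu^{\rm ex}$ as $-k_{\rm B}T \ln \langle e^{-\Delta U/k_{\rm B}T}\rangle$, averaging the Boltzmann factor of a ghost particle's insertion energy over configurations. The sketch below applies it to a single random Lennard-Jones "snapshot" purely for illustration; a real calculation would average over properly equilibrated configurations generated with the forcefields named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
kT, L, N = 1.0, 8.0, 100
pos = rng.uniform(0, L, size=(N, 3))  # NOT an equilibrated liquid; a random
                                      # snapshot used only to show the formula

def insertion_energy(r_test, pos, L, eps=1.0, sig=1.0):
    # Lennard-Jones energy of a ghost particle at r_test (minimum image).
    d = pos - r_test
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    inv6 = (sig * sig / r2) ** 3
    return np.sum(4.0 * eps * (inv6 * inv6 - inv6))

# Widom's method: mu_ex = -kT * ln < exp(-dU/kT) >, over random insertions.
boltz = [np.exp(-insertion_energy(rng.uniform(0, L, 3), pos, L) / kT)
         for _ in range(5000)]
print("mu_ex estimate:", -kT * np.log(np.mean(boltz)))
```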
0
1
0
0
0
0
Impact of theoretical priors in cosmological analyses: the case of single field quintessence
We investigate the impact of general conditions of theoretical stability and cosmological viability on dynamical dark energy models. As a powerful example, we study whether minimally coupled, single field Quintessence models that are safe from ghost instabilities, can source the CPL expansion history recently shown to be mildly favored by a combination of CMB (Planck) and Weak Lensing (KiDS) data. We find that in their most conservative form, the theoretical conditions impact the analysis in such a way that smooth single field Quintessence becomes significantly disfavored with respect to the standard LCDM cosmological model. This is due to the fact that these conditions cut a significant portion of the $(w_0,w_a)$ parameter space for CPL, in particular eliminating the region that would be favored by weak lensing data. Within the scenario of a smooth dynamical dark energy parametrized with CPL, weak lensing data favors a region that would require multiple fields to ensure gravitational stability.
0
1
0
0
0
0
Spin tracking of polarized protons in the Main Injector at Fermilab
The Main Injector (MI) at Fermilab currently produces high-intensity beams of protons at energies of 120 GeV for a variety of physics experiments. Acceleration of polarized protons in the MI would provide opportunities for a rich spin physics program at Fermilab. To achieve polarized proton beams in the Fermilab accelerator complex, detailed spin tracking simulations with realistic parameters based on the existing facility are required. This report presents studies at the MI using a single 4-twist Siberian snake to determine the depolarizing spin resonances for the relevant synchrotrons. Results will be presented first for a perfect MI lattice, followed by a lattice that includes the real MI imperfections, such as the measured magnet field errors and quadrupole misalignments. The tolerances of each of these factors in maintaining polarization in the Main Injector will be discussed.
0
1
0
0
0
0
Wormholes and masses for Goldstone bosons
There exist non-trivial stationary points of the Euclidean action for an axion particle minimally coupled to Einstein gravity, dubbed wormholes. They explicitly break the continuous global shift symmetry of the axion in a non-perturbative way, and generate an effective potential that may compete with QCD depending on the value of the axion decay constant. In this paper, we explore both theoretical and phenomenological aspects of this issue. On the theory side, we address the problem of stability of the wormhole solutions, and we show that the spectrum of the quadratic action features only positive eigenvalues. On the phenomenological side, we discuss, besides the obvious application to the QCD axion, relevant consequences for models with ultralight dark matter, black hole superradiance, and the relaxation of the electroweak scale. We conclude discussing wormhole solutions for a generic coset and the potential they generate.
0
1
0
0
0
0
A class of states supporting diffusive spin dynamics in the isotropic Heisenberg model
Spin transport in the isotropic Heisenberg model in the sector of zero magnetization is generically super-diffusive. Despite that, we demonstrate here that for a specific set of domain-wall-like initial product states it can instead be diffusive. We theoretically explain the time evolution of such states by showing that, in the limiting regime of weak spatial modulation, they are approximately product states for very long times, and we demonstrate that even in the case of larger spatial modulation the bipartite entanglement entropy grows only logarithmically in time. In the limiting regime we derive a simple closed equation governing the dynamics, which, in the continuum limit and for the initial step magnetization profile, results in a solution expressed in terms of Fresnel integrals.
0
1
0
0
0
0
Learning to Invert: Signal Recovery via Deep Convolutional Networks
The promise of compressive sensing (CS) has been offset by two significant challenges. First, real-world data is not exactly sparse in a fixed basis. Second, current high-performance recovery algorithms are slow to converge, which limits CS to either non-real-time applications or scenarios where massive back-end computing is available. In this paper, we attack both of these challenges head-on by developing a new signal recovery framework we call {\em DeepInverse} that learns the inverse transformation from measurement vectors to signals using a {\em deep convolutional network}. When trained on a set of representative images, the network learns both a representation for the signals (addressing challenge one) and an inverse map approximating a greedy or convex recovery algorithm (addressing challenge two). Our experiments indicate that the DeepInverse network closely approximates the solution produced by state-of-the-art CS recovery algorithms yet is hundreds of times faster in run time. The tradeoff for the ultrafast run time is a computationally intensive, off-line training procedure typical to deep networks. However, the training needs to be completed only once, which makes the approach attractive for a host of sparse recovery problems.
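Below is a minimal stand-in for the DeepInverse idea, assuming a fixed Gaussian measurement matrix and an illustrative architecture (a linear "proxy" layer that lifts measurements to image shape, followed by a small convolutional refiner); the paper's actual network layout and training pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

n, m = 32 * 32, 256                  # signal and measurement sizes (assumed)
Phi = torch.randn(m, n) / m ** 0.5   # fixed random Gaussian measurement matrix

class DeepInverseSketch(nn.Module):
    # Learned inverse map y -> x_hat: linear proxy + CNN refinement.
    # Layer sizes are illustrative, not the paper's.
    def __init__(self):
        super().__init__()
        self.proxy = nn.Linear(m, n)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, 5, padding=2),
        )

    def forward(self, y):
        x0 = self.proxy(y).view(-1, 1, 32, 32)
        return x0 + self.refine(x0)   # residual refinement of the proxy image

x = torch.randn(8, n)                 # batch of vectorized training signals
y = x @ Phi.T                         # compressive measurements
model = DeepInverseSketch()
loss = nn.functional.mse_loss(model(y).view(8, n), x)
loss.backward()                       # an optimizer step would follow
```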
1
0
0
1
0
0
Multi-message Authentication over Noisy Channel with Secure Channel Codes
In this paper, we investigate multi-message authentication to combat adversaries with infinite computational capacity. An authentication framework over a wiretap channel $(W_1,W_2)$ is proposed to achieve information-theoretic security with the same key. The proposed framework bridges the two research areas in physical (PHY) layer security: secure transmission and message authentication. Specifically, the sender Alice first transmits message $M$ to the receiver Bob over $(W_1,W_2)$ with an error correction code; then Alice employs a hash function (i.e., $\varepsilon$-AWU$_2$ hash functions) to generate a message tag $S$ of message $M$ using key $K$, and encodes $S$ to a codeword $X^n$ by leveraging an existing strongly secure channel coding with exponentially small (in code length $n$) average probability of error; finally, Alice sends $X^n$ over $(W_1,W_2)$ to Bob who authenticates the received messages. We develop a theorem regarding the requirements/conditions for the authentication framework to be information-theoretic secure for authenticating a polynomial number of messages in terms of $n$. Based on this theorem, we propose an authentication protocol that can guarantee the security requirements, and prove its authentication rate can approach infinity when $n$ goes to infinity. Furthermore, we design and implement an efficient and feasible authentication protocol over binary symmetric wiretap channel (BSWC) by using \emph{Linear Feedback Shift Register} based (LFSR-based) hash functions and strongly secure polar codes. Through extensive experiments, it is demonstrated that the proposed protocol can achieve low time cost, high authentication rate, and low authentication error rate.
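As a toy illustration of the LFSR primitive underlying LFSR-based hash families, the snippet below runs a Fibonacci-style linear feedback shift register over GF(2); with this shifting convention the tap choice corresponds to the primitive polynomial $x^4 + x + 1$, giving the maximal period $2^4 - 1 = 15$. It is not the paper's authentication protocol.

```python
def lfsr_stream(seed, taps, nbits):
    # Fibonacci LFSR: output the last register bit, then shift in the XOR
    # of the tapped positions as feedback.
    state = list(seed)          # list of 0/1 bits, length = register degree
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Degree-4 register; taps (3, 2) give recurrence b_n = b_{n-3} XOR b_{n-4},
# i.e. characteristic polynomial x^4 + x + 1 (primitive), period 15.
bits = lfsr_stream([1, 0, 0, 1], taps=(3, 2), nbits=20)
print(''.join(map(str, bits)))
```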
1
0
0
0
0
0
The Garden of Eden theorem: old and new
We review topics in the theory of cellular automata and dynamical systems that are related to the Moore-Myhill Garden of Eden theorem.
0
1
1
0
0
0
Bosonization in Non-Relativistic CFTs
We demonstrate explicitly the correspondence between all protected operators in a 2+1 dimensional non-supersymmetric bosonization duality in the non-relativistic limit. Roughly speaking we consider $SU(N)$ Chern-Simons field theory at level $k$ with $N_f$ flavours of fundamental boson, and match its chiral sector to that of a $SU(k)$ theory at level $N$ with $N_f$ fundamental fermions. We present the matching at the level of indices and individual operators, seeing the mechanism of failure for $N_f > N$, and point out that the non-relativistic setting is a particularly friendly setting for studying interesting questions about such dualities.
0
1
0
0
0
0
Learning latent representations for style control and transfer in end-to-end speech synthesis
In this paper, we introduce the Variational Autoencoder (VAE) to an end-to-end speech synthesis model, to learn the latent representation of speaking styles in an unsupervised manner. The style representation learned through VAE shows good properties such as disentangling, scaling, and combination, which makes it easy for style control. Style transfer can be achieved in this framework by first inferring the style representation through the recognition network of the VAE, then feeding it into the TTS network to guide the style of the synthesized speech. To avoid Kullback-Leibler (KL) divergence collapse in training, several techniques are adopted. Finally, the proposed model shows good performance in style control and outperforms the Global Style Token (GST) model in ABX preference tests on style transfer.
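The recognition-network step described above reduces, in the simplest sketch, to predicting a mean and log-variance from a reference encoding, sampling a style latent with the reparameterization trick, and returning the KL term for the training loss; the dimensions and layers below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    # Minimal sketch of a VAE recognition path for style control.
    def __init__(self, ref_dim=128, z_dim=16):
        super().__init__()
        self.mu = nn.Linear(ref_dim, z_dim)
        self.log_var = nn.Linear(ref_dim, z_dim)

    def forward(self, ref):
        mu, log_var = self.mu(ref), self.log_var(ref)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterize
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
        return z, kl.mean()   # z conditions the TTS decoder; the KL term is
                              # typically annealed to avoid posterior collapse

vae = StyleVAE()
z, kl = vae(torch.randn(4, 128))
print(z.shape, float(kl))
```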
1
0
0
0
0
0
Directed negative-weight percolation
We consider a directed variant of the negative-weight percolation model in a two-dimensional, periodic, square lattice. The problem exhibits edge weights which are taken from a distribution that allows for both positive and negative values. Additionally, in this model variant all edges are directed. For a given realization of the disorder, a minimally weighted loop/path configuration is determined by performing a non-trivial transformation of the original lattice into a minimum weight perfect matching problem. For this problem, fast polynomial-time algorithms are available, thus we could study large systems with high accuracy. Depending on the fraction of negatively and positively weighted edges in the lattice, a continuous phase transition can be identified, whose characterizing critical exponents we have estimated by a finite-size scaling analysis of the numerically obtained data. We observe a strong change of the universality class with respect to standard directed percolation, as well as with respect to undirected negative-weight percolation. Furthermore, the relation to directed polymers in random media is illustrated.
0
1
0
0
0
0
Integrating Lipschitzian Dynamical Systems using Piecewise Algorithmic Differentiation
In this article we analyze a generalized trapezoidal rule for initial value problems with piecewise smooth right-hand side $F:\mathbb{R}^n\to\mathbb{R}^n$. When applied to such a problem, the classical trapezoidal rule suffers from a loss of accuracy if the solution trajectory intersects a nondifferentiability of $F$. The advantage of the proposed generalized trapezoidal rule is threefold: firstly, we can achieve a higher convergence order than with the classical method; moreover, the method is energy-preserving for piecewise linear Hamiltonian systems; finally, in analogy to the classical case, we derive a third-order interpolation polynomial for the numerical trajectory. In the smooth case the generalized rule reduces to the classical one, hence it is a proper extension of the classical theory. An error estimator is given and numerical results are presented.
0
0
1
0
0
0
Effects of Planetesimal Accretion on the Thermal and Structural Evolution of Sub-Neptunes
A remarkable discovery of NASA's Kepler mission is the wide diversity in the average densities of planets of similar mass. After gas disk dissipation, fully formed planets could interact with nearby planetesimals from a remnant planetesimal disk. These interactions would often lead to planetesimal accretion due to the relatively high ratio between the planet size and the Hill radius for typical planets. We present calculations using the open-source stellar evolution toolkit MESA (Modules for Experiments in Stellar Astrophysics), modified to include the deposition of planetesimals into the H/He envelopes of sub-Neptunes (~1-20 MEarth). We show that planetesimal accretion can alter the mass-radius isochrones for these planets. Owing to the inherent stochasticity of the accretion process, the same initial planet accreting the same total planetesimal mass can end up with mean densities differing by up to ~5% several Gyr after the last accretion; during the phase of rapid accretion these differences are more dramatic. The additional energy deposition from the accreted planetesimals increases the ratio of the planet's radius to that of the core during rapid accretion, which in turn leads to enhanced loss of atmospheric mass. As a result, the same initial planet can end up with very different envelope mass fractions. These differences manifest as differences in mean densities long after accretion stops. These effects are particularly important for planets initially less massive than ~10 MEarth and with envelope mass fractions less than ~10%, thought to be the most common type of planets discovered by Kepler.
0
1
0
0
0
0
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
We obtain estimation error rates and sharp oracle inequalities for regularization procedures of the form \begin{equation*} \hat f \in argmin_{f\in F}\left(\frac{1}{N}\sum_{i=1}^N\ell(f(X_i), Y_i)+\lambda \|f\|\right) \end{equation*} when $\|\cdot\|$ is any norm, $F$ is a convex class of functions and $\ell$ is a Lipschitz loss function satisfying a Bernstein condition over $F$. We explore both the bounded and subgaussian stochastic frameworks for the distribution of the $f(X_i)$'s, with no assumption on the distribution of the $Y_i$'s. The general results rely on two main objects: a complexity function, and a sparsity equation, that depend on the specific setting in hand (loss $\ell$ and norm $\|\cdot\|$). As a proof of concept, we obtain minimax rates of convergence in the following problems: 1) matrix completion with any Lipschitz loss function, including the hinge and logistic loss for the so-called 1-bit matrix completion instance of the problem, and quantile losses for the general case, which enables to estimate any quantile on the entries of the matrix; 2) logistic LASSO and variants such as the logistic SLOPE; 3) kernel methods, where the loss is the hinge loss, and the regularization function is the RKHS norm.
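As a concrete instance of the displayed estimator with a Lipschitz loss, the snippet below fits the logistic LASSO ($\ell$ = logistic loss, $\|\cdot\|$ = $\ell_1$ norm) on synthetic sparse data using scikit-learn; the data-generating parameters are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0                                   # sparse ground truth
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

# C is the inverse of the regularization strength lambda in the displayed
# objective; liblinear supports the l1-penalized logistic loss.
clf = LogisticRegression(penalty='l1', C=0.5, solver='liblinear').fit(X, y)
print("nonzero coefficients:", int(np.sum(clf.coef_ != 0)))
```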
0
0
1
1
0
0
On closed Lie ideals of certain tensor products of $C^*$-algebras
For a simple $C^*$-algebra $A$ and any other $C^*$-algebra $B$, it is proved that every closed ideal of $A \otimes^{\min} B$ is a product ideal if either $A$ is exact or $B$ is nuclear. The closed commutator of a closed ideal in a Banach algebra whose every closed ideal possesses a quasi-central approximate identity is described in terms of the commutator of the Banach algebra. If $\alpha$ is either the Haagerup norm, the operator space projective norm or the $C^*$-minimal norm, then this allows us to identify all closed Lie ideals of $A \otimes^{\alpha} B$, where $A$ and $B$ are simple, unital $C^*$-algebras with one of them admitting no tracial functionals, and to deduce that every non-central closed Lie ideal of $B(H) \otimes^{\alpha} B(H)$ contains the product ideal $K(H) \otimes^{\alpha} K(H)$. Closed Lie ideals of $A \otimes^{\min} C(X)$ are also determined, $A$ being any simple unital $C^*$-algebra with at most one tracial state and $X$ any compact Hausdorff space. And, it is shown that closed Lie ideals of $A \otimes^{\alpha} K(H)$ are precisely the product ideals, where $A$ is any unital $C^*$-algebra and $\alpha$ any completely positive uniform tensor norm.
0
0
1
0
0
0
On the Chemistry of the Young Massive Protostellar core NGC 2264 CMM3
We present the first gas-grain astrochemical model of the NGC 2264 CMM3 protostellar core. The chemical evolution of the core is affected by changing its physical parameters such as the total density and the amount of gas-depletion onto grain surfaces as well as the cosmic ray ionisation rate, $\zeta$. We estimated $\zeta_{\text {CMM3}}$ = 1.6 $\times$ 10$^{-17}$ s$^{-1}$. This value is 1.3 times higher than the standard CR ionisation rate, $\zeta_{\text {ISM}}$ = 1.3 $\times$ 10$^{-17}$ s$^{-1}$. Species respond differently to changes in the core's physical conditions, but they are more sensitive to changes in the depletion percentage and the CR ionisation rate than to variations in the core density. Gas-phase models highlighted the importance of surface reactions as factories of large molecules and showed that for sulphur-bearing species depletion is important to reproduce observations. Comparing the results of the reference model with the most recent millimeter observations of the NGC 2264 CMM3 core showed that our model is capable of reproducing the observed abundances of most of the species during early stages ($\le$ 3$\times$10$^4$ yrs) of their chemical evolution. Models with variations in the core density between 1 - 20 $\times$ 10$^6$ cm$^{-3}$ are also in good agreement with observations during the early time interval 1 $\times$ 10$^4 <$ t (yr) $<$ 5 $\times$ 10$^4$. In addition, models with higher CR ionisation rates, (5 - 10) $\times \zeta_{\text {ISM}}$, often overestimate the fractional abundances of the species. However, models with $\zeta_{\text {CMM3}}$ = 5 $\zeta_{\text {ISM}}$ may best fit observations at times $\sim$ 2 $\times$ 10$^4$ yrs. Our results suggest that CMM3 is (1 - 5) $\times$ 10$^4$ yrs old. Therefore, the core is chemically young and it may host a Class 0 object as suggested by previous studies.
0
1
0
0
0
0
Uniform Consistency in Stochastic Block Model with Continuous Community Label
\cite{bickel2009nonparametric} developed a general framework to establish consistency of community detection in the stochastic block model (SBM). In most applications of this framework, the community label is discrete. For example, in \citep{bickel2009nonparametric,zhao2012consistency} the degree-corrected SBM is assumed to have a discrete degree parameter. In this paper, we generalize the method of \cite{bickel2009nonparametric} to give a consistency analysis of the maximum likelihood estimator (MLE) in SBM with continuous community labels. We show that there is a standard procedure to transform the $\|\cdot\|_2$ error bound into a uniform error bound. We demonstrate the application of our general results by proving the uniform consistency (strong consistency) of the MLE in the exponential network model with interaction effects. Unfortunately, in the continuous parameter case, the condition ensuring uniform consistency that we obtain is much stronger than that in the discrete parameter case, namely $n\mu_n^5/(\log n)^{8}\rightarrow\infty$ versus $n\mu_n/\log n\rightarrow\infty$, where $n\mu_n$ represents the average degree of the network. However, the continuous case is the limit of the discrete one, so it is perhaps not surprising that, as we show, by discretizing the community label space into sufficiently small (but not too small) pieces and applying the MLE on the discretized community label space, uniform consistency holds under almost the same condition as in the discrete case. Such a phenomenon is striking in that the discretization does not depend on the data or the model. This reminds us of the thresholding method.
0
0
0
1
0
0
Fine-scale population structure analysis in Armadillidium vulgare (Isopoda: Oniscidea) reveals strong female philopatry
In the last decades, dispersal studies have benefitted from the use of molecular markers for detecting patterns differing between categories of individuals, and have highlighted sex-biased dispersal in several species. To explain this phenomenon, sex-related handicaps such as parental care have been recently proposed as a hypothesis. Herein we tested this hypothesis in Armadillidium vulgare, a terrestrial isopod in which females bear the totality of the high parental care costs. We performed a fine-scale analysis of sex-specific dispersal patterns, using males and females originating from five sampling points located within 70 meters of each other. Based on microsatellite markers and both F-statistics and spatial autocorrelation analyses, our results revealed that while males did not present a significant structure at this geographic scale, females were significantly more similar to each other when they were collected in the same sampling point. These results support the sex-handicap hypothesis, and we suggest that widening dispersal studies to other isopods or crustaceans, displaying varying levels of parental care but differing in their ecology or mating system, might shed light on the processes underlying the evolution of sex-biased dispersal.
0
0
0
0
1
0
Fast transforms over finite fields of characteristic two
An additive fast Fourier transform over a finite field of characteristic two efficiently evaluates polynomials at every element of an $\mathbb{F}_2$-linear subspace of the field. We view these transforms as performing a change of basis from the monomial basis to the associated Lagrange basis, and consider the problem of performing the various conversions between these two bases, the associated Newton basis, and the "novel" basis of Lin, Chung and Han (FOCS 2014). Existing algorithms are divided between two families: those designed for arbitrary subspaces, and more efficient algorithms designed for specially constructed subspaces of fields with degree equal to a power of two. We generalise techniques from both families to provide new conversion algorithms that may be applied to arbitrary subspaces, but which benefit equally from the specially constructed subspaces. We then construct subspaces of fields with smooth degree for which our algorithms provide better performance than existing algorithms.
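For orientation, the sketch below does the naive version of what an additive FFT accelerates: evaluating a polynomial at every element of an $\mathbb{F}_2$-linear subspace of $GF(2^8)$ (here with the AES reduction polynomial, an arbitrary choice). The naive cost is $O(2^d \cdot \deg f)$ field multiplications, versus quasi-linear for the transforms discussed in the paper.

```python
from itertools import product

MOD = 0b100011011   # x^8 + x^4 + x^3 + x + 1, the AES field polynomial

def gf_mul(a, b, mod=MOD, m=8):
    # Shift-and-add multiplication in GF(2^m) with reduction by mod.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:
            a ^= mod
    return r

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[i] is the coefficient of x^i.
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c
    return acc

basis = [0x01, 0x02, 0x04]   # basis of a dimension-3 F2-linear subspace
subspace = [b0 ^ b1 ^ b2
            for b0, b1, b2 in product(*[(0, v) for v in basis])]
f = [0x57, 0x83, 0x1b]       # f(x) = 0x1b x^2 + 0x83 x + 0x57
print([poly_eval(f, x) for x in subspace])   # the transform's output values
```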
1
0
0
0
0
0
Universality of group embeddability
Working in the framework of Borel reducibility, we study various notions of embeddability between groups. We prove that the embeddability between countable groups, the topological embeddability between (discrete) Polish groups, and the isometric embeddability between separable groups with a bounded bi-invariant complete metric are all invariantly universal analytic quasi-orders. This strengthens some results from [Wil14] and [FLR09].
0
0
1
0
0
0
Plenoptic Monte Carlo Object Localization for Robot Grasping under Layered Translucency
In order to fully function in human environments, robot perception will need to account for the uncertainty caused by translucent materials. Translucency poses several open challenges in the form of transparent objects (e.g., drinking glasses), refractive media (e.g., water), and diffuse partial occlusions (e.g., objects behind stained glass panels). This paper presents Plenoptic Monte Carlo Localization (PMCL) as a method for localizing object poses in the presence of translucency using plenoptic (light-field) observations. We propose a new depth descriptor, the Depth Likelihood Volume (DLV), and its use within a Monte Carlo object localization algorithm. We present results of localizing and manipulating objects with translucent materials and objects occluded by layers of translucency. Our PMCL implementation uses observations from a Lytro first generation light field camera to allow a Michigan Progress Fetch robot to perform grasping.
1
0
0
0
0
0
CUR Decompositions, Similarity Matrices, and Subspace Clustering
A general framework for solving the subspace clustering problem using the CUR decomposition is presented. The CUR decomposition provides a natural way to construct similarity matrices for data that come from a union of unknown subspaces $\mathscr{U}=\bigcup_{i=1}^{M}S_i$. The similarity matrices thus constructed give the exact clustering in the noise-free case. Additionally, this decomposition gives rise to many distinct similarity matrices from a given set of data, which allow enough flexibility to perform accurate clustering of noisy data. We also show that two known methods for subspace clustering can be derived from the CUR decomposition. An algorithm based on the theoretical construction of similarity matrices is presented, and experiments on synthetic and real data are presented to test the method. Additionally, an adaptation of our CUR based similarity matrices is utilized to provide a heuristic algorithm for subspace clustering; this algorithm yields the best overall performance to date for clustering the Hopkins155 motion segmentation dataset.
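A small numerical sketch of the CUR decomposition at the heart of this framework: sample columns $C$ and rows $R$ of the data matrix and set $U = C^{+} X R^{+}$; when the sampled columns and rows span the column and row spaces, $CUR$ reconstructs $X$ exactly. How the paper turns such factorizations into similarity matrices for clustering is not reproduced here.

```python
import numpy as np

def cur(X, cols, rows):
    # CUR factorization with U chosen as pinv(C) @ X @ pinv(R).
    C = X[:, cols]
    R = X[rows, :]
    U = np.linalg.pinv(C) @ X @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
# Data from a union of two 2-dimensional subspaces of R^10 (so rank(X) <= 4).
B1, B2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
X = np.hstack([B1 @ rng.normal(size=(2, 20)), B2 @ rng.normal(size=(2, 20))])

C, U, R = cur(X, cols=rng.choice(40, size=8, replace=False), rows=np.arange(10))
# Reconstruction error is (numerically) zero whenever the sampled columns
# span the column space of X; with 8 random columns this holds generically.
print(np.linalg.norm(X - C @ U @ R))
```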
1
0
0
1
0
0
Approximating Geometric Knapsack via L-packings
We study the two-dimensional geometric knapsack problem (2DK) in which we are given a set of n axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack (without rotating items). The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. In this paper, we break the 2 approximation barrier, achieving a polynomial-time (17/9 + \epsilon) < 1.89 approximation, which improves to (558/325 + \epsilon) < 1.72 in the cardinality case. Essentially all prior work on 2DK approximation packs items inside a constant number of rectangular containers, where items inside each container are packed using a simple greedy strategy. We deviate for the first time from this setting: we show that there exists a large profit solution where items are packed inside a constant number of containers plus one L-shaped region at the boundary of the knapsack which contains items that are high and narrow and items that are wide and thin. As a second major and the main algorithmic contribution of this paper, we present a PTAS for this case. We believe that this will turn out to be useful in future work in geometric packing problems. We also consider the variant of the problem with rotations (2DKR), where items can be rotated by 90 degrees. Also, in this case, the best-known polynomial-time approximation factor (even for the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2 + \epsilon)-approximation for 2DKR, which improves to (4/3 + \epsilon) in the cardinality case.
1
0
0
0
0
0
Impact and mitigation strategy for future solar flares
It is widely established that extreme space weather events associated with solar flares are capable of causing widespread technological damage. We develop a simple mathematical model to assess the economic losses arising from these phenomena over time. We demonstrate that the economic damage is characterized by an initial period of power-law growth, followed by exponential amplification and eventual saturation. We outline a mitigation strategy to protect our planet by setting up a magnetic shield to deflect charged particles at the Lagrange point L$_1$, and demonstrate that this approach appears to be realizable in terms of its basic physical parameters. We conclude our analysis by arguing that shielding strategies adopted by advanced civilizations will lead to technosignatures that are detectable by upcoming missions.
0
1
0
0
0
0
Empirical Bayes Matrix Completion
We develop an empirical Bayes (EB) algorithm for the matrix completion problem. The EB algorithm is motivated by the singular value shrinkage estimator for matrix means of Efron and Morris (1972). Since the EB algorithm is essentially the EM algorithm applied to a simple model, it does not require heuristic parameter tuning other than tolerance. Numerical results demonstrate that the EB algorithm achieves a good trade-off between accuracy and efficiency compared to existing algorithms, and that it works particularly well when the difference between the number of rows and columns is large. Application to real data also shows the practical utility of the EB algorithm.
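The Efron-Morris estimator that motivates the EB algorithm can be sketched in a few lines: shrink each singular value of the observed matrix by a factor depending on the noise level. The shrinkage factor used below is the classical one for a fully observed Gaussian-noise matrix; the paper's EB algorithm additionally handles missing entries via an EM iteration, which is omitted here.

```python
import numpy as np

def efron_morris_shrinkage(Y, sigma2=1.0):
    # Shrink singular values: s -> s * max(0, 1 - sigma2*(n - p - 1)/s^2),
    # for an n x p matrix with n >= p; an Efron--Morris (1972) style rule
    # for the fully observed matrix-mean problem (illustrative only).
    n, p = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    factor = np.clip(1.0 - sigma2 * (max(n, p) - min(n, p) - 1) / s**2,
                     0.0, None)
    return U @ np.diag(s * factor) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20))  # low-rank signal
Y = A + rng.normal(size=(50, 20))                        # noisy observation
print(np.linalg.norm(efron_morris_shrinkage(Y) - A),     # shrunk estimate
      np.linalg.norm(Y - A))                             # raw observation
```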
0
0
1
1
0
0
Excitonic Instability and Pseudogap Formation in Nodal Line Semimetal ZrSiS
Electron correlation effects are studied in ZrSiS using a combination of first-principles and model approaches. We show that basic electronic properties of ZrSiS can be described within a two-dimensional lattice model of two nested square lattices. High degree of electron-hole symmetry characteristic for ZrSiS is one of the key features of this model. Having determined model parameters from first-principles calculations, we then explicitly take electron-electron interactions into account and show that at moderately low temperatures ZrSiS exhibits excitonic instability, leading to the formation of a pseudogap in the electronic spectrum. The results can be understood in terms of Coulomb-interaction-assisted pairing of electrons and holes reminiscent to that of an excitonic insulator. Our finding allows us to provide a physical interpretation to the unusual mass enhancement of charge carriers in ZrSiS recently observed experimentally.
0
1
0
0
0
0
The Riemannian Geometry of Deep Generative Models
Deep generative models learn a mapping from a low dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold. However, further investigation into this phenomenon is warranted, to identify if there are other architectures or datasets where curvature plays a more prominent role. We believe that exploring the Riemannian geometry of deep generative models, using the tools developed in this paper, will be an important step in understanding the high-dimensional, nonlinear spaces these models learn.
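A compact way to compute the geodesics discussed above is to minimize the discrete path energy of the decoded curve with respect to the interior latent points; the decoder below is an untrained stand-in for a real generative model, and the optimizer settings are arbitrary.

```python
import torch

# Hypothetical decoder g: latent R^2 -> data R^784 (stand-in for a trained model).
g = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 784))

z0, z1 = torch.randn(2), torch.randn(2)
T = 16                                            # number of path segments
ts = torch.linspace(0, 1, T + 1)[1:-1].unsqueeze(1)
# Initialize the path as the straight line in latent space, then optimize.
path = ((1 - ts) * z0 + ts * z1).clone().requires_grad_(True)
opt = torch.optim.Adam([path], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    full = torch.cat([z0.unsqueeze(0), path, z1.unsqueeze(0)])
    pts = g(full)                                 # map the path to data space
    energy = ((pts[1:] - pts[:-1]) ** 2).sum()    # discrete path energy
    energy.backward()
    opt.step()

print(float(energy))   # minimizers of the path energy are discrete geodesics
```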
1
0
0
1
0
0
Portable, high-performance containers for HPC
Building and deploying software on high-end computing systems is a challenging task. High performance applications have to reliably run across multiple platforms and environments, and make use of site-specific resources while resolving complicated software-stack dependencies. Containers are a type of lightweight virtualization technology that attempt to solve this problem by packaging applications and their environments into standard units of software that are: portable, easy to build and deploy, have a small footprint, and low runtime overhead. In this work we present an extension to the container runtime of Shifter that provides containerized applications with a mechanism to access GPU accelerators and specialized networking from the host system, effectively enabling performance portability of containers across HPC resources. The presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested on non-HPC commodity hardware, e.g. the laptop or workstation of a researcher.
1
0
0
0
0
0
Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution
Adversarially trained deep neural networks have significantly improved performance of single image super resolution, by hallucinating photorealistic local textures, thereby greatly reducing the perception difference between a real high resolution image and its super resolved (SR) counterpart. However, application to medical imaging requires preservation of diagnostically relevant features while refraining from introducing any diagnostically confusing artifacts. We propose using a deep convolutional super resolution network (SRNet) trained for (i) minimising reconstruction loss between the real and SR images, and (ii) maximally confusing learned relativistic visual Turing test (rVTT) networks to discriminate between (a) pair of real and SR images (T1) and (b) pair of patches in real and SR selected from region of interest (T2). The adversarial loss of T1 and T2 while backpropagated through SRNet helps it learn to reconstruct pathorealism in the regions of interest such as white blood cells (WBC) in peripheral blood smears or epithelial cells in histopathology of cancerous biopsy tissues, which are experimentally demonstrated here. Experiments performed for measuring signal distortion loss using peak signal to noise ratio (pSNR) and structural similarity (SSIM) with variation of SR scale factors, impact of rVTT adversarial losses, and impact on reporting using SR on a commercially available artificial intelligence (AI) digital pathology system substantiate our claims.
1
0
0
0
0
0
Quantum dynamics of a hydrogen-like atom in a time-dependent box: non-adiabatic regime
We consider a hydrogen atom confined in a time-dependent trap created by a spherical impenetrable box with time-dependent radius. For such a model we study the behavior of the atomic electron under the (non-adiabatic) dynamical confinement caused by the rapidly moving wall of the box. The expectation values of the total and kinetic energy, average force, pressure and coordinate are analyzed as a function of time for linearly expanding, contracting and harmonically breathing boxes. It is shown that a linearly expanding box leads to de-excitation of the atom, while a rapidly contracting box exerts very high pressure on the atom and drives the atomic electron into an unbound state. In a harmonically breathing box, diffusive excitation of the atomic electron may occur, in analogy with that of an atom in a microwave field.
0
1
0
0
0
0
Richardson's solutions in the real- and complex-energy spectrum
The constant pairing Hamiltonian admits exact solutions worked out by Richardson in the early sixties. This exact solution of the pairing Hamiltonian regained interest at the end of the nineties. Discrete complex-energy states were included in Richardson's solutions by Hasegawa et al. [1]. In this contribution we reformulate the problem of determining the exact eigenenergies of the pairing Hamiltonian when the continuum is included through the single-particle level density. The solutions with discrete complex-energy states are recovered by analytic continuation of the equations to the complex energy plane. This formulation may be applied to loosely bound systems where correlations with the continuum energy spectrum are really important. Some details are given to show how the many-body eigenenergy emerges as a sum of the pair energies.
0
1
0
0
0
0
Complexity Results for MCMC derived from Quantitative Bounds
This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in the high-dimensional setting. We propose a modified drift-and-minorization approach, which establishes a generalized drift condition defined on a subset of the state space. The subset is called the "large set", and is chosen to rule out some "bad" states which have poor drift properties when the dimension gets large. Using the "large set" together with a "centered" drift function, a quantitative bound can be obtained which can be translated into a tight complexity bound. As a demonstration, we analyze a certain realistic Gibbs sampler algorithm and obtain a complexity upper bound for the mixing time, which shows that the number of iterations required for the Gibbs sampler to converge is constant. It is our hope that this modified drift-and-minorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains.
0
0
0
1
0
0
Symmetries and regularity for holomorphic maps between balls
Let $f:{\mathbb B}^n \to {\mathbb B}^N$ be a holomorphic map. We study subgroups $\Gamma_f \subseteq {\rm Aut}({\mathbb B}^n)$ and $T_f \subseteq {\rm Aut}({\mathbb B}^N)$. When $f$ is proper, we show both these groups are Lie subgroups. When $\Gamma_f$ contains the center of ${\bf U}(n)$, we show that $f$ is spherically equivalent to a polynomial. When $f$ is minimal we show that there is a homomorphism $\Phi:\Gamma_f \to T_f$ such that $f$ is equivariant with respect to $\Phi$. To do so, we characterize minimality via the triviality of a third group $H_f$. We relate properties of ${\rm Ker}(\Phi)$ to older results on invariant proper maps between balls. When $f$ is proper but completely non-rational, we show that either both $\Gamma_f$ and $T_f$ are finite or both are noncompact.
0
0
1
0
0
0
On the treatment of $\ell$-changing proton-hydrogen Rydberg atom collisions
Energy-conserving, angular momentum-changing collisions between protons and highly excited Rydberg hydrogen atoms are important for precise understanding of atomic recombination at the photon decoupling era, and the elemental abundance after primordial nucleosynthesis. Early approaches to $\ell$-changing collisions used perturbation theory for only dipole-allowed ($\Delta \ell=\pm 1$) transitions. An exact non-perturbative quantum mechanical treatment is possible, but it comes at computational cost for highly excited Rydberg states. In this note we show how to obtain a semi-classical limit that is accurate and simple, and develop further physical insights afforded by the non-perturbative quantum mechanical treatment.
0
1
0
0
0
0
Complex Economic Activities Concentrate in Large Cities
Why do some economic activities agglomerate more than others? And, why does the agglomeration of some economic activities continue to increase despite recent developments in communication and transportation technologies? In this paper, we present evidence that complex economic activities concentrate more in large cities. We find this to be true for technologies, scientific publications, industries, and occupations. Using historical patent data, we show that the urban concentration of complex economic activities has been continuously increasing since 1850. These findings suggest that the increasing urban concentration of jobs and innovation might be a consequence of the growing complexity of the economy.
1
0
0
0
0
0